Cross-reference analytics, experiments, session replays, and feedback to surface the highest-impact product improvements. Uses mcp__Amplitude__query_amplitude_data, mcp__Amplitude__get_session_replays, and mcp__Amplitude__get_feedback_insights.
npx claudepluginhub kienbui1995/magic-powers --plugin magic-powers

This skill uses the workspace's default tool permissions.
- Quarterly planning requires a data-driven list of the highest-impact improvements to prioritize
Before pulling any data, establish clear boundaries for the discovery:
The tighter the scope, the more actionable the output. Discovery across "the entire product" produces generic recommendations.
Use mcp__Amplitude__query_amplitude_data to identify where users drop off, disengage, or fail to return.
Quantitative signals to investigate:
For each signal, quantify the scope: how many users are affected, what percentage of total users, what is the revenue or retention impact if this is fixed?
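As a minimal sketch of the quantification step, a drop-off calculation over funnel step counts might look like this (the step names and user counts are hypothetical, not real Amplitude output):

```python
# Hypothetical funnel: number of users reaching each step
funnel = [
    ("Visited pricing page", 10_000),
    ("Started signup", 4_200),
    ("Completed signup", 2_100),
    ("Activated (first key action)", 900),
]

# For each adjacent pair of steps, report users lost and the drop-off rate
for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    drop = users - next_users
    print(f"{step} -> {next_step}: {drop} users lost "
          f"({drop / users:.0%} drop-off)")
```

The percentage alone is not enough: the absolute count ("5,800 users lost at this step") is what makes the revenue or retention impact concrete.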
Use mcp__Amplitude__get_session_replays to watch recordings of users who exhibit the patterns found in Phase 2. Focus on:
Watch 5-10 sessions per behavioral pattern. You are looking for the recurring theme — the one thing that appears in 3+ sessions. One session is an outlier; three sessions is a pattern.
Use mcp__Amplitude__get_feedback_insights to surface themes from user feedback (in-app surveys, NPS comments, support tickets). Cross-reference feedback themes with the quantitative drop-off points: when both sources point to the same problem, confidence is high.
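The cross-referencing logic can be sketched as a simple intersection (the theme names below are hypothetical placeholders for whatever the two phases actually surface):

```python
# Hypothetical outputs of the quantitative and qualitative phases
quant_problems = {"checkout timeout", "signup form length", "slow dashboard"}
feedback_themes = {"checkout timeout", "confusing pricing", "signup form length"}

# Confidence is high where both evidence sources point at the same problem
high_confidence = quant_problems & feedback_themes
print(sorted(high_confidence))
```

Problems that appear in only one source are still worth noting, but they carry a lower Confidence score in the ICE ranking below.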
Before recommending a solution, review what has already been tested. Use mcp__Amplitude__get_experiments (via analyze-experiment skill if needed) to understand:
Avoid recommending solutions that have already been disproven by experiment data. This is critical context that prevents wasted investment.
For each opportunity identified, score it using the ICE framework:
ICE Score = Impact × Confidence × Ease, with each factor scored 1-10 (maximum score 1000). Rank all opportunities by ICE score.
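The scoring and ranking step can be sketched in Python; the `Opportunity` class and the sample opportunities are hypothetical illustrations, not part of the skill:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    impact: int      # 1-10: how much the target metric moves if this works
    confidence: int  # 1-10: strength of the multi-source evidence
    ease: int        # 1-10: inverse of the effort required to build it

    @property
    def ice(self) -> int:
        # ICE Score = Impact x Confidence x Ease (max 10 * 10 * 10 = 1000)
        return self.impact * self.confidence * self.ease

# Hypothetical opportunities surfaced by the discovery phases
opportunities = [
    Opportunity("Simplify signup form", impact=8, confidence=7, ease=6),
    Opportunity("Fix checkout timeout", impact=9, confidence=9, ease=4),
    Opportunity("Redesign dashboard", impact=6, confidence=3, ease=2),
]

# Rank by ICE score, highest first
ranked = sorted(opportunities, key=lambda o: o.ice, reverse=True)
for o in ranked:
    print(f"{o.name}: ICE={o.ice}")
```

Note how the multiplicative formula punishes weak evidence: a big, easy idea with Confidence 3 still ranks below a smaller idea backed by converging data.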
For each opportunity in the ranked list, write an opportunity brief:
Opportunity: [Name]
Evidence: [Quantitative signal + qualitative signal + experiment learning]
ICE Score: [I=X, C=X, E=X, Total=XXX]
Hypothesis: If we [do X], then [metric Y] will improve by [Z%] because [reason].
Suggested next step: [Build prototype / Run experiment / Fix bug / User interview]
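The template above can also be filled programmatically; here is a small sketch, where the field values are hypothetical examples:

```python
def render_brief(name, evidence, i, c, e, hypothesis, next_step):
    """Fill the opportunity-brief template with concrete values."""
    total = i * c * e
    return (
        f"Opportunity: {name}\n"
        f"Evidence: {evidence}\n"
        f"ICE Score: [I={i}, C={c}, E={e}, Total={total}]\n"
        f"Hypothesis: {hypothesis}\n"
        f"Suggested next step: {next_step}"
    )

brief = render_brief(
    "Simplify signup form",
    "50% drop-off at signup step + replays show field confusion + no prior experiment",
    i=8, c=7, e=6,
    hypothesis=(
        "If we cut the form to three fields, then signup completion will "
        "improve by 10% because replays show users abandoning at optional fields."
    ),
    next_step="Run experiment",
)
print(brief)
```

Keeping the Evidence line to one sentence per source (quantitative, qualitative, experiment) makes the brief scannable during planning reviews.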
- mcp__Amplitude__query_amplitude_data — pull funnel drop-off, feature usage, churn signals
- mcp__Amplitude__get_session_replays — find and review session recordings of struggling users
- mcp__Amplitude__get_feedback_insights — surface themes from user feedback
- mcp__Amplitude__get_experiments — review what has already been tested in the product area
- mcp__Amplitude__get_context — get projectId and organization context (always first)
- mcp__Amplitude__get_feedback_comments — access specific feedback verbatim quotes for evidence

The output is an opportunity brief — a prioritized list of product improvements, each backed by multi-source evidence.
Structure: