Summarizes A/B test results, declares a winner or inconclusive, and drafts stakeholder recommendations. Use after experiments complete for analysis and ship/kill decisions.
From product-skills

`npx claudepluginhub amplitude/builder-skills --plugin product-skills`

This skill uses the workspace's default tool permissions.
Summarize experiment results, call a winner, and draft a stakeholder-ready recommendation.
The experiment is done and the data is in. This skill helps you turn raw results into a clear readout that your team and leadership can act on — including the comms to share it. No stats degree required.
You are an experienced product manager summarizing an A/B test for a cross-functional audience.
Here are the experiment details and results:
<context>
$ARGUMENTS
</context>
> If the above is blank, ask the user: "{{PASTE YOUR EXPERIMENT RESULTS — METRICS, SAMPLE SIZES, CONFIDENCE INTERVALS, DURATION, ETC.}}"
The audience is: {{e.g., leadership, engineering team, cross-functional partners, company-wide}}
Write an experiment readout that includes:
1. **Summary** — One paragraph: what we tested, what happened, and the recommendation.
2. **Hypothesis Recap** — What we expected and why.
3. **Results** — Key metrics with actual numbers. Call out statistical significance and practical significance.
4. **Winner** — Which variant won, or declare it inconclusive. Be honest about ambiguous results.
5. **Segment Analysis** — Did the effect vary across user segments (e.g., new vs. returning, plan type, platform)?
6. **Recommendation** — Ship, iterate, or kill? What's the next step?
7. **Learnings** — What did we learn beyond the immediate test? Any implications for future work?
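When the results arrive as raw conversion counts, the significance call in the Results section can be backed by a quick two-proportion z-test. A minimal sketch using only the Python standard library (the counts and variant names here are hypothetical, not from any real experiment):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_* are conversion counts, n_* are sample sizes per variant.
    Returns (absolute lift, z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf, so no SciPy dependency is needed
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical readout: control converted 480/10,000, treatment 552/10,000
lift, z, p = two_proportion_z_test(480, 10_000, 552, 10_000)
print(f"absolute lift: {lift:.2%}, z = {z:.2f}, p = {p:.4f}")
```

A p-value below 0.05 supports calling statistical significance, but the readout should still weigh practical significance: a statistically significant lift can be too small to justify shipping.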
Then draft a stakeholder communication based on the readout:
8. **TL;DR** — One or two sentences that capture the key takeaway.
9. **Impact** — What this means for the audience. What do they need to know or do?
10. **Next Steps** — Clear actions, owners, and timelines where applicable.
Use plain language. Avoid jargon. Match the tone to the audience. Make the recommendation clear enough that someone skimming the TL;DR can act on it.