Documents A/B test or experiment results with statistical analysis, segment insights, learnings, recommendations, and next steps after experiments conclude.
An experiment results document captures what happened when you tested a hypothesis, including statistical outcomes, segment analysis, learnings, and clear recommendations. Good results documentation turns individual experiments into organizational knowledge that improves future decision-making.
When asked to document experiment results, follow these steps:
1. **Summarize the Experiment.** Provide context: what was tested, when it ran, and how much traffic it received. Link to the original experiment design document if one exists.
2. **Restate the Hypothesis.** Remind readers what you believed would happen and why. This frames how the results should be interpreted.
3. **Present Primary Results.** Show the primary metric outcome clearly: what were the values for control and treatment? Include statistical significance (p-value), confidence intervals, and sample sizes. Be honest about whether results are conclusive.
4. **Analyze Secondary Metrics.** Report guardrail metrics to confirm the change caused no unintended harm. Note any secondary metrics that moved unexpectedly, both positive and negative.
5. **Segment the Data.** Look for differential effects across user segments (platform, tenure, plan type, etc.). Overall results can mask important segment-level insights.
6. **Extract Learnings.** What did you learn beyond the numbers? Include surprising findings, questions raised, and implications for the product hypothesis. Negative results are valuable learnings too.
7. **Make a Recommendation.** Be clear: should you ship, iterate, or kill? Support the recommendation with evidence. If the decision is nuanced, explain the trade-offs.
8. **Define Next Steps.** Specify what happens now: engineering work to ship, follow-up experiments, metrics to continue monitoring, or documentation to update.
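The statistical pieces of steps 3 and 5 can be sketched as a two-proportion z-test applied overall and per segment. This is a minimal illustration assuming a conversion-style primary metric; the function name and all counts below are hypothetical, not part of the skill:

```python
import math

def two_proportion_ztest(conv_c, n_c, conv_t, n_t):
    """Two-sided pooled z-test comparing treatment vs. control conversion rates."""
    p_c, p_t = conv_c / n_c, conv_t / n_t
    p_pool = (conv_c + conv_t) / (n_c + n_t)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
    z = (p_t - p_c) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal CDF
    # 95% CI on the absolute lift uses the unpooled standard error
    se_diff = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    ci = (p_t - p_c - 1.96 * se_diff, p_t - p_c + 1.96 * se_diff)
    return z, p_value, ci

# Hypothetical counts: overall result plus one segment cut
segments = {
    "overall": (500, 10_000, 600, 10_000),
    "mobile":  (200, 4_000, 290, 4_000),
}
for name, (cc, nc, ct, nt) in segments.items():
    z, p, ci = two_proportion_ztest(cc, nc, ct, nt)
    print(f"{name}: z={z:.2f}, p={p:.4f}, 95% CI=({ci[0]:+.4f}, {ci[1]:+.4f})")
```

Reporting the p-value together with the confidence interval and sample sizes, per segment, gives readers everything step 3 asks for in one table-ready form.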
Use the template in references/TEMPLATE.md to structure the output.
Before finalizing, verify:

- Primary metric results include control and treatment values, sample sizes, p-values, and confidence intervals
- Guardrail metrics are reported, including any unexpected movements
- Segment-level effects have been checked for differences from the overall result
- The recommendation (ship, iterate, or kill) is explicit and supported by the evidence
- Next steps are concrete and assigned
See references/EXAMPLE.md for a completed example.