Guides creation of A/B test experiments via 3-step flow: hypothesis and feature flag definition, rollout configuration, analytics setup. Triggers on new experiment or A/B test requests, or before experiment-create calls.
npx claudepluginhub anthropics/claude-plugins-official --plugin posthog
This skill uses the workspace's default tool permissions.
This skill walks through the 3-step flow for creating a new A/B test experiment.
Create the experiment as a draft quickly, then iterate on metrics and configuration. The user gets a tangible draft immediately and can refine it.
Gather these before calling experiment-create:
- Name
- Feature flag key
- Description (the hypothesis)
- Type: use "product". The "web" value is reserved for no-code experiments configured visually with the PostHog toolbar in a browser; it cannot be meaningfully driven via MCP. If a user asks for a no-code/toolbar experiment, point them to the PostHog UI instead of creating one here.

If the user gives enough context to infer these, don't ask; just proceed.
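For example, a request like "A/B test whether the new one-page checkout beats the current flow" already implies every field. A sketch of the inference (hypothetical request; every value below is illustrative, not a required naming scheme):

  // Hypothetical inference from the request above; all values are illustrative.
  const inferred = {
    name: "One-page checkout vs. current flow",
    feature_flag_key: "one-page-checkout",
    description: "Hypothesis: a one-page checkout will increase completed purchases",
    type: "product" // "web" is reserved for no-code toolbar experiments
  }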
This step covers rollout configuration.
Before asking any rollout question, load configuring-experiment-rollout. The disambiguation wording, recommendations, and post-answer branches live there — do not formulate rollout questions yourself, and do not assume an example you remember covers the user's path.
The key decision points are covered in detail by configuring-experiment-rollout.
If the user doesn't mention rollout specifics, use defaults: 50/50 control/test, 100% rollout.
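When the user does specify rollout details, the same parameters flex. A sketch of a hypothetical three-variant experiment exposed to only 20% of users (all values illustrative):

  // Hypothetical non-default rollout: three variants, 20% exposure.
  // split_percent values must still sum to 100 across variants.
  const parameters = {
    feature_flag_variants: [
      { key: "control", name: "Control", split_percent: 34 },
      { key: "test-a", name: "Test A", split_percent: 33 },
      { key: "test-b", name: "Test B", split_percent: 33 }
    ],
    rollout_percentage: 20 // only 20% of all users enter the experiment at all
  }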
This step covers analytics and metrics. Load the configuring-experiment-analytics skill for guidance.
Do NOT configure metrics on creation. Metrics are not passed to experiment-create; they are added afterwards via experiment-update. This keeps the creation call lightweight.
When the user specifies metrics upfront, acknowledge them and add them immediately after creation. When they don't, create the draft and then guide them through metric setup as a follow-up.
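A sketch of that create-then-update sequencing, assuming hypothetical call shapes; the real metric schema comes from the configuring-experiment-analytics skill:

  // Hypothetical sequencing sketch: experimentCreate/experimentUpdate stand in
  // for the experiment-create and experiment-update tool calls; "id" is an
  // assumed response field.
  type ToolCall = (args: Record<string, unknown>) => Promise<{ id: number }>

  async function createThenAddMetrics(
    experimentCreate: ToolCall,
    experimentUpdate: ToolCall
  ) {
    const created = await experimentCreate({
      name: "One-page checkout vs. current flow",
      feature_flag_key: "one-page-checkout",
      description: "Hypothesis: a one-page checkout will increase completed purchases"
      // no metrics here: experiment-create does not accept them
    })
    await experimentUpdate({
      experimentId: created.id,
      metrics: [] // metric objects, per the configuring-experiment-analytics skill
    })
  }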
Call experiment-create with:
{
  "name": "Descriptive experiment name",
  "feature_flag_key": "kebab-case-key",
  "description": "Hypothesis: [what you expect to happen]",
  "parameters": {
    "feature_flag_variants": [
      { "key": "control", "name": "Control", "split_percent": 50 },
      { "key": "test", "name": "Test", "split_percent": 50 }
    ],
    "rollout_percentage": 100
  }
}
Two different percentages — do NOT mix them up:
- feature_flag_variants[].split_percent: how users inside the experiment are split across variants (must sum to 100; an even split is recommended).
- parameters.rollout_percentage: what fraction of all users enter the experiment at all (0-100, defaults to 100).

Key details:
"control". Minimum 2, maximum 20 variants.rollout_percentage defaults to 100 if omitted.stats_config if the user requests Frequentist.Always show the experiment URL. The experiment-create response includes _posthogUrl — always display this link so the user can view and configure the experiment in the UI.
Remind the user to implement the feature flag in code. Link to the experiment page and say "implement the flag as shown here" — the experiment detail page shows implementation snippets for the user's SDK.
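As one possible shape, a minimal browser sketch using posthog-js (the experiment page shows the exact snippet for the user's SDK; the API key and flag key below are placeholders):

  import posthog from "posthog-js"

  // Placeholders: substitute the real project API key and the flag key
  // chosen during experiment creation.
  posthog.init("<project-api-key>", { api_host: "https://us.i.posthog.com" })

  // Flags load asynchronously; branch only once they are available.
  posthog.onFeatureFlags(() => {
    const variant = posthog.getFeatureFlag("kebab-case-key")
    if (variant === "test") {
      // render the test experience
    } else {
      // "control", or flags unavailable: render the default experience
    }
  })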
Guide through metrics if not yet configured — load the configuring-experiment-analytics skill.
Launch when ready — use the experiment-launch tool.