From agentkits-marketing
Interactively plans A/B tests for UI elements: collects scope, type, goal, and traffic level via questions, confirms the configuration, then generates the hypothesis, variants, metrics, and test design.
```
npx claudepluginhub aitytech/agentkits-marketing --plugin agentkits-marketing
```
- `/experiment` - Designs an A/B experiment from a hypothesis or design change, producing a document with a structured hypothesis, variants, metrics, sample size, duration, user flows, and analysis plan.
- `/experiment-design` - Designs an A/B test or experiment using a structured template from the measure-experiment-design skill, based on user context.
- `/experiment` - Designs and implements hypothesis-driven A/B experiments with power analysis, platform integration (Statsig, Optimizely, etc.), and statistical methods. Generates TS configs, assignment logic, exposure logging, docs, and commits. Supports analysis and audit flags.
- `/launch-experiment` - Converts an approved hypothesis into a fully instrumented A/B experiment: assembles variants, configures rollout ramps/guardrails, runs QA/approvals, and launches with monitoring and stakeholder notifications.
- `/analyze-test` - Analyzes A/B test results from data, screenshots, or descriptions: validates design and sample size, computes statistical significance and effect size, and recommends launch/extend/stop with a detailed report.
- `/launch-in-app-experiments` - Coordinates in-app experiments targeting activation, engagement, monetization, or retention. Produces an experiment brief, guardrail dashboard, and readout template.
## Prerequisites

Before running this command, ensure you have:

- [ ] Element or hypothesis to test identified
- [ ] Current baseline metrics known
- [ ] Traffic level understood
## Context Loading

Load these files first:

1. `./README.md` - Product context
2. `./docs/tests/` - Previous test results
3. `.claude/skills/ab-test-setup/SKILL.md` - Testing frameworks

## Language & Quality Standards

**CRITICAL**: Respond in the same language the user is using. If Vietnamese, respond in Vietnamese. If Spanish, respond in Spanish.
**Standards**: Token efficiency; sacrifice grammar for concision; list unresolved questions at the end.
**Skills**: Activate the ab-test-setup, marketing-psychology, and analytics-attribution skills.

**Components**: Reference `./.claude/components/interactive-questions.md`
Question: "What level of A/B test planning do you need?" Header: "Scope" MultiSelect: false
Options:
Question: "What type of element are you testing?" Header: "Type" MultiSelect: false
Options:
Question: "What's your primary goal?" Header: "Goal" MultiSelect: false
Options:
Question: "What's your current traffic level?" Header: "Traffic" MultiSelect: false
Options:
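The four prompts above can be sketched as plain data records feeding the summary table. This is an illustrative model only: apart from the Quick/Standard/Complete scope values (which appear in the summary table), every option list below is a hypothetical placeholder, since the command's actual options are not shown here.

```python
# Illustrative data model for the interactive planning questions.
# All option lists except "Scope" are hypothetical placeholders.
QUESTIONS = [
    {"question": "What level of A/B test planning do you need?",
     "header": "Scope", "multi_select": False,
     "options": ["Quick", "Standard", "Complete"]},
    {"question": "What type of element are you testing?",
     "header": "Type", "multi_select": False,
     "options": ["CTA button", "Headline", "Pricing layout"]},   # hypothetical
    {"question": "What's your primary goal?",
     "header": "Goal", "multi_select": False,
     "options": ["Increase signups", "Increase purchases"]},     # hypothetical
    {"question": "What's your current traffic level?",
     "header": "Traffic", "multi_select": False,
     "options": ["< 1k visits/week", "1k-10k", "> 10k"]},        # hypothetical
]

def answers_to_summary(answers: dict) -> str:
    """Render collected answers as the markdown configuration table."""
    rows = [("Test Element", answers["Type"]),
            ("Goal", answers["Goal"]),
            ("Traffic Level", answers["Traffic"]),
            ("Scope", answers["Scope"])]
    lines = ["| Parameter | Value |", "|-----------|-------|"]
    lines += [f"| {k} | {v} |" for k, v in rows]
    return "\n".join(lines)
```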
Display summary:
## A/B Test Configuration
| Parameter | Value |
|-----------|-------|
| Test Element | [selected type] |
| Goal | [selected goal] |
| Traffic Level | [selected traffic] |
| Scope | [Quick/Standard/Complete] |
Question: "Create this A/B test plan?" Header: "Confirm" MultiSelect: false
Options:
1. Hypothesis Formation
2. Test Design
3. Implementation Plan
4. Analysis Framework
| Task | Agent | Trigger |
|---|---|---|
| Test design | conversion-optimizer | Primary task |
| Psychology review | brainstormer | Behavioral insights |
| Technical setup | researcher | Implementation guidance |
| Analytics setup | mcp-manager | Tracking configuration |
**Format**: "If we [change], then [metric] will [direction] because [reason]"

**Example**: "If we change the CTA from 'Sign Up' to 'Start Free Trial', then signup rate will increase by 15% because it reduces perceived commitment."
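The if-then-because template can be treated as structured data so every generated hypothesis stays well-formed. The `Hypothesis` type and its field names below are illustrative, not part of the command itself:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """Structured if-then-because hypothesis (illustrative field names)."""
    change: str      # the modification being tested
    metric: str      # the primary metric expected to move
    direction: str   # expected movement, e.g. "increase by 15%"
    reason: str      # the behavioral rationale

    def __str__(self) -> str:
        return (f"If we {self.change}, then {self.metric} will "
                f"{self.direction} because {self.reason}")

# The example from the document, rebuilt from its parts:
h = Hypothesis(
    change="change the CTA from 'Sign Up' to 'Start Free Trial'",
    metric="signup rate",
    direction="increase by 15%",
    reason="it reduces perceived commitment",
)
```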
| Baseline Rate | 10% Lift | 20% Lift | 30% Lift |
|---|---|---|---|
| 2% | 19,000 | 4,800 | 2,200 |
| 5% | 7,700 | 1,900 | 900 |
| 10% | 3,900 | 1,000 | 450 |
| 20% | 2,000 | 500 | 230 |
Sample sizes are per variation at 95% statistical significance and 80% power; lift columns are relative to the baseline rate.
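Figures like those in the reference table can be approximated with the standard two-proportion power formula. This is a sketch under common assumptions (two-sided test, equal 50/50 split, lift relative to baseline); different calculators make different assumptions, so results may not match the table cell-for-cell.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate per-variant sample size for a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p1 = baseline
    p2 = baseline * (1 + relative_lift)            # expected variant rate
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

def estimated_duration_days(n_per_variant, daily_visitors, variants=2):
    """Days needed to fill every variant at the given eligible daily traffic."""
    return math.ceil(variants * n_per_variant / daily_visitors)
```

For example, a 10% baseline with a 20% relative lift needs roughly 3,800-3,900 visitors per variant by this formula; at 500 eligible visitors per day on a 50/50 split, that is about a 16-day test.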
## A/B Test: [Element]
**Hypothesis**: [If-then-because statement]
| Element | Value |
|---------|-------|
| Control (A) | [Current version] |
| Variant (B) | [New version] |
| Metric | [Primary metric] |
| Duration | [Estimated days] |
## A/B Test Plan: [Element]
**Hypothesis**: [Detailed if-then-because statement]
### Test Design
| Element | Value |
|---------|-------|
| Control (A) | [Description] |
| Variant (B) | [Description] |
| Primary Metric | [Metric name] |
| Secondary Metrics | [List] |
| Traffic Split | 50/50 |
| Sample Size | [Per variation] |
| Duration | [Estimated days] |
### Success Criteria
- Statistical significance: 95%
- Minimum detectable effect: [X%]
### Implementation Checklist
- [ ] Tracking setup
- [ ] QA verification
- [ ] Launch approval
[Include Standard + Full hypothesis analysis + Detailed sample size calculation + Segmentation plan + Analysis framework + Win/loss playbooks]
Before delivering the A/B test plan:

Save the test plan to: `./docs/tests/ab-test-[element]-[YYYY-MM-DD].md`
After A/B test setup, consider:
- `/cro:page` - Optimize test element
- `/analytics:funnel` - Analyze baseline funnel
- `/checklist:ab-testing` - Full testing framework