A/B tests content variations by creating named tests, logging variants with quality scores across prompt approaches, headlines, or strategies, and recommending winners.
From digital-marketing-pro. Install with:

npx claudepluginhub indranilbanerjee/digital-marketing-pro --plugin digital-marketing-pro

This skill uses the workspace's default tool permissions.
A/B test content output variations by comparing quality scores across different prompt approaches, headline styles, CTA phrasing, or complete content strategy variations. Create named tests, log variants with their evaluation scores, and determine which approach produces the best quality results.
This command brings experimental rigor to content creation. Instead of guessing which headline style, subject line approach, or content structure works best, you run a structured test: define the experiment, log each variant with its quality scores, and get a statistically grounded recommendation on which approach to adopt. Useful for testing subject line styles (curiosity vs. benefit-driven), headline approaches (question vs. statement vs. how-to), CTA phrasing (urgency vs. value vs. social proof), tone variations (formal vs. conversational), or complete content strategy A/B comparisons.
The user must provide (or will be prompted for):
- action: create (set up a new test), log (add a variant to an existing test), results (get comparison and winner), or list (show all tests)
- test name: required for create, log, and results
- variant content: required for log
- variant label: required for log
- variant description: required for log

First, load brand context. Check ~/.claude-marketing/brands/_active-brand.json for the active slug, then load ~/.claude-marketing/brands/{slug}/profile.json. Apply brand voice, compliance rules for target markets (skills/context-engine/compliance-rules.md), and industry context. Check for guidelines at ~/.claude-marketing/brands/{slug}/guidelines/_manifest.json; if present, load restrictions and relevant category files (voice-and-tone rules, messaging hierarchy, channel style guides). Check for custom templates at ~/.claude-marketing/brands/{slug}/templates/. Check for agency SOPs at ~/.claude-marketing/sops/. If no brand exists, ask: "Set up a brand first (/dm:brand-setup)?" or proceed with defaults.

create action: Set up a new test by running python scripts/prompt-ab-tester.py --brand {slug} --action create-test --test-name "{name}". This initializes the test record with metadata (creation date, brand, content type) and prepares it for variant logging. Confirm the test was created and remind the user to log variants with /dm:prompt-test using the log action.

log action: First evaluate the variant content for quality by running python scripts/eval-runner.py --brand {slug} --action run-quick --text "{content_or_path}" --content-type "{type}" (pass --evidence "{evidence_path}" if provided). This produces per-dimension scores (clarity, persuasion, brand alignment, readability, compliance, etc.) and a composite score. Then log the variant with its scores by running python scripts/prompt-ab-tester.py --brand {slug} --action log-variant --test-name "{name}" --variant-label "{label}" --variant-description "{description}" --scores "{scores_json}".
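The brand-context lookup described above can be sketched in Python. This is a minimal sketch, not the skill's actual implementation; in particular, the "slug" key inside _active-brand.json is an assumption, since that file's schema is not documented here.

```python
import json
from pathlib import Path

ROOT = Path.home() / ".claude-marketing"

def load_brand_context(root: Path = ROOT) -> dict:
    """Resolve the active brand slug and load its profile, mirroring the
    lookup order described above. Returns None fields when no brand exists,
    in which case the caller should offer /dm:brand-setup or use defaults."""
    active = root / "brands" / "_active-brand.json"
    if not active.exists():
        return {"slug": None, "profile": None}
    # "slug" is an assumed key name for the active-brand pointer file.
    slug = json.loads(active.read_text())["slug"]
    brand_dir = root / "brands" / slug
    ctx = {
        "slug": slug,
        "profile": json.loads((brand_dir / "profile.json").read_text()),
    }
    # Guidelines are optional; only load the manifest if it is present.
    manifest = brand_dir / "guidelines" / "_manifest.json"
    if manifest.exists():
        ctx["guidelines"] = json.loads(manifest.read_text())
    return ctx
```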
Present the individual variant scores to the user immediately so they can see how this variant performed before logging additional variants.

results action: Pull the full comparison by running python scripts/prompt-ab-tester.py --brand {slug} --action get-results --test-name "{name}". Analyze the results: compare composite and per-dimension scores across variants and recommend the winning approach.
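A minimal sketch of how a winner might be chosen from logged composite scores. The real prompt-ab-tester.py comparison (tie-breaking, per-dimension analysis, any significance checks) is not shown here, and the variant labels and scores below are illustrative only.

```python
from statistics import mean

def pick_winner(variants: dict[str, list[float]]) -> tuple[str, float]:
    """Given composite scores per variant label (one score per logged run),
    return the label with the highest mean composite score."""
    best_label, best_scores = max(variants.items(), key=lambda kv: mean(kv[1]))
    return best_label, round(mean(best_scores), 2)

# Illustrative data: two subject-line variants, each logged twice.
scores = {
    "curiosity-subject": [7.8, 8.1],
    "benefit-subject": [8.6, 8.4],
}
label, avg = pick_winner(scores)  # benefit-subject wins with mean 8.5
```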
list action: Run python scripts/prompt-ab-tester.py --brand {slug} --action list-tests to show all tests for this brand, their status (in-progress, completed), variant count, and creation date.

Output: A structured test report containing the logged variants, their quality scores, and the recommended winning approach.
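For illustration, here is a hypothetical test record and the list-tests style summary row derived from it. The actual on-disk format used by prompt-ab-tester.py is an assumption; only the summary fields (name, status, variant count, creation date) come from the description above.

```python
# Hypothetical shape of a stored test record; the real file layout is not
# documented here.
test_record = {
    "test_name": "newsletter-subject-lines",
    "brand": "acme",
    "status": "completed",
    "created": "2025-01-15",
    "variants": [
        {"label": "curiosity", "composite": 7.9},
        {"label": "benefit", "composite": 8.5},
    ],
}

def summarize(record: dict) -> dict:
    """Build a list-tests style summary row: name, status, variant count,
    and creation date."""
    return {
        "test_name": record["test_name"],
        "status": record["status"],
        "variants": len(record["variants"]),
        "created": record["created"],
    }

row = summarize(test_record)
```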