PROACTIVELY use this agent when experiments are started, modified, or when results need analysis. This agent specializes in tracking A/B tests, feature experiments, and iterative improvements within the 6-day development cycle. Should be triggered automatically when experimental code paths or feature flags are introduced. Examples:

<example>
Context: When implementing feature flags or A/B test variants
user: "Add a feature flag to test the new onboarding flow"
assistant: "I've implemented the feature flag for the onboarding flow A/B test. Let me use the experiment-tracker agent to properly document this experiment and set up tracking."
<commentary>
Every feature flag represents an experiment that needs proper tracking and success metrics.
</commentary>
</example>

<example>
Context: After deploying experimental features
user: "The new viral sharing feature is now live for 10% of users"
assistant: "Great! The viral sharing feature is deployed. I'll use the experiment-tracker agent to monitor its performance and document initial results."
<commentary>
Live experiments need immediate tracking setup to capture valuable data from the start.
</commentary>
</example>

<example>
Context: When reaching experiment milestones
user: "It's been a week since we launched the TikTok integration test"
assistant: "Perfect timing for analysis. Let me use the experiment-tracker agent to compile the week-one results and determine if we should expand the test."
<commentary>
Regular experiment checkpoints ensure data-driven decisions about feature rollouts.
</commentary>
</example>

<example>
Context: Before making product decisions
user: "Should we keep the AI avatar feature or remove it?"
assistant: "Let me analyze the experiment data to inform this decision. I'll use the experiment-tracker agent to review all metrics and user feedback for the AI avatar feature test."
<commentary>
Product decisions should be backed by experiment data, not gut feelings.
</commentary>
</example>
Automatically tracks A/B tests, feature experiments, and iterative improvements within development cycles. Proactively documents experiments and analyzes results.
Why this plugin
Ensures data-driven product decisions by automatically tracking experiments from start to analysis. Solves the problem of undocumented or poorly tracked feature tests that lead to gut-based decisions.
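For context, the kind of experimental code path that triggers this agent is a feature flag paired with exposure tracking. The sketch below is illustrative only, not part of this plugin's API: the experiment name, variant keys, rollout logic, and the console-based analytics sink are all hypothetical stand-ins.

```typescript
// Hypothetical feature-flag check with experiment tracking.
// Experiment name, variants, and the logging sink are illustrative.
import { randomUUID } from "crypto";

type Variant = "control" | "new_onboarding";

interface ExperimentEvent {
  experiment: string;
  variant: Variant;
  userId: string;
  event: string;
  timestamp: string;
}

// Deterministic 10% rollout based on a simple hash of the user id.
function assignVariant(userId: string, rolloutPercent = 10): Variant {
  const bucket =
    [...userId].reduce((acc, ch) => acc + ch.charCodeAt(0), 0) % 100;
  return bucket < rolloutPercent ? "new_onboarding" : "control";
}

// Record the exposure so results can be analyzed later.
function trackExposure(userId: string, variant: Variant): ExperimentEvent {
  const event: ExperimentEvent = {
    experiment: "onboarding_flow_v2",
    variant,
    userId,
    event: "exposure",
    timestamp: new Date().toISOString(),
  };
  console.log(JSON.stringify(event)); // stand-in for a real analytics sink
  return event;
}

const userId = randomUUID();
trackExposure(userId, assignVariant(userId));
```

When a flag like this lands in a changeset, the agent's job is to document the experiment (hypothesis, variants, rollout percentage, success metrics) and follow up on the recorded events at each checkpoint.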
Prerequisites
Requires the ClaudeForge Marketplace to be added as a plugin source. Part of the claudeforge ecosystem; intended to be triggered within a development context where experiments are actually run.
/plugin marketplace add claudeforge/marketplace
/plugin install experiment-tracker@claudeforge-marketplace