Amplitude analytics plugins for AI coding agents
Install: `npx claudepluginhub amplitude/mcp-marketplace`

Use Amplitude as an expert analyst: instrument Amplitude, discover product opportunities, analyze charts, create dashboards, manage experiments, and understand users and accounts.
Open-source plugins and skills for Amplitude MCP users. Turn your AI coding assistant into a product manager and analytics powerhouse.
Works with Claude, Claude Code, Cursor, and Codex.
| Plugin | Description |
|---|---|
| amplitude | Reusable analysis and instrumentation skills covering charts, dashboards, experiments, session replays, reliability, AI agent analytics, and analytics tracking workflows |
The amplitude plugin turns your AI assistant into an expert product analyst and instrumentation partner. Skills are organized into seven areas:
| Skill | What it does |
|---|---|
| create-chart | Creates Amplitude charts from natural language descriptions |
| create-dashboard | Builds dashboards from requirements, organizing charts into logical sections |
| analyze-chart | Deep-dives a chart to explain trends, anomalies, and likely drivers |
| analyze-dashboard | Reviews a dashboard end-to-end, surfacing key takeaways and areas of concern |
| Skill | What it does |
|---|---|
| analyze-experiment | Designs A/B tests, monitors running experiments, and interprets results |
| monitor-experiments | Triages all active and recently completed experiments by importance |
| analyze-feedback | Synthesizes customer feedback into themes — feature requests, bugs, pain points, praise |
| analyze-account-health | Summarizes B2B account health with usage patterns, risk signals, and expansion opportunities |
| discover-opportunities | Finds product opportunities by cross-referencing analytics, experiments, replays, and feedback |
| compare-user-journeys | Compares two user groups side-by-side to surface behavioral differences |
| Skill | What it does |
|---|---|
| debug-replay | Turns bug reports into numbered reproduction steps by extracting the interaction timeline from Session Replay |
| replay-ux-audit | Watches multiple session replays for a flow and synthesizes a ranked friction map |
| diagnose-errors | Triages product issues across network failures, JS errors, and error clicks |
| monitor-reliability | Generates a proactive reliability report from auto-captured error data so issues surface before users complain |
| Skill | What it does |
|---|---|
| analyze-ai-topics | Analyzes what users ask AI agents about and how well each topic is served |
| investigate-ai-session | Deep-dives specific AI agent sessions or failure patterns for root-cause analysis |
| monitor-ai-quality | Delivers a proactive health report on AI agents covering quality, cost, performance, and errors |