From Amplitude
Retrieves, synthesizes, and prioritizes recent AI agent results from Amplitude across all agent types, validates freshness using deployments, and ranks by impact into a unified narrative.
```shell
npx claudepluginhub amplitude/mcp-marketplace --plugin amplitude
```

This skill uses the workspace's default tool permissions.
Surface everything Amplitude's AI agents have found recently. Query **every available agent type** in `get_agent_results`, validate for staleness, and synthesize into a unified narrative ranked by impact with concrete follow-up actions.
Primary tool:
- `Amplitude:get_agent_results` — Retrieve pre-computed analyses from Amplitude's AI agents. Supports multiple agent types (check the tool's `agent_type` enum for the current list). Each agent type is queried separately. All support filtering by `created_after`, `created_before`, `query`, `agent_params`, and `limit`.

Supporting tools:
- `Amplitude:get_context` / `Amplitude:get_project_context` — Bootstrap user, org, and project info.
- `Amplitude:get_deployments` — Check whether fixes have shipped for flagged issues (staleness validation).

Use `Amplitude:get_context` to get the user's org, projects, recent activity, and key dashboards. If multiple projects, ask which to review — or review all if the user wants a broad scan. Use `Amplitude:get_project_context` for the target project's settings and AI context.

Determine the review window from the user's request:
Convert the window to a `created_after` ISO 8601 timestamp.

Check the `get_agent_results` tool descriptor to discover every available `agent_type` in the enum. Make one call per agent type, in parallel. For each call, set:
- `agent_type`: the agent type from the enum
- `created_after`: the review window timestamp from Step 1
- `limit`: 10

If the user asked about a specific area (e.g., "onboarding insights"), add a `query` matching that area to every call. If an agent type supports additional filtering via `agent_params` (e.g., impact ratings, categories, dashboard IDs), use them to focus results when the user's request suggests a narrower scope — otherwise omit `agent_params` to get the broadest view.
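The timestamp computation and parallel fan-out above can be sketched as follows. This is a minimal illustration, not a definitive implementation: `call_tool` is a hypothetical wrapper around the MCP client, and only the tool name and parameter names (`agent_type`, `created_after`, `limit`, `query`) come from this document.

```python
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timedelta, timezone

def review_window_start(days: int = 7) -> str:
    """Step 1: a created_after ISO 8601 timestamp for an N-day window."""
    return (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()

def query_all_agents(call_tool, agent_types, query=None, days=7):
    """Step 2: one get_agent_results call per agent type, in parallel."""
    created_after = review_window_start(days)

    def one(agent_type):
        params = {
            "agent_type": agent_type,
            "created_after": created_after,
            "limit": 10,
        }
        if query:  # only when the user named a focus area
            params["query"] = query
        return agent_type, call_tool("Amplitude:get_agent_results", params)

    # Each agent type is queried separately, so fan the calls out in parallel.
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(one, agent_types))
```

The agent type names you pass in would come from the tool descriptor's enum, discovered at runtime.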
For each result returned, note:
If exactly 1 result is returned for an agent type, artifacts auto-expand. If multiple results, note the previews and fetch full artifacts for the 2-3 most relevant ones (by recency or by matching the user's focus area) using session_id.
If any agent type returns fewer than 3 results and supports agent_params filtering, consider a second call with relaxed filters to broaden coverage.
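The selection logic for which artifacts to expand could be sketched like this. The field names (`session_id`, `created_at`, `preview`) are assumptions about the result shape, not confirmed by this document; only the session-ID-based fetch and the "focus area or recency" criteria come from the text above.

```python
def pick_artifacts_to_fetch(results, focus=None, max_fetch=3):
    """Return session_ids for the 2-3 most relevant results."""
    if len(results) <= 1:
        return []  # a single result auto-expands; nothing extra to fetch

    def relevance(r):
        # Rank focus-area matches first, then newer analyses.
        matches_focus = bool(focus) and focus.lower() in r.get("preview", "").lower()
        return (matches_focus, r.get("created_at", ""))

    ranked = sorted(results, key=relevance, reverse=True)
    return [r["session_id"] for r in ranked[:max_fetch]]
```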
Agent insights go stale within days. Before synthesizing, filter out or flag anything unreliable.
Check creation dates. For each finding, note how old it is relative to today.
Cross-reference with deployments (1 call). Call get_deployments once. For each AI-detected issue, check if a deployment shipped a fix or change to the affected area after the analysis was run. If so, note the finding as "potentially resolved by [deployment]" rather than presenting it as an active issue.
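The deployment cross-check might look like the sketch below, assuming findings and deployments are dicts with ISO 8601 timestamps; the substring-based area matching is an illustrative heuristic, not Amplitude's actual matching logic.

```python
def flag_potentially_resolved(findings, deployments):
    """Mark findings whose product area shipped a change after the analysis ran."""
    for f in findings:
        for d in deployments:
            same_area = f["area"].lower() in d.get("description", "").lower()
            # ISO 8601 UTC timestamps compare correctly as strings.
            shipped_later = d["deployed_at"] > f["analyzed_at"]
            if same_area and shipped_later:
                f["status"] = f"potentially resolved by {d['name']}"
                break
        else:
            f["status"] = "active"
    return findings
```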
Deduplicate across agent types. The same problem may surface from multiple agent types. Merge these into a single finding with multi-agent evidence — don't present the same issue multiple times.
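A minimal sketch of the merge, assuming each finding has already been assigned an `issue_key` identifying the underlying problem (that key, and the field names, are assumptions; real deduplication would need fuzzier matching):

```python
from collections import defaultdict

def merge_across_agents(findings):
    """Collapse results about the same issue into one multi-agent finding."""
    merged = defaultdict(lambda: {"agents": [], "evidence": []})
    for f in findings:
        slot = merged[f["issue_key"]]
        if f["agent_type"] not in slot["agents"]:
            slot["agents"].append(f["agent_type"])
        slot["evidence"].append(f["summary"])
    return dict(merged)
```

A finding backed by several agent types is stronger evidence, which also feeds the impact ranking in the next step.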
Rank by impact and evidence strength.
Group by theme, not by agent type. Organize findings by product theme or problem area ("Checkout flow," "Onboarding," "Search feature"), not by which agent produced them. Within each theme, weave together evidence from all contributing agent types.
Identify gaps. Note agent types that returned no recent results, or product areas with no coverage.
Structure the output as a narrative digest that a PM could forward to their team.
Required sections:
Summary (3-4 sentences): Which agent types were queried, the review window, how many results total, the single most important finding, and overall assessment.
Key Findings (3-7 items, ranked by impact):
For each finding:
### [Finding Title — action-oriented, ≤10 words]
**Impact:** [Critical/High/Medium/Low] | **Agents:** [list agent types that contributed] | **Freshness:** [X days old]
**What the AI found:** Describe the insight — what anomaly, friction, or issue was
detected. Be specific about the product area and the evidence from each agent.
**Staleness check:** Note if deployments shipped after the analysis, or if the finding
needs fresh validation. Omit this line if the finding is < 3 days old.
**Recommended action:** One concrete next step.
Coverage Gaps (2-4 items): Agent types with no results, or product areas with no AI coverage. For each, suggest what to do — which agent to run and on what.
Follow-on prompt: End with 2-3 specific options for what to dig into next, framed around the findings.
Writing standards:

- When the user names a focus area, add the `query` parameter to every agent type call. Present only relevant findings.

Examples:

User says: "What has the AI found recently?"
Actions: call `get_agent_results` for all available agent types, querying each in parallel with `created_after` set to 7 days ago.

User says: "Any AI insights about onboarding?"
Actions: the same parallel fan-out, with `query: "onboarding"` and `created_after` set to 7 days ago.

User says: "Show me all AI agent insights"
Actions: