`npx claudepluginhub amplitude/mcp-marketplace --plugin amplitude`

This skill uses the workspace's default tool permissions.
Watch 5-10 session replays for a specific feature, page, or flow, then synthesize patterns into a ranked friction map. This skill turns hours of manual replay watching into a structured UX report grounded in real user behavior.
Primary tools:
- Amplitude:get_session_replays — Find sessions matching event filters, user properties, or time windows. Use this to target sessions for a specific feature or flow.
- Amplitude:get_session_replay_events — Decode a replay into an interaction timeline: navigations, clicks, inputs, scrolls. This is what you "watch."

Supporting tools:
- Amplitude:get_events — Discover valid event names. Never guess event names.
- Amplitude:get_event_properties — Discover properties for filtering (page path, feature area, etc.).
- Amplitude:query_chart — Pull quantitative context (funnel conversion rates, feature adoption) to anchor the qualitative replay findings.
- Amplitude:get_feedback_insights / Amplitude:get_feedback_mentions — Cross-reference replay friction with customer feedback themes.

Determine what to audit from the user's request:
Also determine:
- Amplitude:get_context — If multiple projects, ask which to audit.
- Amplitude:get_events — Find events related to the target area. Look for:
Before watching replays, establish context with 1-2 chart queries. Budget: 2 calls max.
Use Amplitude:query_chart to get the current conversion rate and identify the worst drop-off step. This tells you where to focus your replay attention.

This quantitative baseline makes your qualitative findings more actionable — "40% of users drop off at step 3, and here's what we see them doing" is stronger than "users seem confused at step 3."
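The drop-off scan can be sketched in Python. This assumes the query_chart funnel result has already been reduced to a list of per-step user counts; the real Amplitude response shape differs and would need to be parsed first.

```python
# Sketch: find the worst drop-off step in a funnel, given per-step
# user counts (assumed input shape, not Amplitude's raw response).

def worst_dropoff(step_counts):
    """Return (step_index, drop_rate) for the largest step-to-step loss."""
    worst_i, worst_rate = 0, 0.0
    for i in range(1, len(step_counts)):
        prev, cur = step_counts[i - 1], step_counts[i]
        rate = 0.0 if prev == 0 else (prev - cur) / prev
        if rate > worst_rate:
            worst_i, worst_rate = i, rate
    return worst_i, worst_rate

# Example: users remaining at each funnel step
steps = [1000, 820, 790, 460, 430]
idx, rate = worst_dropoff(steps)
print(idx, round(rate, 2))  # step 3 loses ~42% of users
```

The returned index is where to concentrate replay filtering in the next step.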
Use Amplitude:get_session_replays to find 8-12 sessions (request limit: 12 to allow for some sessions with missing replay data).
Filter strategy by audit type:
If the user specified a segment (plan type, platform, etc.), add user property filters.
For each session, call Amplitude:get_session_replay_events with event_limit: 300.
Budget: 5-8 sessions. Skip sessions that return empty or minimal data.
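The budget-and-skip rule above can be sketched as a small filter. The minimum-event threshold and the (session_id, events) pair shape are illustrative assumptions, not part of the Amplitude API.

```python
# Sketch: keep only sessions with enough decoded events to analyze,
# stopping once the watching budget is reached. MIN_EVENTS is an
# assumed threshold for "minimal data".

MIN_EVENTS = 20

def usable_sessions(replays, budget=8):
    """replays: iterable of (session_id, events) pairs."""
    kept = []
    for session_id, events in replays:
        if len(events) >= MIN_EVENTS:
            kept.append(session_id)
        if len(kept) == budget:
            break
    return kept
```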
While analyzing each session, track these friction signals:
| Signal | What to look for in the timeline |
|---|---|
| Rage clicks | 3+ clicks on the same coordinates within a short time span |
| Hesitation | Long pauses (>10 seconds) between navigation and first interaction on a page |
| Back-and-forth | Navigating to a page, then back, then forward again |
| Abandoned inputs | Starting to type in a field, then navigating away without submitting |
| Excessive scrolling | Large scroll deltas suggesting the user is searching for something |
| Dead-end navigation | Visiting a page and immediately leaving (bounce within seconds) |
| Repeat attempts | Performing the same action multiple times (re-submitting a form, re-clicking a button) |
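The first signal in the table, rage clicks, can be detected mechanically from the decoded timeline. This sketch assumes each event is a dict with `type`, `x`, `y`, and `ts` (seconds) keys; the actual decoded event shape may differ.

```python
# Sketch: flag rage clicks — 3+ clicks at (nearly) the same coordinates
# within a short window. Event shape is an assumption about the decoded
# replay timeline, not a documented Amplitude format.

def rage_clicks(events, radius=10, window=2.0, min_clicks=3):
    """Return (x, y, count) for each burst of repeated clicks."""
    clicks = [e for e in events if e["type"] == "click"]
    bursts, i = [], 0
    while i < len(clicks):
        j = i
        while (j + 1 < len(clicks)
               and clicks[j + 1]["ts"] - clicks[i]["ts"] <= window
               and abs(clicks[j + 1]["x"] - clicks[i]["x"]) <= radius
               and abs(clicks[j + 1]["y"] - clicks[i]["y"]) <= radius):
            j += 1
        if j - i + 1 >= min_clicks:
            bursts.append((clicks[i]["x"], clicks[i]["y"], j - i + 1))
        i = j + 1
    return bursts
```

The other signals (hesitation, back-and-forth, abandoned inputs) follow the same pattern: scan the timeline for timing or sequence anomalies.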
For each session, write a brief summary:
This is the core analytical step. Aggregate findings across all watched sessions.
| Severity | Criteria |
|---|---|
| Critical | Blocks task completion. User gives up or encounters an error. Seen in 50%+ of sessions. |
| High | Causes significant confusion or delay. User eventually succeeds but with visible struggle. Seen in 30%+ of sessions. |
| Medium | Causes minor hesitation or suboptimal paths. User recovers quickly. Seen in 20%+ of sessions. |
| Low | Cosmetic or minor annoyance. Seen in <20% of sessions or only in edge cases. |
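The rubric above can be expressed as a small classifier. The `blocks_completion` and `visible_struggle` flags are assumed labels you assign while summarizing each session; they are not fields in any API response.

```python
# Sketch: map a friction pattern onto the severity rubric.
# blocks_completion / visible_struggle are assumed manual labels.

def severity(seen, total, blocks_completion=False, visible_struggle=False):
    freq = seen / total
    if blocks_completion and freq >= 0.5:
        return "Critical"
    if visible_struggle and freq >= 0.3:
        return "High"
    if freq >= 0.2:
        return "Medium"
    return "Low"
```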
Identify root cause hypotheses. For each friction pattern, hypothesize why it happens:
Cross-reference with feedback (if available). Call Amplitude:get_feedback_insights with keywords from your friction findings. If users are complaining about the same thing you're seeing in replays, that's high-confidence signal.
Structure the output as a friction map that a PM or designer can act on.
Required sections:
Audit Summary (3-4 sentences): What was audited, how many sessions were watched, the single biggest finding, and overall UX health assessment. Written as a narrative you could paste into a design review doc.
Scope & Methodology:
Friction Map — Ranked by severity, then frequency:
For each friction point:
### [Friction Point Title — action-oriented, ≤10 words]
**Severity:** [Critical/High/Medium/Low] | **Frequency:** Seen in X of Y sessions
**What happens:** Describe the user behavior observed — what they do, where they
hesitate, what goes wrong. Be specific about the page and interaction.
**Likely cause:** Your hypothesis for why this friction exists.
**Evidence:**
- Session replay links showing this pattern
- Quantitative data (if available): conversion rate at this step, error rate, etc.
- Customer feedback quotes (if found)
**Suggested fix:** One concrete, actionable recommendation.
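The template above can be rendered from structured findings. The dict field names here are illustrative, not a fixed schema.

```python
# Sketch: render one friction point into the markdown template above.
# Field names (title, severity, seen, ...) are illustrative assumptions.

def render_friction(fp):
    lines = [
        f"### {fp['title']}",
        f"**Severity:** {fp['severity']} | **Frequency:** "
        f"Seen in {fp['seen']} of {fp['total']} sessions",
        f"**What happens:** {fp['behavior']}",
        f"**Likely cause:** {fp['cause']}",
        "**Evidence:**",
        *[f"- {e}" for e in fp["evidence"]],
        f"**Suggested fix:** {fp['fix']}",
    ]
    return "\n".join(lines)
```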
Positive Patterns (1-2 items): What's working well. Which parts of the experience were smooth across sessions. This provides balance and highlights what to preserve.
Recommended Next Steps (3-5 numbered items): Start each with a verb. Prioritize by impact. Examples:
User says: "Audit the onboarding experience for new users"
Actions:
User says: "What's the UX like on our pricing page?"
Actions:
User says: "Are enterprise users having trouble with the report builder?"
Actions: