Analyzes Sentry session replays from external users to surface UX patterns, pain points, and user journeys for product areas like issues, traces, or dashboards.
```
npx claudepluginhub joshuarweaver/cascade-code-devops-misc-1 --plugin getsentry-skills
```

This skill uses the workspace's default tool permissions.
Analyze session replays from real external users of sentry.io to surface UX patterns, pain points, and representative journeys for a given product area. This uses Sentry's own dogfooding org.
$ARGUMENTS is the product area to research (e.g., "issues", "traces", "dashboards", "replays", "monitors", "releases", "alerts").
If $ARGUMENTS is empty, ask the user which product area to research.
This skill requires the Sentry MCP server to be connected. The following tools are used:
- `search_events` — Search for session replays
- `get_replay_details` — Get detailed replay information
- `search_issues` — Look up error issues
- `get_sentry_resource` — Fetch issue details from URLs

If these tools are not available, ask the user to connect the Sentry MCP server before proceeding.
**Step 1: Find URL patterns for the product area**

Read `references/product-areas.md` and find the URL patterns for the requested area.
If the product area is not listed, infer a URL pattern from the area name. Most Sentry product areas follow the pattern /<area-name>/ in the URL path. The reference file may not cover newer product areas — confirm your assumption with the user if unclear.
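In practice the inference is usually a simple mapping like the sketch below (these paths are illustrative guesses, not confirmed entries from the reference file):

```
issues     → /issues/
traces     → /traces/
dashboards → /dashboards/
alerts     → /alerts/
```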
**Step 2: Search for replays**

Search for replays from external (non-Sentry-employee) users. 25 replays is a good starting point; go deeper if the product area is complex, if early patterns are ambiguous, or if the user wants a more comprehensive picture.

Start with the last 24 hours, extending to 48h or 7d if needed to reach your target count. Run multiple `search_events` calls if needed, with `limit: 50` per call.
If you can't find enough replays (fewer than 10 even at 7 days), tell the user what you found and ask them to help iterate — they may suggest broader URL patterns, a different time range, or a related product area to include.
Query construction:
Use natural language queries like:
```
replays from the last 24 hours where url contains "/<area-path>" excluding user emails ending in @sentry.io and @getsentry.com
```
Key filters:
```
-user.email:*@sentry.io -user.email:*@getsentry.com
```

Do NOT pass a projectSlug filter — replays span the whole org.
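Putting the pieces together for a concrete area, a single search might combine the URL and email filters like this (a sketch in Sentry's raw search syntax, assuming the replay dataset exposes a `url` field; the natural-language form above works just as well):

```
url:*/issues/* -user.email:*@sentry.io -user.email:*@getsentry.com
```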
**Step 3: Pull replay details**

Call `get_replay_details` for each replay found in Step 2. Run these calls in parallel batches for speed.
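A batch is simply several tool calls issued in the same turn, for example (parameter names are assumptions; follow the tool's actual schema, and the replay IDs are placeholders):

```
get_replay_details(organizationSlug="sentry", replayId="<id-1>")
get_replay_details(organizationSlug="sentry", replayId="<id-2>")
get_replay_details(organizationSlug="sentry", replayId="<id-3>")
```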
For each replay, capture:

- Replay type: session (randomly sampled from normal browsing) vs buffer (triggered by an event — error, manual flush, or a specific user action like submitting feedback or going through checkout). Note this distinction in your analysis, since buffer replays are biased toward error/action moments, not typical browsing.
- Entry point: `referrer=slack`, `notification_uuid`, or `alert_rule_id` in query params (Slack notification), email link patterns, or bare URLs (bookmark/direct nav).
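A lightweight per-replay note might look like this (the field names are a suggested note-taking structure, not a schema the tools return):

```
replay:  https://sentry.io/replays/<id>/
type:    buffer (triggered by an error)
entry:   Slack notification (referrer=slack, notification_uuid present)
errors:  JAVASCRIPT-33RM
```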
**Step 4: Investigate errors**

After collecting replay details, identify errors that appear in multiple replays or seem likely to affect the user experience. For each significant error:

1. Triage by frequency: If the same issue ID (e.g., JAVASCRIPT-33RM) appears in 3+ replays, it's worth investigating.
2. Check the issue in Sentry: Use `search_issues` to find the issue, or `get_sentry_resource` with the issue URL from the replay details, to understand what the error is and how widespread it is.
3. Infer user-facing impact from behavioral signals: We cannot see the rendered page content through replay metadata — only by watching the replay in-browser. Instead, infer impact from what users did after the error.
4. Classify each error based on this evidence as likely user-facing or likely silent. Always note the confidence level and recommend watching specific replays to confirm. Link directly to the replay URL for each classified error.
Include this classification in the Friction & Pain Points section. Don't report likely-silent errors as pain points — list them in a separate "Background Errors (likely silent)" subsection for completeness.
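A classified entry might look like this (the issue ID, counts, and URL are placeholders for illustration):

```
JAVASCRIPT-33RM: TypeError in the issue stream (seen in 4 replays)
Classification: likely user-facing (3 of 4 users reloaded the page right after the error)
Confidence: medium; confirm by watching https://sentry.io/replays/<replay-id>/
```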
**Step 5: Analyze through UX research lenses**

Look at the replays through these UX research lenses:

- Intent: what was the user trying to do? Query params in the URL often make this explicit (e.g., `query=is%3Aunresolved+assigned%3Ame` tells you the user is triaging their own assigned issues).
- Sampling: was the replay session (random sample) or buffer (event-triggered)? Buffer replays show moments where something notable happened (error, feedback submission, checkout, etc.) — they're valuable for friction analysis but aren't representative of typical browsing. Call out this bias when drawing conclusions.

**Step 6: Write the report**

Use the template in references/output-template.md. Be specific — cite individual replays as evidence for each pattern. Link to replay URLs so the reader can watch the replay themselves.
Privacy: Never include full user email addresses in the report. Use anonymized identifiers like "user from [company domain]" or "User A, B, C."
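A pattern entry in the final report might cite evidence like this (a sketch only; references/output-template.md governs the actual format, and the domains and URLs are placeholders):

```
Pattern: Users arriving from Slack alerts immediately filter the issue stream to assigned:me.
Evidence:
  - User A (user from acme-corp domain): https://sentry.io/replays/<id-1>/
  - User B (user from example-co domain): https://sentry.io/replays/<id-2>/
```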