From Fullstory
Fullstory analytics workflow. Use when answering a question that requires measuring user behavior — counts, rates, trends, breakdowns, or cohort comparisons. Builds segments and metrics, computes results, then investigates sessions to explain what the numbers mean.
Install with:

`npx claudepluginhub fullstorydev/fullstory-skills --plugin fullstory`

This skill uses the workspace's default tool permissions.
Internalize these three concepts before choosing tools:
Before calling any tool, determine what the user is asking for:
- A count or rate → single_number metric
- A ranked breakdown → top_n metric
- A change over time → trend metric
- A cohort comparison → comparisons skill
- An explanation of why a metric looks the way it does → get_sessions with metric_id
- An investigation of who a group of users is → build_segment, then get_sessions with segment_id

If the intent is ambiguous, ask the user before proceeding. Getting the intent wrong wastes a build+compute cycle.
Users often don't know what metrics or segments already exist in their Fullstory account. Always search first, even when the question sounds ad-hoc. Use get_metric(regex="...") or get_segment(regex="..."), starting broad and narrowing if needed (e.g., "how many rage clicks on checkout?" → start with checkout, then try checkout.*rage if the first search returns too many results).
Results include a short description of the segment's filters and events, so use that — not just the name — to judge relevance. If no results match, tell the user nothing was found and confirm before building. If results come back but their filters/events don't match the question, tell the user what you found and that none seem to match, then confirm they'd like you to build a new one.
If two or more plausible candidates come back, immediately call get_view_count on their IDs (up to 10) to rank by popularity. If the search returns more than 10 candidates, pass the 10 most name-similar IDs. Only build new objects once the user has confirmed that nothing existing fits.
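A minimal sketch of the search-then-rank step, writing the MCP tools as Python-style calls. The result shapes, the `is_relevant` helper, and the `ids` parameter on get_view_count are assumptions, not a documented API:

```python
# Search broadly first; narrow the regex only if results are too noisy.
results = get_metric(regex="checkout")
if len(results) > 10:
    results = get_metric(regex="checkout.*rage")

# Judge relevance by each result's filter/event description, not just its name.
candidates = [r for r in results if is_relevant(r)]  # is_relevant: hypothetical helper

# With two or more plausible candidates, rank by popularity (up to 10 IDs per call).
if len(candidates) >= 2:
    view_counts = get_view_count(ids=[c["id"] for c in candidates[:10]])  # "ids" assumed
```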
Metrics: Before building, make sure the unit of measurement is correct — getting this wrong is the most common source of misleading results. If the question is about "customers", "accounts", or "organizations", clarify whether the user wants to count individual users or group users by a customer/account/organization property. If it's the latter, look for user properties that match and build the metric to count by that property. Similarly, watch for ambiguity between pages and URLs — "which pages" usually means page titles or paths, not full URLs with query parameters.
Call build_metric with a descriptive query and the correct output_type derived from intent classification:
The available output types are single_number, top_n, and trend. For top_n, make sure the grouping dimension is expressed in the query (e.g., "top pages by rage click count"); the metric builder will not invent a dimension on its own.
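Two illustrative builds. Only query and output_type are documented parameters; the query strings and the account_id property are hypothetical:

```python
# A ranked breakdown: the grouping dimension ("pages") is stated in the query.
top_pages = build_metric(
    query="top pages by rage click count",
    output_type="top_n",
)

# Counting "customers" rather than users: count by a customer-level user property.
accounts_with_rage_clicks = build_metric(
    query="count of distinct accounts (by the account_id user property) with rage clicks",
    output_type="single_number",
)
```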
Segments: Call build_segment. Always reference by segment_id in subsequent steps. If the same cohort is needed for multiple questions in the conversation, reuse the existing segment_id — do not rebuild.
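A sketch of the segment flow; build_segment's parameter name and return shape are assumptions:

```python
# Build the cohort once, then reference it by segment_id for the rest of the conversation.
segment = build_segment(query="users who rage clicked on checkout")  # "query" assumed
segment_id = segment["segment_id"]  # return shape assumed

# Investigate who these users are.
sessions = get_sessions(segment_id=segment_id)
```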
If the user wants to modify a metric or segment already established in this conversation — adding or removing a filter, changing aggregation, adjusting the time range, or changing output shape — use update_metric or update_segment. Pass the existing metric_definition or segment_definition and a natural language refinement.
- update_metric: supports filter changes, aggregation changes, and output type overrides (via output_type). It does not support ratio metrics; rebuild those with build_metric.
- update_segment: supports filter additions and removals, and time range changes.
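A sketch of both refinement paths; the refinement parameter name and the definition field lookups are assumptions:

```python
# Reshape an existing metric instead of rebuilding it.
trended = update_metric(
    metric_definition=top_pages["metric_definition"],       # from an earlier build or get
    refinement="only include sessions on mobile devices",   # parameter name assumed
    output_type="trend",                                    # documented override
)

# Adjust an existing segment's time range the same way.
narrowed = update_segment(
    segment_definition=segment["segment_definition"],       # from an earlier build or get
    refinement="restrict to the last 7 days",               # parameter name assumed
)
```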
Call compute_metric with:

- metric_definition from build or get
- segment_id if the question is scoped to a cohort
- time_range (default is last_30_days; ask the user if they want a different window)
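For example (the result fields below are assumptions):

```python
result = compute_metric(
    metric_definition=trended["metric_definition"],
    segment_id=segment_id,      # optional: only when the question is cohort-scoped
    time_range="last_30_days",  # the default; confirm any other window with the user
)
answer = f"{result['value']} (verify at {result['metric_url']})"  # field names assumed
```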
Present results in plain language with context, and always surface metric_url so the user can verify in the Fullstory UI.
Load these when the situation calls for it:
- references/validation.md — when results are zero, anomalous, or the user expresses skepticism
- references/sessions.md — when investigating sessions to understand why a metric looks the way it does

| Tool | Claude Code name |
|---|---|
| `get_metric` | `mcp__fullstory-mcp__get_metric` |
| `get_segment` | `mcp__fullstory-mcp__get_segment` |
| `get_view_count` | `mcp__fullstory-mcp__get_view_count` |
| `build_metric` | `mcp__fullstory-mcp__build_metric` |
| `build_segment` | `mcp__fullstory-mcp__build_segment` |
| `compute_metric` | `mcp__fullstory-mcp__compute_metric` |
| `update_metric` | `mcp__fullstory-mcp__update_metric` |
| `update_segment` | `mcp__fullstory-mcp__update_segment` |
| `get_sessions` | `mcp__fullstory-mcp__get_sessions` |
| `get_session_events` | `mcp__fullstory-mcp__get_session_events` |
Defaults and rules:

- The default time_range is last_30_days. Ask before using a different window unless the user specified one.
- Scope cohort questions with segment_id.
- Reuse segment_id and metric_id within a conversation. Do not rebuild objects the user has already established.
- To change a metric's output shape, call update_metric with the existing metric_definition and the desired output_type. Only fall back to build_metric for fundamentally different queries or ratio metrics.
- Include metric_url in your response so the user can verify in the Fullstory UI.
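Putting it together, a hedged end-to-end pass under the same assumptions as the sketches above:

```python
# 1. Search before building.
existing = get_metric(regex="checkout")

# 2. Rank plausible candidates by popularity (up to 10 IDs).
views = get_view_count(ids=[m["id"] for m in existing[:10]])  # "ids" assumed

# 3. Build only after the user confirms nothing existing matches.
metric = build_metric(
    query="rage click count on checkout pages",
    output_type="single_number",
)

# 4. Compute with the default window and surface metric_url for verification.
result = compute_metric(
    metric_definition=metric["metric_definition"],
    time_range="last_30_days",
)
```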