Ask natural-language business questions about connected product datasets (CSV, DuckDB, Postgres, BigQuery, Snowflake). AI-orchestrated pipelines deliver validated insights, SWD-compliant charts, stakeholder narratives, and Marp slide decks, with support for forecasts, experiments, and root-cause analysis.
npx claudepluginhub ai-analyst-lab/ai-analyst-plugin --plugin ai-analyst

Every agent `.md` file MUST begin with a CONTRACT block -- a YAML declaration.

| Variable | Value | Used in |
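The CONTRACT schema itself is not spelled out in this excerpt; purely as a hypothetical sketch, such a YAML declaration might look like the following (every field name here is an assumption, not the plugin's actual schema):

```yaml
# Hypothetical CONTRACT block -- all field names are illustrative,
# not the plugin's actual schema.
contract:
  agent: chart-maker
  inputs:
    - chart_spec          # chart specification produced upstream
    - dataset_ref         # pointer to the active dataset
  outputs:
    - chart_png           # the rendered, SWD-styled image
  halt_on:
    - missing_chart_spec
```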
Generate a single styled chart from data and a chart specification, applying SWD visualization standards for theme, color, typography, and annotation. Context: Invoked as part of the analytical pipeline when chart-maker is applicable. user: "[Request analysis involving chart-maker]" assistant: "I'll use the chart-maker agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Perform cohort analysis -- retention curves, cohort comparison, vintage analysis, and cohort LTV -- to reveal how user behavior evolves over time. Context: Invoked as part of the analytical pipeline when cohort-analysis is applicable. user: "[Request analysis involving cohort-analysis]" assistant: "I'll use the cohort-analysis agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
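As a rough illustration, the retention-curve portion of such an analysis might be computed like this in pandas (the table layout and column names are assumptions for the sketch, not the plugin's actual schema):

```python
import pandas as pd

# Toy activity log: one row per user per active month.
# Column names are illustrative assumptions.
events = pd.DataFrame({
    "user_id":      [1, 1, 1, 2, 2, 3],
    "signup_month": ["2024-01"] * 5 + ["2024-02"],
    "event_month":  ["2024-01", "2024-02", "2024-03",
                     "2024-01", "2024-03", "2024-02"],
})

signup = pd.PeriodIndex(events["signup_month"], freq="M")
active = pd.PeriodIndex(events["event_month"], freq="M")
# Period ordinals differ by whole months, giving cohort "age".
events["age"] = active.asi8 - signup.asi8

cohort_size = events.groupby("signup_month")["user_id"].nunique()
retained = events.groupby(["signup_month", "age"])["user_id"].nunique()
# Share of each cohort still active at each age.
retention = retained.div(cohort_size, level="signup_month")
```

Each row of `retention` is one point on a cohort's retention curve; unstacking by `age` would give the familiar cohort triangle.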
Draft stakeholder communications from completed analysis results, adapting format and tone to user preferences and audience. Context: Invoked as part of the analytical pipeline when comms-drafter is applicable. user: "[Request analysis involving comms-drafter]" assistant: "I'll use the comms-drafter agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Discover what data exists in a source, profile its quality and completeness, identify tracking gaps, and recommend supported analyses. Context: Invoked as part of the analytical pipeline when data-explorer is applicable. user: "[Request analysis involving data-explorer]" assistant: "I'll use the data-explorer agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Create a complete slide deck from analysis outputs by combining a storytelling narrative with charts, applying a presentation theme, and generating speaker notes. Context: Invoked as part of the analytical pipeline when deck-creator is applicable. user: "[Request analysis involving deck-creator]" assistant: "I'll use the deck-creator agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Perform drivers analysis, segmentation, and funnel analysis on a dataset to identify what is happening, why, and which factors matter most. Context: Invoked as part of the analytical pipeline when descriptive-analytics is applicable. user: "[Request analysis involving descriptive-analytics]" assistant: "I'll use the descriptive-analytics agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Design experiments or quasi-experimental analyses to test causal hypotheses, including power estimation, guardrail selection, and pre-registered decision rules. Context: Invoked as part of the analytical pipeline when experiment-designer is applicable. user: "[Request analysis involving experiment-designer]" assistant: "I'll use the experiment-designer agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
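For the power-estimation step, a common back-of-envelope approach is the two-proportion z-test sample-size formula; a sketch under that assumption (not necessarily the exact method this agent uses):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate per-arm n for a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, ~1.96
    z_power = NormalDist().inv_cdf(power)           # power quantile, ~0.84
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = p_treatment - p_control
    return ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)
```

For example, detecting a lift from a 10% to a 12% conversion rate at the defaults requires a few thousand users per arm, which is why power estimation belongs in the design phase rather than after launch.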
Turn analytical questions into testable hypotheses with expected outcomes, confirmation/rejection criteria, and structured test plans. Context: Invoked as part of the analytical pipeline when hypothesis is applicable. user: "[Request analysis involving hypothesis]" assistant: "I'll use the hypothesis agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Review the storyboard as a narrative sequence before charting, ensuring coherent story flow, progressive depth, and no story gaps. Context: Invoked as part of the analytical pipeline when narrative-coherence-reviewer is applicable. user: "[Request analysis involving narrative-coherence-reviewer]" assistant: "I'll use the narrative-coherence-reviewer agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Quantify the business value of an opportunity or fix with sensitivity analysis that identifies which assumptions matter most. Context: Invoked as part of the analytical pipeline when opportunity-sizer is applicable. user: "[Request analysis involving opportunity-sizer]" assistant: "I'll use the opportunity-sizer agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
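One simple way to identify which assumptions matter most is one-at-a-time sensitivity: swing each input while holding the others fixed and compare the output ranges. A sketch with a made-up multiplicative model (the inputs `reach`, `lift`, and `value_per_conversion` are illustrative, not the agent's actual model):

```python
def opportunity_value(a):
    # Illustrative multiplicative sizing model.
    return a["reach"] * a["lift"] * a["value_per_conversion"]

def one_at_a_time(assumptions, swing=0.2):
    """Swing each assumption +/-20% while holding the others fixed."""
    base = opportunity_value(assumptions)
    impact = {}
    for key in assumptions:
        lo = opportunity_value({**assumptions, key: assumptions[key] * (1 - swing)})
        hi = opportunity_value({**assumptions, key: assumptions[key] * (1 + swing)})
        impact[key] = hi - lo   # output range attributable to this input
    return base, impact
```

Sorting `impact` descending yields the classic tornado-chart ordering of which assumptions deserve the most scrutiny.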
Perform time-series analysis to identify trends, detect anomalies, decompose seasonality, and produce annotated timeline charts. Context: Invoked as part of the analytical pipeline when overtime-trend is applicable. user: "[Request analysis involving overtime-trend]" assistant: "I'll use the overtime-trend agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
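Anomaly detection of this kind is often a trailing-window z-score test; a minimal sketch (the window size and threshold are illustrative defaults, not the agent's actual parameters):

```python
import pandas as pd

def flag_anomalies(series, window=7, z=3.0):
    """Flag points further than z rolling std-devs from the trailing mean."""
    prior = series.shift(1)               # exclude the current point so a
    mean = prior.rolling(window).mean()   # spike cannot inflate its own baseline
    std = prior.rolling(window).std()
    return (series - mean).abs() > z * std
```

Using only the preceding window is the key design choice: comparing a point against a window that includes itself lets a large spike raise its own mean and variance enough to escape detection.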
Generate well-structured, prioritized analytical questions from a business problem description, producing a structured question brief with hypotheses and data requirements for the top candidates. Context: Invoked at the start of an analysis pipeline when a business problem has been articulated but analytical direction is unclear. user: "We're seeing lower retention in a new user cohort. What should we investigate?" assistant: "I'll use the question-framing agent to break down your business problem into structured, prioritized analytical questions and identify which ones your data can support." commentary: This agent is appropriate when you have a business challenge but need to narrow the scope and identify the highest-impact questions to investigate. It surfaces data gaps before analysis begins, saving time on impossible questions.
Iteratively drill down through dimensions to find the specific, actionable root cause of a metric change. Context: Invoked as part of the analytical pipeline when root-cause-investigator is applicable. user: "[Request analysis involving root-cause-investigator]" assistant: "I'll use the root-cause-investigator agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Verify data loading integrity by comparing pandas direct-read vs DuckDB SQL on foundational metrics. HALT on mismatch. Context: Invoked as part of the analytical pipeline when source-tieout is applicable. user: "[Request analysis involving source-tieout]" assistant: "I'll use the source-tieout agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
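A minimal sketch of that dual-read tie-out, assuming a CSV source and a caller-supplied numeric column; the HALT is modeled as a raised exception:

```python
import pandas as pd

def tieout_csv(csv_path, value_col):
    """Compare pandas direct-read totals against DuckDB SQL on the same file."""
    import duckdb  # imported inside so the sketch loads without duckdb present

    df = pd.read_csv(csv_path)
    sql_rows, sql_sum = duckdb.sql(
        f"SELECT COUNT(*), SUM({value_col}) FROM read_csv_auto('{csv_path}')"
    ).fetchone()
    assert_match(len(df), sql_rows, "row count")
    assert_match(float(df[value_col].sum()), float(sql_sum), "column total")

def assert_match(pandas_val, sql_val, label, tol=1e-9):
    if abs(pandas_val - sql_val) > tol:
        raise RuntimeError(f"HALT: tie-out mismatch on {label}: "
                           f"pandas={pandas_val} sql={sql_val}")
```

The point of the redundancy is that pandas and DuckDB parse the file independently, so a silent type-coercion or delimiter problem in one loader surfaces as a mismatch rather than a wrong downstream number.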
Design a storyboard before any charting -- defining story beats that follow a Context-Tension-Resolution arc, then mapping each beat to a visual format. Context: Invoked as part of the analytical pipeline when story-architect is applicable. user: "[Request analysis involving story-architect]" assistant: "I'll use the story-architect agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Turn raw analysis outputs into a stakeholder-ready narrative that connects findings back to the original business question and drives a specific decision. Context: Invoked as part of the analytical pipeline when storytelling is applicable. user: "[Request analysis involving storytelling]" assistant: "I'll use the storytelling agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Independently verify analytical findings by re-deriving key numbers, checking arithmetic, cross-referencing data sources, and flagging common statistical errors. Context: Invoked as part of the analytical pipeline when validation is applicable. user: "[Request analysis involving validation]" assistant: "I'll use the validation agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
Review generated chart images against the SWD checklist and advanced technique standards, producing specific fix reports with actionable code-level fixes. Context: Invoked as part of the analytical pipeline when visual-design-critic is applicable. user: "[Request analysis involving visual-design-critic]" assistant: "I'll use the visual-design-critic agent to [perform specific analysis]." commentary: This agent is appropriate when [context for usage].
EMBEDDED — Establishes a clear analysis plan before writing queries. This functionality is built into the ask-question and run-analysis skills. Do NOT invoke analysis-design-spec separately — use ask-question and run-analysis instead, which include these instructions inline.
EMBEDDED — Retrieves proven SQL patterns and reusable CTEs. This functionality is built into the ask-question and run-analysis skills. Do NOT invoke archaeology separately — use ask-question and run-analysis instead, which include these instructions inline.
Run the multi-persona planning methodology to produce a master plan for a new project or feature. Triggered when users say "architect", "plan this", "design the system", or invoke `/architect`.
EMBEDDED — Saves completed analysis to the knowledge archive. This functionality is built into the run-analysis skill. Do NOT invoke archive-analysis separately — use run-analysis instead, which includes these instructions inline.
USE THIS SKILL for ANY data question, analytical request, or metric inquiry. This is the MANDATORY entry point whenever a user asks about data, metrics, trends, churn, revenue, conversion, retention, segments, cohorts, funnels, KPIs, or any quantitative question — even casual ones like 'how are we doing' or 'what happened last month.' Also use when user says 'analyze', 'compare', 'why did X change', 'show me', 'what's driving', 'break down', or asks for any chart or visualization. If the user has a connected dataset and asks ANYTHING about their data, use this skill. Do NOT attempt to answer data questions without this skill — it contains critical charting standards, validation steps, and knowledge loading that produce professional-quality outputs.
Interactive browser for your organization's knowledge system. Explore terms, products, metrics, objectives, and team structure. Also crawl Notion workspaces to extract and populate business context. Triggered when users say "/business", "browse business context", "/notion-ingest", or "crawl notion workspace".
EMBEDDED — Ensures recommendations end with a clear follow-up plan. This functionality is built into the run-analysis skill. Do NOT invoke close-the-loop separately — use run-analysis instead, which includes these instructions inline.
Compare metrics, findings, and patterns across two or more connected datasets. Triggered when users say "compare across datasets", "cross-dataset patterns", or invoke `/compare-datasets`.
Connect a new data source or list connected datasets. Triggers on "connect", "add data source", "link database", "/connect-data", "/datasets".
EMBEDDED — Validates data completeness and consistency before analysis. This functionality is built into the ask-question and run-analysis skills. Do NOT invoke data-quality-check separately — use ask-question and run-analysis instead, which include these instructions inline.
Deep-profile the active dataset to understand schema structure, value distributions, temporal patterns, correlations, completeness gaps, and anomalies. Triggered when users say "profile the data", "deep dive into dataset", or invoke `/deep-profile`.
Define any metric clearly and completely using a standardized template so there is no ambiguity about what is being measured, how it's calculated, or how to interpret it. Triggered when users say "define a metric", "specify metric", or invoke `/define-metric`.
Design a controlled experiment (A/B test, multivariate test, or quasi-experiment) with clear hypothesis, success metrics, sample size, and statistical power. Triggered when users say "design experiment", "A/B test design", "how should we test this", or invoke `/design-experiment`.
USE THIS SKILL when the user wants to explore, browse, preview, or understand their data before asking a specific question. Triggers on 'explore', 'browse data', 'what's in this dataset', 'show me the schema', 'what tables do I have', '/explore', '/data', or any request to look at the data structure, preview rows, check distributions, or understand what's available. Also use when a user just connected a new dataset and wants to see what's there. Do NOT skip this skill for data exploration — it includes quality checks and SWD chart standards that produce professional outputs.
Export and share analysis outputs. Triggers on "export", "share", "send to Slack", "make a deck", "/export".
EMBEDDED — Detects corrections and learnings from user messages. This functionality is built into the ask-question skill. Do NOT invoke feedback-capture separately — use ask-question instead, which includes these instructions inline.
EMBEDDED — Provides adaptive welcome experience for new users. This functionality is built into the setup skill. Do NOT invoke first-run-welcome separately — use setup instead, which includes these instructions inline.
Generate time-series forecasts for key metrics using statistical methods. Supports naive baselines, seasonality detection, and exponential smoothing. Triggered when users ask "what will revenue look like next month?", "forecast DAU", or invoke `/forecast`.
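The naive baseline and simple exponential smoothing mentioned here can be sketched in a few lines (the smoothing factor default is illustrative, not the skill's actual setting):

```python
def naive_forecast(values):
    """Naive baseline: next period looks like the last observed one."""
    return values[-1]

def ses_forecast(values, alpha=0.3):
    """Simple exponential smoothing: recent points weigh more in the level."""
    level = values[0]
    for v in values[1:]:
        level = alpha * v + (1 - alpha) * level
    return level  # one-step-ahead forecast
```

The naive forecast is the benchmark to beat: if a fancier method cannot outperform "tomorrow looks like today" on held-out data, the extra complexity is not earning its keep.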
EMBEDDED — Pairs success metrics with guardrail metrics to check for trade-offs. This functionality is built into the ask-question and run-analysis skills. Do NOT invoke guardrails separately — use ask-question and run-analysis instead, which include these instructions inline.
Check if npm is available and install Marp CLI globally for presentation generation. Triggered when users say "install marp", "set up presentations", "enable slide generation", or invoke `/install-marp`.
EMBEDDED — Loads knowledge subsystems and dataset context at session start. This functionality is built into the ask-question skill. Do NOT invoke knowledge-bootstrap separately — use ask-question instead, which includes these instructions inline.
Record analyst mistakes and their fixes so future analyses learn from past errors. Manual counterpart to automatic feedback capture. Triggered when users say "log a correction", "that was wrong because", or invoke `/log-correction`.
Browse, inspect, compare, and clean up past pipeline runs. Each run is a self-contained directory with its own working files, outputs, and pipeline state. Triggered when users say "/runs", "list runs", "compare runs", or invoke `/manage-runs`.
EMBEDDED — Discovers recurring patterns across analyses. This functionality is built into the run-analysis skill. Do NOT invoke patterns separately — use run-analysis instead, which includes these instructions inline.
EMBEDDED — Applies consistent theme standards to presentation decks. This functionality is built into the run-analysis skill. Do NOT invoke presentation-themes separately — use run-analysis instead, which includes these instructions inline.
EMBEDDED — Structures analytical questions with decision context and success criteria. This functionality is built into the ask-question and run-analysis skills. Do NOT invoke question-framing separately — use ask-question and run-analysis instead, which include these instructions inline.
EMBEDDED — Classifies question complexity (L1-L5) and routes to appropriate response path. This functionality is built into the ask-question skill. Do NOT invoke question-router separately — use ask-question instead, which includes these instructions inline.
Resume an interrupted analysis pipeline by reading pipeline state and continuing from the next ready agents. Triggered when users say "/resume-analysis", "continue previous analysis", or "resume the pipeline".
USE THIS SKILL for full end-to-end analytical pipelines, presentation decks, or deep investigations. Triggers when the user says 'run analysis', 'full pipeline', 'end-to-end', 'build me a deck', 'give me the full picture', 'comprehensive analysis', or any request for a polished slide deck with charts. Also use when ask-question classifies a question as L5. This skill orchestrates 18 specialized agents in a DAG pipeline — from framing through charting to a finished Marp deck. Do NOT attempt to build presentations or run multi-agent analysis workflows without this skill.
EMBEDDED — Runs 4-layer validation stack on analysis findings. This functionality is built into the ask-question and run-analysis skills. Do NOT invoke semantic-validation separately — use ask-question and run-analysis instead, which include these instructions inline.
USE THIS SKILL when a user wants to set up, configure, or get started with the AI Analyst. Triggers on 'set up', 'get started', 'configure', '/setup', 'onboard me', or any first-time setup request. Also use when the user opens a new session and hasn't configured their profile yet — if you detect no .knowledge/ directory or no profile.md, proactively suggest running setup. This skill runs a conversational 4-phase interview that configures the analytical environment: role & expertise, data connection, business context, and output preferences.
Estimate the business impact and financial value of a given opportunity. Invokes the opportunity-sizer agent to break down addressable market, conversion potential, and revenue impact. Triggered when users ask "how much is this worth?", "size the opportunity", "business impact analysis", or invoke `/size-opportunity`.
Adapt analytical findings to the audience — same insight, different framing, detail level, and format depending on who will read it. Triggered when users specify an audience or say "adapt for executives", "prepare for engineering", or invoke `/stakeholder-comms`.
Change the active dataset. Updates the active pointer, validates the target dataset exists, and confirms with a summary. Triggered when users say "switch to dataset", "change dataset", or invoke `/switch-dataset`.
EMBEDDED — Identifies missing data and produces instrumentation requests. This functionality is built into the ask-question and run-analysis skills. Do NOT invoke tracking-gaps separately — use ask-question and run-analysis instead, which include these instructions inline.
EMBEDDED — Cross-references findings against multiple data sources. This functionality is built into the ask-question and run-analysis skills. Do NOT invoke triangulation separately — use ask-question and run-analysis instead, which include these instructions inline.
Browse and search past analyses from the analysis archive. Helps users recall what they've analyzed before and find prior findings. Triggered when users say "/history", "what have I analyzed before?", or "show my analysis history".
Browse and manage the metric dictionary. Triggers on "metrics", "KPIs", "metric dictionary", "/metrics".
EMBEDDED — Ensures charts follow SWD design standards. This functionality is built into the ask-question, explore-data, and run-analysis skills. Do NOT invoke visualization-patterns separately — use ask-question, explore-data, and run-analysis instead, which include these instructions inline.
Ask a business question in plain English. Get validated findings, publication-quality charts, and a slide deck.
https://github.com/ai-analyst-lab/ai-analyst-plugin

That's it — the AI Analyst plugin is now available in your Cowork sessions.
First time? Say "help me set up" to connect your data and configure your preferences.
Try it: Just ask a question like "Which channel has the highest churn and why?" or "Show me MRR trends over time"
Write SQL, explore datasets, and generate insights faster. Build visualizations and dashboards, and turn raw data into clear stories for stakeholders.
Uses power tools
Uses Bash, Write, or Edit tools
Share bugs, ideas, or general feedback.
Data analytics skills for PMs: SQL query generation and cohort analysis. Analyze user data, generate queries, and identify retention patterns.
Analytics pipeline orchestrator covering instrumentation, modeling, and dashboards
Business analysis with data storytelling and KPI dashboard design
Use this agent when analyzing metrics, generating insights from data, creating performance reports, or making data-driven recommendations. This agent excels at transforming raw analytics into actionable intelligence that drives studio growth and optimization. Examples:

<example>
Context: Monthly performance review needed
Amplitude-powered analytics skills — analyze dashboards, charts, experiments, feedback, and account health with AI.