# ds-brainstorm

*From the `workflows` plugin.*
This skill should be used when the user asks to 'start data analysis', 'brainstorm analysis approach', 'plan a data project', 'clarify analysis requirements', or needs the full 5-phase data science workflow with output-first verification.
Install: `npx claudepluginhub edwinhu/workflows --plugin workflows`

This skill is limited to using a restricted tool set.

- [The Iron Law of DS Brainstorming](#the-iron-law-of-ds-brainstorming)
Before starting, check for an existing handoff at `.planning/HANDOFF.md`.

| Level | Remaining Context | Action |
|---|---|---|
| Normal | >35% | Proceed normally |
| Warning | 25-35% | Complete current question round, then trigger ds-handoff |
| Critical | ≤25% | Immediately trigger ds-handoff — do not start new question rounds |
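The thresholds above can be sketched as a small helper. This is a hypothetical sketch: how remaining context is actually measured is up to the session, and is assumed here to arrive as a plain percentage.

```python
def handoff_action(remaining_pct: float) -> str:
    """Map remaining context (as a percentage) to the handoff action."""
    if remaining_pct > 35:
        return "proceed"              # Normal: proceed normally
    if remaining_pct > 25:
        return "finish-then-handoff"  # Warning: complete current round, then ds-handoff
    return "handoff-now"              # Critical: trigger ds-handoff immediately

print(handoff_action(50))  # proceed
print(handoff_action(30))  # finish-then-handoff
print(handoff_action(25))  # handoff-now (25% falls in the Critical band)
```

Note that 35% lands in the Warning band and 25% in the Critical band, matching the table's inclusive/exclusive bounds.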
Refine vague analysis requests into clear objectives through Socratic questioning. NO data exploration, NO coding - just questions and objectives.
Load shared enforcement first.
Auto-load all constraints matching applies-to: ds:
```
!uv run python3 ${CLAUDE_SKILL_DIR}/../../scripts/load-constraints.py ds
```
You MUST have these constraints loaded before proceeding. No claiming you "remember" them.
## The Iron Law of DS Brainstorming

ASK QUESTIONS BEFORE ANYTHING ELSE. This is not negotiable.
Before loading data, before exploring, before proposing approaches, you MUST:
STOP - You're about to load data or explore before asking questions. Don't do this.

| DO | DON'T |
|---|---|
| Ask clarifying questions | Load or explore data |
| Understand analysis objectives | Run queries |
| Identify data sources | Profile data (that's /ds-plan) |
| Define success criteria | Create visualizations |
| Ask about constraints | Write analysis code |
| Check if replicating existing analysis | Propose specific methodology |
Brainstorm answers WHAT and WHY. Plan (a separate skill) answers HOW (data profile + tasks).
Employ AskUserQuestion immediately:
When multiple analysis questions arise, batch them into ONE AskUserQuestion call:
Batched (fast — 1 round-trip):

```python
AskUserQuestion(questions=[
    {"question": "Primary dataset?", "options": [{"label": "CRSP"}, {"label": "Compustat"}, {"label": "Both merged"}]},
    {"question": "Sample period?", "options": [{"label": "2000-2024"}, {"label": "2010-2024"}, {"label": "Custom"}]},
    {"question": "Frequency?", "options": [{"label": "Monthly"}, {"label": "Quarterly"}, {"label": "Annual"}]}
])
```
When to batch: after understanding the research question, if 3+ independent questions arise, batch them.

When NOT to batch: if a question's answer changes what other questions to ask (e.g., dataset choice affects available variables).
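The batching rule can be sketched as follows. The `Question` shape and its `depends_on` field are hypothetical; `depends_on` stands in for "this question's wording depends on an earlier answer", which is the don't-batch case above.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    depends_on: list = field(default_factory=list)  # questions whose answers this one needs

def should_batch(questions) -> bool:
    """Batch into ONE AskUserQuestion call only when 3+ questions are all independent."""
    independent = [q for q in questions if not q.depends_on]
    return len(independent) >= 3 and len(independent) == len(questions)

qs = [Question("Primary dataset?"), Question("Sample period?"), Question("Frequency?")]
print(should_batch(qs))  # True: three independent questions, one round-trip
```

If any question depends on another's answer, the function returns False and the questions go out sequentially.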
CRITICAL: Ask early if replicating existing work:

```yaml
AskUserQuestion:
  question: "Are you replicating or extending existing analysis?"
  options:
    - label: "Replicating existing"
      description: "Must match specific methodology/results"
    - label: "Extending existing"
      description: "Building on prior work with modifications"
    - label: "New analysis"
      description: "Fresh analysis, methodology flexible"
```
When replicating:
After objectives are clear, use AskUserQuestion for the user to select the preferred approach. After selecting an approach, write `.planning/SPEC.md`:

````markdown
# Spec: [Analysis Name]

> **For Claude:** After writing this spec, discover and load the ds-plan skill for Phase 2:
> Read `${CLAUDE_SKILL_DIR}/../../skills/ds-plan/SKILL.md` and follow its instructions.

## Objective

[What question this analysis answers]

## Data Sources

- [Source 1]: [location, format, time period]
- [Source 2]: [location, format, time period]

## Requirements

Assign each requirement a unique ID using `CATEGORY-NN` format (e.g., `DATA-01`, `VIZ-02`, `STAT-03`). Categories come from natural groupings in the analysis.

| ID | Requirement | Scope |
|----|-------------|-------|
| [CAT-01] | [Requirement 1] | v1 |
| [CAT-02] | [Requirement 2] | v1 |

Scope: `v1` = must complete, `v2` = nice to have, `out-of-scope` = explicitly excluded.

## Success Criteria

- [ ] [CAT-01] [Criterion]
- [ ] [CAT-02] [Criterion]

## Constraints

- Replication: [yes/no - if yes, reference source]
- Timeline: [deadline]
- Methodology: [required approaches]

## Chosen Approach

[Description of selected approach]

## External Skills Likely In Play

<!-- List plugin skills whose data/tools will be touched. ds-plan Step 5b will Glob their references/ and examples/ before drafting tasks. -->

- [e.g. wrds — holdings/voting data via SAS on WRDS grid]
- [e.g. gemini-batch — LLM extraction for text fields]
- [none]

## Rejected Alternatives

- Option B: [why rejected]
- Option C: [why rejected]
````
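The `CATEGORY-NN` ID convention can be checked mechanically. A minimal sketch, assuming an uppercase category of two or more letters and a two-digit number, as in the examples above:

```python
import re

# CATEGORY-NN: uppercase category (2+ letters), hyphen, two digits
ID_PATTERN = re.compile(r"^[A-Z]{2,}-\d{2}$")

def valid_requirement_id(req_id: str) -> bool:
    """True if req_id follows the CATEGORY-NN convention (e.g., DATA-01)."""
    return bool(ID_PATTERN.match(req_id))

print(valid_requirement_id("DATA-01"))  # True
print(valid_requirement_id("viz-2"))    # False: lowercase category, one digit
```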
Skipping the user interview is NOT HELPFUL: pattern-matching from similar-sounding requests produces wrong objectives and wastes the entire analysis.

| Shortcut | Consequence |
|---|---|
| Skipping user interview | You skipped questions because you thought it was faster. Wrong objectives mean the entire analysis is wasted — you were anti-helpful. |
| Not gathering sources | You assumed you knew the data. Your assumptions produce wrong results — your confidence was negligence. |

| Excuse | Reality | Do Instead |
|---|---|---|
| "I already know what analysis is needed" | You're pattern-matching from similar-sounding requests, not understanding THIS one | Ask questions first |
| "The data will tell me what to do" | Data exploration without objectives is aimless — you'll profile everything and answer nothing | Define objectives first |
| "User seems impatient, skip to analysis" | Wrong results from skipped brainstorm waste more time than 3 questions | Ask the questions |
| "The request is clear enough" | Clear to YOU is not clear to the user — your assumptions ≠ their intent | Confirm with AskUserQuestion |
| "I'll refine objectives as I go" | You'll commit to an approach and rationalize the objective to fit | Lock objectives before exploring |

| Action | Why It's Wrong | Do Instead |
|---|---|---|
| Loading data | You're exploring before understanding goals | Ask what the user wants to learn |
| Running describe() | You're profiling data when that's for /ds-plan | Finish defining objectives first |
| Proposing specific models | You're jumping to HOW before clarifying WHAT | Define success criteria first |
| Creating task lists | You're planning before objectives are clear | Complete brainstorm first |
| Skipping replication question | You might miss critical methodology constraints | Always ask about replication upfront |
Checkpoint type: human-verify (SPEC.md structure is machine-checkable, but objective sign-off must come from the user).
Before transitioning to ds-plan, execute this gate:
1. IDENTIFY → SPEC.md exists at `.planning/SPEC.md`
2. RUN → Read(".planning/SPEC.md")
3. READ → Verify it contains: Objectives, Data Sources, Requirements (with CATEGORY-NN IDs), Success Criteria sections
4. VERIFY → User has confirmed the objectives via AskUserQuestion response (not agent self-assessment).
Check: was AskUserQuestion called and did user respond affirmatively?
5. CLAIM → Only proceed to ds-plan if ALL checks pass
If ANY check fails, do NOT proceed. Fix the gap first.
Self-assessment is not user confirmation. If the user hasn't explicitly approved the objectives via AskUserQuestion, you haven't finished brainstorm.
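Steps 1-3 of the gate are machine-checkable. A minimal sketch, assuming the section names match the SPEC template headings exactly; step 4 (user confirmation via AskUserQuestion) has no mechanical equivalent and stays with the agent:

```python
from pathlib import Path

REQUIRED_SECTIONS = ["## Objective", "## Data Sources", "## Requirements", "## Success Criteria"]

def spec_gate(path: str = ".planning/SPEC.md") -> list:
    """Return a list of gate failures; an empty list means the machine checks pass."""
    spec = Path(path)
    if not spec.exists():
        return [f"missing file: {path}"]   # step 1: IDENTIFY fails
    text = spec.read_text()                # step 2: RUN (read the file)
    # step 3: READ (verify required sections are present)
    return [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in text]
```

If `spec_gate()` returns a non-empty list, do not proceed to ds-plan; fix the gaps first.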
Declare brainstorm complete when `.planning/SPEC.md` is written.

This skill is Phase 1 of the 5-phase /ds workflow:
```
┌──────────────┐     ┌──────────┐     ┌──────────────┐     ┌───────────┐     ┌───────────┐
│ ds-brainstorm│───→│ ds-plan  │───→│ ds-implement │───→│ ds-review │───→│ ds-verify │
│   SPEC.md    │     │ PLAN.md  │     │ LEARNINGS.md │     │ APPROVED? │     │ COMPLETE? │
└──────────────┘     └──────────┘     └──────────────┘     └─────┬─────┘     └─────┬─────┘
                          ↑                                       │                 │
                          └── CHANGES REQ'D ───┘                 │
                          ↑                                                         │
                          └──── NEEDS WORK ────────────────────┘
```

| Thought | Reality |
|---|---|
| "Should I ask if they want to continue?" | User already confirmed objectives. Asking again is stalling. |
| "Let me summarize what we agreed on" | SPEC.md IS the summary. Repeating it wastes context. |
| "Natural stopping point" | The workflow is sequential. Brainstorm done = plan starts. No gap. |
Your pause is procrastination disguised as courtesy. The user confirmed — move.
After writing SPEC.md, update it with structured frontmatter:
```yaml
---
phase: ds-brainstorm
status: completed
implements: [all requirement IDs assigned in this phase]
requires: [user input]
provides: [.planning/SPEC.md]
affects: [.planning/]
tags: [brainstorm, objectives, requirements]
---
```
One-liner rule: Must be SUBSTANTIVE. Good: "Panel regression study of CEO pay-performance sensitivity using CRSP-Compustat 2000-2024". Bad: "Brainstorm complete".
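The frontmatter update can be validated with a quick sketch. Parsing is hand-rolled to stay dependency-free, and assumes the frontmatter is the leading `---`-delimited block at the top of the file:

```python
REQUIRED_KEYS = {"phase", "status", "implements", "requires", "provides", "affects", "tags"}

def frontmatter_keys(text: str) -> set:
    """Extract top-level keys from a leading ----delimited frontmatter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return set()                       # no frontmatter block at all
    keys = set()
    for line in lines[1:]:
        if line.strip() == "---":
            break                          # closing delimiter ends the block
        if ":" in line and not line.startswith((" ", "\t")):
            keys.add(line.split(":", 1)[0].strip())
    return keys

doc = "---\nphase: ds-brainstorm\nstatus: completed\n---\n# Spec"
missing = REQUIRED_KEYS - frontmatter_keys(doc)
print(sorted(missing))  # keys still to fill in before the phase is done
```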
After completing brainstorm, dispatch the spec reviewer before proceeding:
```
Phase 1: Brainstorm -> SPEC.md written
  -> Dispatch ds-spec-reviewer subagent
  -> If APPROVED -> proceed to ds-plan
  -> If ISSUES_FOUND -> fix SPEC.md -> re-dispatch reviewer (max 5 iterations)
```
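The review loop above is a bounded retry, sketched here with hypothetical `dispatch_reviewer` and `fix_spec` stand-ins for the subagent dispatch and the fix step:

```python
MAX_ITERATIONS = 5

def review_spec(dispatch_reviewer, fix_spec) -> bool:
    """Dispatch ds-spec-reviewer; fix and re-dispatch on ISSUES_FOUND, up to 5 times."""
    for _ in range(MAX_ITERATIONS):
        verdict, issues = dispatch_reviewer()
        if verdict == "APPROVED":
            return True            # proceed to ds-plan
        fix_spec(issues)           # address reviewer findings, then retry
    return False                   # cap reached without approval: escalate to the user
```

The hard cap prevents an unbounded fix/re-review loop; hitting it means the spec disagreement needs the user, not another iteration.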
Step 1: Discover and load the spec reviewer skill:

Read `${CLAUDE_SKILL_DIR}/../../skills/ds-spec-reviewer/SKILL.md` and follow its instructions.

Step 2: Only after the reviewer returns APPROVED, discover and load the next phase:

Read `${CLAUDE_SKILL_DIR}/../../skills/ds-plan/SKILL.md` and follow its instructions.

Fallback (if Read fails): run `/ds-plan`.
CRITICAL: Do not skip to analysis implementation. Phase 2 profiles data and breaks down the analysis into discrete, manageable tasks.

CRITICAL: Do not skip spec review. An unreviewed spec means profiling the wrong data and planning the wrong analysis.