From the workflows plugin: subagent delegation for data analysis. Dispatches fresh Task agents with output-first verification.

Install: `npx claudepluginhub edwinhu/workflows --plugin workflows`

This skill uses the workspace's default tool permissions.
## The Iron Law of Delegation
YOU MUST route EVERY ANALYSIS STEP THROUGH A TASK AGENT. This is not negotiable.
You MUST NOT write analysis code in the main chat, run data operations yourself, or claim results without a Task agent's verified output.

If you're about to write analysis code in main chat, STOP. Spawn a Task agent instead.
Fresh subagent per task + output-first verification = reliable analysis
Called by ds-implement for each task in PLAN.md. Don't invoke directly.
For each task:
1. Dispatch analyst subagent
   - If questions → answer, re-dispatch
   - Implements with output-first protocol
2. Verify outputs are present and reasonable
3. Dispatch methodology reviewer (if complex)
4. Mark task complete, log to LEARNINGS.md
## Step 1: Dispatch the Task Agent

Each task in PLAN.md should have a `type` field. Detect and route accordingly:
| Task Type | Agent | Constraints | Example Tasks |
|---|---|---|---|
| `engineering` | workflows:ds-engineer | ds-engineering-constraints.md index + atomic E1-E5 files | ETL, merge, clean, transform, pipeline, schema, join |
| `analysis` | workflows:ds-analyst | ds-analysis-constraints.md index + atomic A1-A7 files | regression, test, model, visualize, estimate, summarize |
Detection heuristic (when type field is missing):
| Task contains these keywords | Type |
|---|---|
| merge, join, clean, ETL, transform, pipeline, ingest, schema, deduplicate, normalize | engineering |
| regression, estimate, test, model, plot, chart, visualize, summarize, correlate, panel | analysis |
| none of the above (ambiguous) | analysis (default; its constraints are stricter) |
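As a sketch, this heuristic is small enough to express directly (keyword sets copied from the table above; an explicit `type` field always wins):

```python
ENGINEERING_KEYWORDS = {"merge", "join", "clean", "etl", "transform", "pipeline",
                        "ingest", "schema", "deduplicate", "normalize"}
ANALYSIS_KEYWORDS = {"regression", "estimate", "test", "model", "plot", "chart",
                     "visualize", "summarize", "correlate", "panel"}

def detect_task_type(task_text: str, type_field: str | None = None) -> str:
    """Route a PLAN.md task to 'engineering' or 'analysis'."""
    if type_field in ("engineering", "analysis"):
        return type_field  # explicit type field always wins
    words = set(task_text.lower().split())
    eng_hits = len(words & ENGINEERING_KEYWORDS)
    ana_hits = len(words & ANALYSIS_KEYWORDS)
    if eng_hits > ana_hits:
        return "engineering"
    # Ties and no-match cases default to analysis (stricter constraints)
    return "analysis"
```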
Pattern: Use structured delegation template from references/delegation-template.md
Every delegation MUST include: the expected outcome, required skills, required tools, MUST DO / MUST NOT DO constraints, and full task context (the sections of the templates below).
Use this Task invocation (fill in brackets). Route based on task type detected above:
All paths below are relative to this skill's base directory.
**For `analysis` tasks:**
Task(subagent_type="workflows:ds-analyst", prompt="""
# TASK
Analyze: [TASK NAME]
## EXPECTED OUTCOME
You will have successfully completed this task when:
- [ ] [Specific analysis output 1]
- [ ] [Specific analysis output 2]
- [ ] Output-first verification at each step
- [ ] Results documented with evidence
## REQUIRED SKILLS
This task requires:
- [Statistical method]: [Why needed]
- [Programming language]: Data manipulation
- Output-first verification (mandatory)
- SQL reference: Read `../ds-delegate/references/sql-patterns.md` for dialect-specific patterns
- Data quality checks: Read `../ds-implement/references/ds-checks.md` for DQ1-DQ6 verification patterns (mandatory)
- Analysis constraints: Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-analysis-constraints.md` for the constraint index, then load:
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-robustness-checks.md`
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-standard-error-spec.md`
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-visualization-integrity.md`
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-table-figure-pairing.md`
- Analysis conventions: Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-common-conventions.md` for the convention index, then load:
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-statistical-validity.md`
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-p-hacking-prevention.md`
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-sample-selection.md`
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-deviation-rules-analysis.md`
## REQUIRED TOOLS
You will need:
- Read: Load datasets and existing code
- Write: Create analysis scripts/notebooks
- Bash: Run analysis and verify outputs
**Tools denied:** None (full analysis access)
## MUST DO
- [ ] Print state BEFORE each operation (shape, head)
- [ ] Print state AFTER each operation (nulls, sample)
- [ ] Verify outputs are reasonable at each step
- [ ] Document methodology decisions
## MUST NOT DO
- ❌ Skip verification outputs
- ❌ Proceed with questionable data without flagging
- ❌ Guess on methodology (ask if unclear)
- ❌ Claim completion without visible outputs
## CONTEXT
### Task Description
[PASTE FULL TASK TEXT FROM PLAN.md]
### Analysis Context
- Analysis objective: [from SPEC.md]
- Data sources: [list with paths]
- Previous steps: [summary from LEARNINGS.md]
## Output-First Protocol (MANDATORY)
For EVERY operation:
1. Print state BEFORE (shape, head)
2. Execute operation
3. Print state AFTER (shape, nulls, sample)
4. Verify output is reasonable
Example:
```python
print(f"Before: {df.shape}")
df = df.merge(other, on='key')
print(f"After: {df.shape}")
print(f"Nulls introduced: {df.isnull().sum().sum()}")
df.head()
```
| Operation | Required Output |
|---|---|
| Load data | shape, dtypes, head() |
| Filter | shape before/after, % removed |
| Merge/Join | shape, null check, sample |
| Groupby | result shape, sample groups |
| Model fit | metrics, convergence |
Ask questions BEFORE implementing. Don't guess on methodology.
Report: what you did, key outputs observed, any data quality issues found.
""")
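The merge example in the template generalizes to the other required outputs. Here is a minimal sketch of the same protocol for groupby and model-fit steps, using synthetic data and statsmodels purely as an illustration (none of the column names are prescribed by this skill):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({                      # synthetic stand-in for the task's data
    "segment": rng.choice(["a", "b", "c"], 500),
    "amount": rng.gamma(2.0, 25.0, 500),
    "tenure": rng.integers(1, 60, 500),
    "churned": rng.integers(0, 2, 500),
})

# Groupby: print result shape and sample groups
grouped = df.groupby("segment")["amount"].agg(["mean", "count"])
print(f"Groupby result: {grouped.shape}")
print(grouped.head())

# Model fit: print metrics and convergence, not just "done"
X = sm.add_constant(df[["tenure", "amount"]].astype(float))
logit = sm.Logit(df["churned"], X).fit(disp=False)
print(f"Converged: {logit.mle_retvals['converged']}")
print(f"Pseudo R-squared: {logit.prsquared:.3f}, N = {int(logit.nobs)}")
```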
**For `engineering` tasks:**
Task(subagent_type="workflows:ds-engineer", prompt="""
# TASK
Engineer: [TASK NAME]
## EXPECTED OUTCOME
You will have successfully completed this task when:
- [ ] [Specific engineering output 1]
- [ ] [Specific engineering output 2]
- [ ] Output-first verification at each step
- [ ] Results documented with evidence
## REQUIRED SKILLS
This task requires:
- [Data engineering technique]: [Why needed]
- [Programming language]: Data manipulation
- Output-first verification (mandatory)
- SQL reference: Read `../ds-delegate/references/sql-patterns.md` for dialect-specific patterns
- Data quality checks: Read `../ds-implement/references/ds-checks.md` for DQ1-DQ6 verification patterns (mandatory)
- Engineering constraints: Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-engineering-constraints.md` for the constraint index, then load:
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-determinism.md`
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-schema-contracts.md`
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-join-audits.md`
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-idempotency.md`
Read `${CLAUDE_SKILL_DIR}/../../references/constraints/ds-error-handling.md`
## REQUIRED TOOLS
You will need:
- Read: Load datasets and existing pipeline code
- Write: Create ETL scripts and pipelines
- Bash: Run pipelines and verify outputs
**Tools denied:** None (full engineering access)
## CONTEXT
### Task Description
[PASTE FULL TASK TEXT FROM PLAN.md]
### Engineering Context
- Pipeline objective: [from SPEC.md]
- Data sources: [list with paths]
- Previous steps: [summary from LEARNINGS.md]
## Output-First Protocol (MANDATORY)
For EVERY operation:
1. Print state BEFORE (shape, head)
2. Execute operation
3. Print state AFTER (shape, nulls, sample)
4. Verify output is reasonable
Example:
```python
print(f"Before: {df.shape}")
df = df.merge(other, on='key')
print(f"After: {df.shape}")
print(f"Nulls introduced: {df.isnull().sum().sum()}")
df.head()
```
| Operation | Required Output |
|---|---|
| Load data | shape, dtypes, head() |
| Filter | shape before/after, % removed |
| Merge/Join | shape, null check, key uniqueness |
| Transform | before/after sample, determinism check |
| Pipeline step | input shape → output shape, schema validation |
Ask questions BEFORE implementing. Don't guess on architecture.
Report: what you did, key outputs observed, any data quality or schema issues found.
""")
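The determinism check and schema validation named in the table might look like this, as a sketch with an illustrative contract and a toy stand-in for the real pipeline step:

```python
import hashlib
import pandas as pd

def run_step() -> pd.DataFrame:
    """Stand-in for a real pipeline step (hypothetical)."""
    return pd.DataFrame({"customer_id": [1, 2, 3], "amount": [10.0, 20.5, 7.25]})

def frame_digest(df: pd.DataFrame) -> str:
    """Stable content hash for a determinism check: same input -> same digest."""
    canonical = df.sort_index(axis=1).sort_values(list(df.columns)).reset_index(drop=True)
    return hashlib.sha256(
        pd.util.hash_pandas_object(canonical, index=False).values.tobytes()
    ).hexdigest()

EXPECTED_SCHEMA = {"customer_id": "int64", "amount": "float64"}  # illustrative contract

def validate_schema(df: pd.DataFrame) -> None:
    """Fail loudly if columns or dtypes drift from the contract."""
    for col, dtype in EXPECTED_SCHEMA.items():
        assert col in df.columns, f"missing column: {col}"
        assert str(df[col].dtype) == dtype, f"{col}: {df[col].dtype} != {dtype}"

# Determinism check: run the step twice, digests must match
assert frame_digest(run_step()) == frame_digest(run_step()), "non-deterministic output"
validate_schema(run_step())
```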
**If agent asks questions:** Answer clearly, especially about methodology choices (analysis) or architecture decisions (engineering).
**If agent completes task:** Verify outputs, then proceed or review.
## Step 2: Verify Outputs (Post-Subagent Boundary)
<EXTREMELY-IMPORTANT>
**After the analyst returns, you are at the post-subagent boundary. Constraint C5 from ds-common-constraints.md applies.**
**ALLOWED (Verification):**
- [ ] Read the analyst's returned report/summary
- [ ] Check LEARNINGS.md for output documentation
- [ ] Confirm output files exist (`ls -la`)
- [ ] Compare task counts (expected vs actual)
**FORBIDDEN (Investigation):**
- ❌ Read project source code, notebooks, or data files
- ❌ Run analysis code to "confirm" results
- ❌ Query databases or inspect intermediate files
- ❌ Grep/Glob project files
**If the analyst's report shows problems, re-dispatch a Task agent. Do NOT investigate yourself.**
</EXTREMELY-IMPORTANT>
Upon verification failure, re-dispatch analyst with specific fix instructions.
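Expressed in Python for illustration, the allowed boundary checks stay at the file-and-count level (the output paths below are placeholders; the expected list comes from PLAN.md):

```python
from pathlib import Path

# Allowed: confirm expected output files exist (no reading of their contents)
expected_outputs = [Path("outputs/clean_transactions.parquet"),   # placeholder paths
                    Path("outputs/summary_stats.csv")]
missing = [p for p in expected_outputs if not p.exists()]
if missing:
    print(f"Verification FAILED; re-dispatch analyst. Missing: {missing}")

# Allowed: compare task counts (expected from PLAN.md vs logged in LEARNINGS.md)
learnings = Path(".planning/LEARNINGS.md")
if learnings.exists():
    completed = learnings.read_text().count("- COMPLETE")
    print(f"Tasks logged complete: {completed}")
```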
## Step 3: Dispatch Methodology Reviewer (Complex Tasks)
For statistical analysis, modeling, or methodology-sensitive tasks, dispatch a methodology reviewer. **Tailor the review checklist to the task type:**
Task(subagent_type="general-purpose", allowed_tools=["Read", "Glob", "Grep", "Bash(read-only)"], prompt=""" Review methodology for: [TASK NAME] Task type: [engineering | analysis]
What was done:
[SUMMARY FROM ANALYST/ENGINEER OUTPUT]
Requirements:
[FROM SPEC.md - especially any replication requirements]
Tool Restrictions: The methodology reviewer is READ-ONLY. It reads code, verifies outputs, and returns a verdict. It MUST NOT use Write or Edit.
The agent may have made errors, skipped steps, or misreported results.
DO: read the actual code and outputs (not just the summary), check each claim against evidence, and rate your confidence in every issue you find.
Use this checklist when task type is engineering:
- [ ] Determinism: same inputs yield the same outputs
- [ ] Schema contracts validated at step boundaries
- [ ] Join audits: key uniqueness and row counts checked
- [ ] Idempotency: re-running does not duplicate or corrupt data
- [ ] Error handling: failures surface rather than pass silently
Use this checklist when task type is analysis:
- [ ] Statistical validity: methods match the data and question
- [ ] Standard errors specified and justified
- [ ] Robustness checks performed
- [ ] No p-hacking: specification was not chosen after seeing results
- [ ] Sample selection documented and defensible
- [ ] Visualization integrity and table/figure pairing
Rate each issue 0-100. Only report issues >= 80 confidence.
""")
## Step 4: Log to LEARNINGS.md
Append to `.planning/LEARNINGS.md` after each task:
```markdown
## Task N: [Name] - COMPLETE
**Input:** [describe input state]
**Operation:** [what was done]
**Output:**
- Shape: [final shape]
- Key findings: [observations]
**Verification:**
- [how you confirmed it worked]
**Next:** [what comes next]
```
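Appending an entry can be as simple as the sketch below; the sample values are taken from the worked example later in this skill:

```python
from pathlib import Path

# Sample values come from the worked example later in this document
entry = """
## Task 1: Load and clean transaction data - COMPLETE
**Input:** transactions.csv, shape (50000, 12), 5% nulls in amount
**Operation:** median imputation of null amounts; added is_imputed flag
**Output:**
- Shape: (50000, 13)
- Key findings: 2,500 rows imputed with median ($45.50)
**Verification:**
- Checked shape, flag column, and sample head in analyst output
**Next:** Task 2
"""
with Path(".planning/LEARNINGS.md").open("a") as f:
    f.write(entry)
```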
Checkpoint type: none (task completion is machine-verifiable via the gate below)
Before marking any task as complete, execute this gate:
1. IDENTIFY → What proves this task is done?
- Task agent returned output (not just "done")
- Output matches PLAN.md expected output for this task
2. RUN → Read the agent's actual output (not just the summary)
3. READ → Verify: shapes reasonable? No unexpected nulls? Sample looks correct?
4. VERIFY → If statistical task: methodology reviewer approved
5. CLAIM → Only log "Task N: COMPLETE" in LEARNINGS.md if ALL checks pass
If agent returned no visible output, this gate FAILS. Re-dispatch with explicit output requirements.
Skipping output verification is NOT HELPFUL — unverified results lead the user to act on wrong analysis.
When you say "Step complete", you are asserting that the agent returned visible output, that you read it, that the outputs passed verification, and that any required methodology review approved.
If ANY of these didn't happen, you are not "summarizing"; you are being anti-helpful by giving the user false confidence in unverified work.
Unverified claims waste the user's time and corrupt their research. Verified "investigating" protects their work.
Recognize these thoughts as signals to stop and delegate instead:
| Excuse | Reality | Do Instead |
|---|---|---|
| "I'll just check the shape quickly" | You'll skip the output-first protocol | Delegate to Task agent with full verification |
| "It's just a simple merge" | Your merges fail silently | Delegate with verification requirements |
| "I already know this data" | Your knowing ≠ verified | Delegate anyway with output-first protocol |
| "The subagent will be slower" | Wrong results are slower than slow results | Delegate — correctness beats speed |
| "Just this one plot" | You're hiding data issues with one plot | Delegate with full output requirements |
| "User wants results fast" | They want CORRECT results | Delegate — optimize for correctness, not speed |
| "Skip methodology review, it's standard" | Your "standard" assumptions often fail | Dispatch methodology reviewer anyway |
| "Output looked reasonable" | "Looked reasonable" ≠ verified | Check the actual numbers against expectations |
| Shortcut | Consequence |
|---|---|
| Delegating without context | You spawned a task agent without SPEC/PLAN context. It guesses wrong — your delegation created confusion. |
| Skipping verification of agent output | You trusted the agent's claim of completion. The output is wrong — your trust was negligence. |
If you wrote analysis code in the main chat instead of delegating to a task agent, DELETE it immediately and dispatch a Task agent.
Code written in main chat is contaminated by orchestrator context, skips the output-first protocol, and bypasses methodology review. It cannot be salvaged — it must be replaced.
If you catch yourself thinking any of the excuses above, STOP immediately and delegate.

When the analyst produces no visible output: the gate fails. Re-dispatch with explicit output requirements.

When the analyst fails a task: answer its questions or give specific fix instructions and re-dispatch. Do not investigate or patch it yourself in main chat.
Example:

Me: Implementing Task 1: Load and clean transaction data
[Dispatch analyst with full task text]
Analyst:
- Loaded transactions.csv: (50000, 12)
- Found 5% nulls in amount column
- "Should I drop or impute nulls?"
Me: "Impute with median, flag imputed rows"
[Re-dispatch with answer]
Analyst:
- Imputed 2,500 rows with median ($45.50)
- Added is_imputed flag column
- Final shape: (50000, 13)
- Sample output: [shows head with flag]
[Verify: shapes match, flag exists, no unexpected changes]
[Log to LEARNINGS.md]
[Mark Task 1 complete, move to Task 2]
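For reference, the imputation step the analyst performed might look like the following sketch (file and column names come from the walkthrough above):

```python
import pandas as pd

df = pd.read_csv("transactions.csv")              # file name from the walkthrough
print(f"Loaded: {df.shape}")                      # Before: (50000, 12)

null_mask = df["amount"].isnull()
print(f"Null amounts: {null_mask.sum()} ({null_mask.mean():.1%})")

median = df["amount"].median()
df["is_imputed"] = null_mask                      # flag imputed rows, per instruction
df["amount"] = df["amount"].fillna(median)
print(f"Imputed {null_mask.sum()} rows with median (${median:.2f})")
print(f"After: {df.shape}")                       # (50000, 13)
print(df.head())
```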
When dispatching subagents, match model capability to task complexity. This is advisory (Claude Code doesn't yet support model routing) but documents intent for cost-aware delegation.
| Task Complexity | Model Tier | Signals | Example |
|---|---|---|---|
| Mechanical | Cheapest capable | Data loading, simple filtering, descriptive stats, file format conversion | "Load CSV and compute summary statistics" |
| Integration | Standard | Merges/joins across sources, aggregations, visualization, data reshaping | "Merge transaction and customer tables, create pivot summary" |
| Architecture/Review | Most capable | Feature engineering strategy, model selection, statistical assumption validation, methodology review | "Select appropriate model family and validate distributional assumptions" |
Complexity signals are listed in the Signals column of the table above. When in doubt, use the standard tier: over-allocating is wasteful, under-allocating produces poor results. A sketch of this advisory routing follows.
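Since Claude Code doesn't honor a model hint yet, the routing intent can still be recorded as a simple advisory lookup. A hypothetical sketch (tier names are placeholders, signals abridged from the table):

```python
# Advisory only: Claude Code does not yet honor a model-tier hint.
TIER_SIGNALS = {
    "most_capable": {"feature engineering", "model selection", "assumption", "methodology"},
    "standard": {"merge", "join", "aggregate", "visualize", "reshape", "pivot"},
}

def suggest_tier(task_text: str) -> str:
    text = task_text.lower()
    for tier in ("most_capable", "standard"):  # check strongest signals first
        if any(signal in text for signal in TIER_SIGNALS[tier]):
            return tier
    return "cheapest_capable"  # mechanical work: loading, filtering, summary stats

print(suggest_tier("Merge transaction and customer tables, create pivot summary"))
# -> standard
```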
This skill is invoked by ds-implement during the output-first implementation phase.
After all tasks complete, ds-implement proceeds to ds-review.