Subagent delegation for data analysis. Dispatches fresh Task agents with output-first verification.
/plugin marketplace add edwinhu/workflows
/plugin install workflows@edwinhu-plugins

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Announce: "I'm using ds-delegate to dispatch analysis subagents."
<EXTREMELY-IMPORTANT>
EVERY ANALYSIS STEP MUST GO THROUGH A TASK AGENT. This is not negotiable.
Main chat MUST NOT:
- Write or run analysis code
- Load, filter, merge, or transform data directly
- Produce plots or summary statistics

If you're about to write analysis code in main chat, STOP. Spawn a Task agent instead.
</EXTREMELY-IMPORTANT>
Fresh subagent per task + output-first verification = reliable analysis
Called by ds-implement for each task in PLAN.md. Don't invoke directly.
For each task:
1. Dispatch analyst subagent
- If questions → answer, re-dispatch
- Implements with output-first protocol
2. Verify outputs are present and reasonable
3. Dispatch methodology reviewer (if complex)
4. Mark task complete, log to LEARNINGS.md
Pattern: Use structured delegation template from common/templates/delegation-template.md
Every delegation MUST include: an expected-outcome checklist, required skills and tools, MUST DO / MUST NOT rules, and full task context.
Use this Task invocation (fill in brackets):
Task(subagent_type="general-purpose", prompt="""
# TASK
Analyze: [TASK NAME]
## EXPECTED OUTCOME
You will have successfully completed this task when:
- [ ] [Specific analysis output 1]
- [ ] [Specific analysis output 2]
- [ ] Output-first verification at each step
- [ ] Results documented with evidence
## REQUIRED SKILLS
This task requires:
- [Statistical method]: [Why needed]
- [Programming language]: Data manipulation
- Output-first verification (mandatory)
## REQUIRED TOOLS
You will need:
- Read: Load datasets and existing code
- Write: Create analysis scripts/notebooks
- Bash: Run analysis and verify outputs
**Tools denied:** None (full analysis access)
## MUST DO
- [ ] Print state BEFORE each operation (shape, head)
- [ ] Print state AFTER each operation (nulls, sample)
- [ ] Verify outputs are reasonable at each step
- [ ] Document methodology decisions
## MUST NOT DO
- ❌ Skip verification outputs
- ❌ Proceed with questionable data without flagging
- ❌ Guess on methodology (ask if unclear)
- ❌ Claim completion without visible outputs
## CONTEXT
### Task Description
[PASTE FULL TASK TEXT FROM PLAN.md]
### Analysis Context
- Analysis objective: [from SPEC.md]
- Data sources: [list with paths]
- Previous steps: [summary from LEARNINGS.md]
## Output-First Protocol (MANDATORY)
For EVERY operation:
1. Print state BEFORE (shape, head)
2. Execute operation
3. Print state AFTER (shape, nulls, sample)
4. Verify output is reasonable
Example:
```python
# Print state BEFORE the operation
print(f"Before: {df.shape}")
df = df.merge(other, on='key')
# Print state AFTER: shape change plus any nulls the merge introduced
print(f"After: {df.shape}")
print(f"Nulls introduced: {df.isnull().sum().sum()}")
df.head()
```
| Operation | Required Output |
|---|---|
| Load data | shape, dtypes, head() |
| Filter | shape before/after, % removed |
| Merge/Join | shape, null check, sample |
| Groupby | result shape, sample groups |
| Model fit | metrics, convergence |
Ask questions BEFORE implementing. Don't guess on methodology.
Report: what you did, key outputs observed, any data quality issues found. """)
**If analyst asks questions:** Answer clearly, especially about methodology choices.
**If analyst finishes:** Verify outputs, then proceed or review.
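For reference, a minimal pandas sketch of the required outputs from the template's operation table; the file, column, and filter names are illustrative, not from any actual plan:

```python
import pandas as pd

# Load data: shape, dtypes, head()
df = pd.read_csv("transactions.csv")  # hypothetical path
print(df.shape)
print(df.dtypes)
print(df.head())

# Filter: shape before/after, % removed
before = len(df)
df = df[df["amount"] > 0]  # hypothetical filter
print(f"Filter: {before} -> {len(df)} rows "
      f"({100 * (before - len(df)) / before:.1f}% removed)")

# Merge/Join: shape, null check, sample
customers = pd.read_csv("customers.csv")  # hypothetical path
merged = df.merge(customers, on="customer_id", how="left")
print(f"Merge: {df.shape} -> {merged.shape}, "
      f"nulls introduced: {merged.isnull().sum().sum()}")
print(merged.sample(5))

# Groupby: result shape, sample groups
by_customer = merged.groupby("customer_id")["amount"].sum()
print(f"Groupby result: {by_customer.shape}")
print(by_customer.head())
```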
## Step 2: Verify Outputs
Before moving on, confirm:
- [ ] Output files/variables exist
- [ ] Shapes are reasonable (no unexpected row loss)
- [ ] No silent null introduction
- [ ] Sample output matches expectations
If verification fails, re-dispatch analyst with specific fix instructions.
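One way to make these checks concrete is a quick spot-check script run via Bash (it stays verification, not analysis); the paths and column names below are hypothetical:

```python
from pathlib import Path
import pandas as pd

# Hypothetical output path claimed by the analyst
out = Path("output/cleaned_transactions.parquet")
assert out.exists(), "Analyst claimed completion but output file is missing"

df = pd.read_parquet(out)
assert len(df) > 0, "Output is empty"
# No silent null introduction in the key column
assert df["amount"].isnull().sum() == 0, "Silent nulls in key column"
print(df.shape)
print(df.head())  # eyeball against expectations
```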
## Step 3: Dispatch Methodology Reviewer (Complex Tasks)
For statistical analysis, modeling, or methodology-sensitive tasks:
Task(subagent_type="general-purpose", prompt="""
Review methodology for: [TASK NAME]

## ANALYST WORK
[SUMMARY FROM ANALYST OUTPUT]

## REQUIREMENTS
[FROM SPEC.md - especially any replication requirements]

The analyst may have:
- Made untested assumptions about the data
- Applied a method that doesn't fit the data or the question
- Drifted from the spec's replication requirements

DO:
- Check each methodology decision against the requirements
- Verify assumptions were tested, not assumed
- Rate each issue 0-100. Only report issues >= 80 confidence.
""")
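A sketch of what a reported issue might look like; the contents are illustrative:

```markdown
## Issue 1 (confidence: 90)
**Where:** Merge in step 3
**Problem:** Left join silently dropped 4% of transactions with no matching customer
**Fix:** Use an outer join and report unmatched rows, or justify the exclusion
```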
## Step 4: Log to LEARNINGS.md
After each task, append to `.claude/LEARNINGS.md`:
```markdown
## Task N: [Name] - COMPLETE
**Input:** [describe input state]
**Operation:** [what was done]
**Output:**
- Shape: [final shape]
- Key findings: [observations]
**Verification:**
- [how you confirmed it worked]
**Next:** [what comes next]
```
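A minimal sketch of appending an entry, using the figures from the walkthrough below:

```python
from pathlib import Path

entry = """
## Task 1: Load and clean transaction data - COMPLETE
**Input:** transactions.csv (50000, 12), 5% nulls in amount
**Operation:** Median imputation with is_imputed flag
**Output:**
- Shape: (50000, 13)
- Key findings: imputed 2,500 rows at median $45.50
**Verification:**
- Re-read output, confirmed shape and flag column
**Next:** Task 2
"""

# Append to the project's learnings log
with Path(".claude/LEARNINGS.md").open("a") as f:
    f.write(entry)
```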
<EXTREMELY-IMPORTANT>
When you say "Step complete", you are asserting:
- The analyst produced visible outputs that you verified
- Shapes and nulls were checked, not assumed
- The result was logged to LEARNINGS.md

If ANY of these didn't happen, you are not "summarizing" - you are LYING about the state of the analysis.
Dishonest claims corrupt research. Honest "investigating" maintains integrity.
</EXTREMELY-IMPORTANT>
These thoughts mean STOP—you're about to skip delegation:
| Thought | Reality |
|---|---|
| "I'll just check the shape quickly" | Shape checks need output-first protocol. Delegate. |
| "It's just a simple merge" | Merges fail silently. Delegate with verification. |
| "I already know this data" | Knowing ≠ verified. Delegate. |
| "The subagent will be slower" | Wrong results are slower than slow results. Delegate. |
| "Just this one plot" | One plot hides data issues. Delegate. |
| "User wants results fast" | User wants CORRECT results. Delegate. |
| "Skip methodology review, it's standard" | "Standard" assumptions often fail. Review. |
| "Output looked reasonable" | "Looked reasonable" ≠ verified. Check numbers. |
Never:
- Run analysis code in main chat
- Accept "done" without visible outputs
- Skip logging to LEARNINGS.md

If analyst produces no visible output: reject the completion claim and re-dispatch with explicit output-first instructions.
If analyst fails task: re-dispatch with the failure details and specific fix instructions (as in Step 2).
Me: Implementing Task 1: Load and clean transaction data
[Dispatch analyst with full task text]
Analyst:
- Loaded transactions.csv: (50000, 12)
- Found 5% nulls in amount column
- "Should I drop or impute nulls?"
Me: "Impute with median, flag imputed rows"
[Re-dispatch with answer]
Analyst:
- Imputed 2,500 rows with median ($45.50)
- Added is_imputed flag column
- Final shape: (50000, 13)
- Sample output: [shows head with flag]
[Verify: shapes match, flag exists, no unexpected changes]
[Log to LEARNINGS.md]
[Mark Task 1 complete, move to Task 2]
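For illustration, the imputation step from this walkthrough sketched under the output-first protocol; the file path is hypothetical:

```python
import pandas as pd

df = pd.read_csv("transactions.csv")  # hypothetical path from the walkthrough
print(f"Before: {df.shape}, nulls in amount: {df['amount'].isnull().sum()}")

# Impute with median, flag imputed rows (per main chat's answer)
median = df["amount"].median()          # median() ignores NaN by default
df["is_imputed"] = df["amount"].isnull()  # flag BEFORE filling
df["amount"] = df["amount"].fillna(median)

print(f"After: {df.shape}, imputed: {df['is_imputed'].sum()} rows "
      f"at median ${median:.2f}")
print(df.head())
```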
This skill is invoked by ds-implement during the output-first implementation phase.
After all tasks complete, ds-implement proceeds to ds-review.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.