Use when facing 3+ independent failures that can be investigated without shared state or dependencies - dispatches multiple Claude agents to investigate and fix independent problems concurrently
/plugin marketplace add withzombies/hyperpowers
/plugin install withzombies-hyper@withzombies-hyper

This skill inherits all available tools. When active, it can use any tool Claude has access to.
<skill_overview> When facing 3+ independent failures, dispatch one agent per problem domain to investigate concurrently; verify independence first, dispatch all in single message, wait for all agents, check conflicts, verify integration. </skill_overview>
<rigidity_level> MEDIUM FREEDOM - Follow the 6-step process (identify, create tasks, dispatch, monitor, review, verify) strictly. Independence verification mandatory. Parallel dispatch in single message required. Adapt agent prompt content to problem domain. </rigidity_level>
<quick_reference>
| Step | Action | Critical Rule |
|---|---|---|
| 1. Identify Domains | Test independence (fix A doesn't affect B) | 3+ independent domains required |
| 2. Create Agent Tasks | Write focused prompts (scope, goal, constraints, output) | One prompt per domain |
| 3. Dispatch Agents | Launch all agents in SINGLE message | Multiple Task() calls in parallel |
| 4. Monitor Progress | Track completions, don't integrate until ALL done | Wait for all agents |
| 5. Review Results | Read summaries, check conflicts | Manual conflict resolution |
| 6. Verify Integration | Run full test suite | Use verification-before-completion |
Why 3+? With only 2 failures, coordination overhead often exceeds sequential time.
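To make that concrete with illustrative numbers (not measured from a real session): if each investigation takes ~10 minutes and coordination (writing prompts, reviewing summaries, checking conflicts) costs ~10 minutes, two failures are 20 minutes sequential vs ~20 minutes parallel - a wash - while three failures are 30 minutes sequential vs ~20 minutes parallel.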
Critical: Dispatch all agents in single message with multiple Task() calls, or they run sequentially. </quick_reference>
<when_to_use> Use when:
- 3+ failures with independent root causes (fixing one doesn't affect the others)
- Failures live in different files/subsystems with no shared state
- Each domain can be investigated and fixed in isolation

Don't use when:
- Only 1-2 failures (coordination overhead often exceeds the savings)
- Failures share an error pattern or a single root cause
- Fixes would touch the same code and need coordination
</when_to_use>
<the_process>
Announce: "I'm using hyperpowers:dispatching-parallel-agents to investigate these independent failures concurrently."
Create TodoWrite tracker:
- Identify independent domains (3+ domains identified)
- Create agent tasks (one prompt per domain drafted)
- Dispatch agents in parallel (all agents launched in single message)
- Monitor agent progress (track completions)
- Review results (summaries read, conflicts checked)
- Verify integration (full test suite green)
Test for independence:
Ask: "If I fix failure A, does it affect failure B?"
Check: "Do failures touch same code/files?"
Verify: "Do failures share error patterns?"
Example independence check:
Failure 1: Authentication tests failing (auth.test.ts)
Failure 2: Database query tests failing (db.test.ts)
Failure 3: API endpoint tests failing (api.test.ts)
Check: Does fixing auth affect db queries? NO
Check: Does fixing db affect API? YES - API uses db
Result: 2 independent domains:
Domain 1: Authentication (auth.test.ts)
Domain 2: Database + API (db.test.ts + api.test.ts together)
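When the pairwise checks get numerous, the grouping step is just connected components over "fixing A affects B" edges. A minimal sketch in TypeScript - the failure names and `related` pairs mirror the example above; the union-find helper is illustrative, not part of the skill:

```typescript
type Failure = { name: string; file: string };

const failures: Failure[] = [
  { name: "auth", file: "auth.test.ts" },
  { name: "db", file: "db.test.ts" },
  { name: "api", file: "api.test.ts" },
];

// Pairs you answered YES to ("does fixing one affect the other?")
const related: [string, string][] = [["db", "api"]];

function groupDomains(failures: Failure[], related: [string, string][]): string[][] {
  // Union-find: every failure starts as its own domain.
  const parent = new Map<string, string>(
    failures.map(f => [f.name, f.name] as [string, string])
  );
  const find = (x: string): string => {
    while (parent.get(x) !== x) x = parent.get(x)!;
    return x;
  };
  // Merge the domains of every related pair.
  for (const [a, b] of related) parent.set(find(a), find(b));

  // Collect members by root.
  const domains = new Map<string, string[]>();
  for (const f of failures) {
    const root = find(f.name);
    domains.set(root, [...(domains.get(root) ?? []), f.name]);
  }
  return [...domains.values()];
}

console.log(groupDomains(failures, related));
// => [ [ 'auth' ], [ 'db', 'api' ] ] - 2 domains, matching the check above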
Group failures by what's broken:
Each agent prompt must have:
- Scope: exactly which file(s) or tests to work on
- Context: the error messages and failing test names, pasted in
- Goal: what "fixed" means for this domain
- Constraints: what the agent must NOT do
- Output: what to report back
Good agent prompt example:
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Never just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
What makes this good:
- Scoped to one file, with the exact failing test names
- Pastes the observed errors so the agent knows where to look
- Asks for root cause before fixes
- Explicit constraint ("Never just increase timeouts")
- Defines the expected output (summary of findings and fixes)
Common mistakes:
❌ Too broad: "Fix all the tests" - agent gets lost
✅ Specific: "Fix agent-tool-abort.test.ts" - focused scope

❌ No context: "Fix the race condition" - agent doesn't know where
✅ Context: Paste the error messages and test names

❌ No constraints: Agent might refactor everything
✅ Constraints: "Do NOT change production code" or "Fix tests only"

❌ Vague output: "Fix it" - you don't know what changed
✅ Specific: "Return summary of root cause and changes"
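The example prompt's key instruction - replace arbitrary timeouts with event-based waiting - looks like this in practice. A minimal sketch, assuming a hypothetical `executor` (an EventEmitter that emits `aborted` when an abort completes) and a jest-style test runner:

```typescript
import { once } from "node:events";

test("should abort tool with partial output capture", async () => {
  const execution = executor.run(slowTool); // hypothetical API

  // ❌ Arbitrary timeout - flaky, and exactly what the prompt forbids:
  // executor.abort();
  // await new Promise(r => setTimeout(r, 500)); // hope the abort finished

  // ✅ Event-based waiting - resolve when the state actually exists:
  executor.abort();
  await once(executor, "aborted");

  const result = await execution;
  expect(result.message).toContain("interrupted at");
});
```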
CRITICAL: You must dispatch all agents in a SINGLE message with multiple Task() calls.
```javascript
// ✅ CORRECT - Single message with multiple parallel tasks
Task("Fix agent-tool-abort.test.ts failures", prompt1)
Task("Fix batch-completion-behavior.test.ts failures", prompt2)
Task("Fix tool-approval-race-conditions.test.ts failures", prompt3)
// All three run concurrently
```

```javascript
// ❌ WRONG - Sequential messages
Task("Fix agent-tool-abort.test.ts failures", prompt1)
// Wait for response
Task("Fix batch-completion-behavior.test.ts failures", prompt2)
// This is sequential, not parallel!
```
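If it helps to see why the single message matters, here is the same distinction in plain async terms - an illustrative analogy only, where `investigate` is a stand-in for dispatching one agent and awaiting its summary:

```typescript
declare function investigate(domain: string): Promise<string>;

// ✅ Parallel - all three start before any is awaited
const [r1, r2, r3] = await Promise.all([
  investigate("agent-tool-abort"),
  investigate("batch-completion-behavior"),
  investigate("tool-approval-race-conditions"),
]);
// Wall-clock time ≈ the slowest investigation

// ❌ Sequential - each await blocks the next dispatch
const a = await investigate("agent-tool-abort");
const b = await investigate("batch-completion-behavior");
const c = await investigate("tool-approval-race-conditions");
// Wall-clock time = the sum of all three
```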
After dispatch:
- Mark "Dispatch agents in parallel" complete in the TodoWrite tracker

As agents work:
- Track completions as summaries come back
- Do NOT start integrating until ALL agents have returned

If an agent gets stuck (>5 minutes):
- Check whether its scope was too broad or its prompt lacked context
- Re-dispatch with a narrower scope; the other agents keep running
When all agents return:
- Read each summary carefully
- Check for conflicts: did agents edit the same files, or make contradictory assumptions? (a sketch of the file-level check follows this list)
- Choose an integration strategy: resolve conflicts manually before touching the code
- Document what happened
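A minimal sketch of that file-level conflict check; the `AgentSummary` shape is hypothetical:

```typescript
type AgentSummary = { agent: string; filesChanged: string[] };

// Flag every pair of agents whose changed-file sets overlap.
function findConflicts(summaries: AgentSummary[]): [string, string, string[]][] {
  const conflicts: [string, string, string[]][] = [];
  for (let i = 0; i < summaries.length; i++) {
    for (let j = i + 1; j < summaries.length; j++) {
      const shared = summaries[i].filesChanged.filter(f =>
        summaries[j].filesChanged.includes(f)
      );
      if (shared.length > 0) {
        conflicts.push([summaries[i].agent, summaries[j].agent, shared]);
      }
    }
  }
  return conflicts;
}

// In the conflicting-fixes example below, all three agents touched
// src/agents/tool-executor.ts, so every pair gets flagged for manual review.
```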
Before completing, run the full test suite:

```bash
# Run all tests
npm test  # or cargo test, pytest, etc.

# Verify the output:
# If all pass → mark "Verify integration" complete
# If failures → identify which agent's change caused the regression
```
</the_process>
<examples>
<example>
<scenario>Developer dispatches agents sequentially instead of in parallel</scenario>
<code>
# Developer sees 3 independent failures
# Creates 3 agent prompts, then dispatches each in its own message
Task("Fix agent-tool-abort.test.ts failures", prompt1)
# ...waits for it to finish...
Task("Fix batch-completion-behavior.test.ts failures", prompt2)
# ...waits for it to finish...
Task("Fix tool-approval-race-conditions.test.ts failures", prompt3)
</code>
<why_it_fails>
Each Task() call waits for the previous agent's response, so the three investigations run back-to-back - total time is the sum of all three instead of roughly the slowest one.
</why_it_fails>
Correct approach:
```javascript
// Single message with multiple Task() calls
Task("Fix agent-tool-abort.test.ts failures", `
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
[prompt 1 content]
`)
Task("Fix batch-completion-behavior.test.ts failures", `
Fix the 2 failing tests in src/agents/batch-completion-behavior.test.ts:
[prompt 2 content]
`)
Task("Fix tool-approval-race-conditions.test.ts failures", `
Fix the 1 failing test in src/agents/tool-approval-race-conditions.test.ts:
[prompt 3 content]
`)
// All three run concurrently - THIS IS THE KEY
```
What happens:
- All three agents investigate and fix their domains concurrently

What you gain:
- Wall-clock time close to the slowest single investigation, instead of the sum of all three
</example>
<example>
<scenario>Developer dispatches parallel agents on three failures that share a single root cause (a schema migration)</scenario>
<why_it_fails>
Check: Does fixing API affect database queries?
- API uses database
- If database schema changes, API breaks
- YES - these are related
Check: Does fixing database affect cache?
- Cache stores database results
- If database schema changes, cache keys break
- YES - these are related
Check: Do failures share error patterns?
- All mention "column not found: user_email"
- All started after schema migration
- YES - shared root cause
Result: NOT INDEPENDENT
These are one problem (schema change) manifesting in 3 places
</why_it_fails>
Correct approach:
Single agent investigates: "Schema migration broke 3 subsystems"
Agent prompt:
"We have 3 test failures all related to schema change:
1. API endpoints: column not found
2. Database queries: column not found
3. Cache invalidation: old keys
Investigate the schema migration that caused this.
Fix by updating all 3 subsystems consistently.
Return: What changed in schema, how you fixed each subsystem."
# One agent sees full picture
# Makes consistent fix across all 3 areas
# No conflicts, proper integration
What you gain:
- One consistent fix across all three subsystems, with no cross-agent conflicts

</example>
<example>
<scenario>Three parallel agents return contradictory fixes to the same file</scenario>
Agent 1: "Fixed timeout issue by increasing wait time to 5000ms"
Agent 2: "Fixed race condition by adding mutex lock"
Agent 3: "Fixed timing issue by reducing wait time to 1000ms"
<why_it_fails>
Agents 1 and 3 changed the same constant in opposite directions; integrating both blindly would silently keep whichever change lands last.
</why_it_fails>
Correct approach:
## Agent Summaries Review
Agent 1: Fixed timeout issue by increasing wait time to 5000ms
- File: src/agents/tool-executor.ts
- Change: DEFAULT_TIMEOUT = 5000
Agent 2: Fixed race condition by adding mutex lock
- File: src/agents/tool-executor.ts
- Change: Added mutex around tool execution
Agent 3: Fixed timing issue by reducing wait time to 1000ms
- File: src/agents/tool-executor.ts
- Change: DEFAULT_TIMEOUT = 1000
## Conflict Analysis
**CONFLICT DETECTED:**
- Agents 1 and 3 edited same file (tool-executor.ts)
- Agents 1 and 3 changed same constant (DEFAULT_TIMEOUT)
- Agent 1: increase to 5000ms
- Agent 3: decrease to 1000ms
- Contradictory assumptions about correct timing
**Why conflict occurred:**
- Domains weren't actually independent (same timeout constant)
- Both agents tested locally, didn't see interaction
- Different problem spaces led to different timing needs
## Resolution
**Option 1:** Different timeouts for different operations
```typescript
const TOOL_EXECUTION_TIMEOUT = 5000 // Agent 1's need
const TOOL_APPROVAL_TIMEOUT = 1000 // Agent 3's need
```

**Option 2:** Investigate why timing varies
Choose Option 2 after investigation:
Integration steps:
**Run full test suite:**
```bash
npm test
# All tests pass ✅
```

What you gain:
- Conflicts caught and resolved deliberately before integration, instead of one agent's change silently overwriting another's
</example> </examples>
<failure_modes>
Symptoms: No progress after 5+ minutes
Causes: Scope too broad, or the prompt lacked the context (error messages, test names) the agent needed
Recovery: Re-dispatch with a narrower scope and the errors pasted in; let the other agents keep running
Symptoms: Agents edited same code differently, or made contradictory assumptions
Causes: Domains weren't actually independent - they shared a file, a constant, or an assumption
Recovery: Resolve manually; investigate which assumption is correct before integrating (see the conflicting-fixes example above)
Symptoms: Fixed tests pass, but other tests now fail
Causes: An agent's change altered behavior that other code depends on
Recovery: Identify which agent's change caused the regression, revert it, and re-fix with the broader context in view
Symptoms: Fixing one domain revealed it affected another
Recovery: Stop parallel work; consolidate the affected domains into a single agent that sees the full picture
<critical_rules>
- Skipping the independence check
- Dispatching agents across multiple messages (sequential, not parallel)
- Integrating before ALL agents have returned
- Ignoring conflicts between agent changes
- Skipping the full test suite after integration

All of these mean: STOP. Follow the process.
</critical_rules>
<verification_checklist> Before completing parallel agent work:
- [ ] Independence verified for every domain (3+ domains)
- [ ] All agents dispatched in a single message
- [ ] Every agent's summary read
- [ ] Conflicts checked and resolved
- [ ] Full test suite green
Can't check all boxes? Return to the process and complete missing steps. </verification_checklist>
<integration> **This skill covers:** Parallel investigation of independent failures

Related skills:

This skill uses:
- hyperpowers:verification-before-completion (final verification in Step 6)

Workflow integration:
Multiple independent failures
↓
Verify independence (Step 1)
↓
Create agent tasks (Step 2)
↓
Dispatch in parallel (Step 3)
↓
Monitor progress (Step 4)
↓
Review + check conflicts (Step 5)
↓
Verify integration (Step 6)
↓
hyperpowers:verification-before-completion
Real example from session (2025-10-03):
When stuck:
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.