Use when facing 3+ logically independent failures (different features, different root causes) that can be investigated concurrently - dispatches multiple agents to investigate in parallel; requires either parallel-safe test infrastructure OR sequential fix implementation
Dispatch multiple agents to investigate 3+ independent failures concurrently. Use when you have different test files or subsystems failing with separate root causes that can be analyzed in parallel.
/plugin marketplace add samjhecht/wrangler
/plugin install wrangler@samjhecht-plugins
This skill inherits all available tools. When active, it can use any tool Claude has access to.
MANDATORY: When using this skill, announce it at the start with:
🚧 Using Skill: dispatching-parallel-agents | [brief purpose based on context]
Example:
🚧 Using Skill: dispatching-parallel-agents | [Provide context-specific example of what you're doing]
This creates an audit trail showing which skills were applied during the session.
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.
digraph when_to_use {
"Multiple failures?" [shape=diamond];
"Are they independent?" [shape=diamond];
"Single agent investigates all" [shape=box];
"One agent per problem domain" [shape=box];
"Can they work in parallel?" [shape=diamond];
"Sequential agents" [shape=box];
"Parallel dispatch" [shape=box];
"Multiple failures?" -> "Are they independent?" [label="yes"];
"Are they independent?" -> "Single agent investigates all" [label="no - related"];
"Are they independent?" -> "Can they work in parallel?" [label="yes"];
"Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
"Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
Use when:
Don't use when:
MUST be true before using parallel agents:
Each failure can be investigated without knowing about the others:
Independent failures:
NOT independent:
Subagents can work concurrently without interfering:
Parallel-safe investigation:
NOT parallel-safe:
If failures are independent BUT not parallel-safe: You can STILL use this pattern, with modifications:
If failures are NOT independent: Do NOT use parallel agents. Use systematic-debugging to find root cause.
Do you have 3+ failures?
├─ NO → Use systematic-debugging (single failure investigation)
└─ YES → Continue
Are failures logically independent?
(Can each be investigated without knowing about others?)
├─ NO → Use systematic-debugging (find common root cause)
└─ YES → Continue
Is investigation parallel-safe?
(Can subagents work concurrently without interfering?)
├─ YES → Dispatch parallel agents ✅
└─ NO → Two options:
A) Dispatch for investigation only, fix sequentially
B) Set up isolated test environments, then dispatch
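For option A, a minimal sketch using the same Task pseudocode as the dispatch example below (the prompts and file names are illustrative, not a required format):
// Option A sketch (hypothetical): parallel investigation, sequential fixes.
// Each agent is told NOT to edit files, so concurrent runs cannot interfere.
Task("Investigate failures in agent-tool-abort.test.ts. Do NOT edit any files. Return root cause and a proposed fix.")
Task("Investigate failures in batch-completion-behavior.test.ts. Do NOT edit any files. Return root cause and a proposed fix.")
Task("Investigate failures in tool-approval-race-conditions.test.ts. Do NOT edit any files. Return root cause and a proposed fix.")
// Then apply the proposed fixes one at a time, re-running the suite after each.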
Group failures by what's broken:
Each domain is independent - fixing tool approval doesn't affect abort tests.
Each agent gets:
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
When agents return:
Good agent prompts are:
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
❌ Too broad: "Fix all the tests" - agent gets lost
✅ Specific: "Fix agent-tool-abort.test.ts" - focused scope
❌ No context: "Fix the race condition" - agent doesn't know where
✅ Context: Paste the error messages and test names
❌ No constraints: Agent might refactor everything
✅ Constraints: "Do NOT change production code" or "Fix tests only"
❌ Vague output: "Fix it" - you don't know what changed
✅ Specific: "Return summary of root cause and changes"
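A reusable skeleton that bakes in those four points (the field placeholders are illustrative, not a required format):
Fix the N failing tests in <one specific test file>:
1. "<test name>" - <pasted error / assertion message>
2. "<test name>" - <pasted error / assertion message>

Context: <what recently changed, suspected subsystem>
Constraints: <e.g. "Do NOT change production code" or "Fix tests only">
Return: root cause you found and a summary of what you changed.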
Failures:
Check independence:
Check parallel safety:
Decision: Dispatch parallel agents
Failures:
Check independence:
Check parallel safety:
beforeEach(() => resetDatabase()) (race condition)
Decision: Use modified approach:
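One way to make that case parallel-safe is to give each test worker its own database instead of the shared one. A hedged TypeScript sketch, assuming a Jest/Vitest-style runner and that your existing createDatabase/resetDatabase helpers can take a database name:
// Hypothetical sketch: scope the database to the current test worker so that
// beforeEach(() => resetDatabase()) no longer races across parallel agents.
// The worker-id env vars and the db-name parameter are assumptions, not this repo's API.
const workerId = process.env.JEST_WORKER_ID ?? process.env.VITEST_WORKER_ID ?? '0';
const dbName = `test_db_worker_${workerId}`;

beforeAll(async () => {
  await createDatabase(dbName);   // assumed helper that provisions a fresh DB
});

beforeEach(async () => {
  await resetDatabase(dbName);    // resets only this worker's DB
});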
Failures:
Check independence:
Decision: Do NOT dispatch parallel agents. Use systematic-debugging to find root cause in auth system first.
Related failures: Fixing one might fix others - investigate together first
Need full context: Understanding requires seeing entire system
Exploratory debugging: You don't know what's broken yet
Shared state: Agents would interfere (editing same files, using same resources)
Scenario: 6 test failures across 3 files after major refactoring
Failures:
Decision: Independent domains - abort logic separate from batch completion separate from race conditions
Dispatch:
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
Results:
Integration: All fixes independent, no conflicts, full suite green
Time saved: 3 problems solved in parallel vs sequentially
After agents return:
From debugging session (2025-10-03):