Dispatches parallel agents for 2+ independent tasks without shared state or dependencies, like multiple failing test files or subsystems. Checks file overlaps before dispatch.
<!-- CANONICAL: shared/dispatch-convention.md -->
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.
```dot
digraph when_to_use {
  "Multiple failures?" [shape=diamond];
  "Are they independent?" [shape=diamond];
  "Single agent investigates all" [shape=box];
  "One agent per problem domain" [shape=box];
  "Can they work in parallel?" [shape=diamond];
  "Sequential agents" [shape=box];
  "Parallel dispatch" [shape=box];
  "Multiple failures?" -> "Are they independent?" [label="yes"];
  "Are they independent?" -> "Single agent investigates all" [label="no - related"];
  "Are they independent?" -> "Can they work in parallel?" [label="yes"];
  "Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
  "Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
```
Use when:
- You have 2+ independent tasks (separate failing test files, subsystems, or bugs)
- The tasks share no state or dependencies
- Each task's file-touch set is disjoint (or overlaps can be sequenced)
Don't use when:
- Failures are related or share state
- You need full-system context or are still exploring
- Tasks would modify overlapping files (sequence them instead)
Group failures by what's broken. For example:
- Abort handling → agent-tool-abort.test.ts failures
- Batch completion → batch-completion-behavior.test.ts failures
- Tool-approval races → tool-approval-race-conditions.test.ts failures
Each domain is independent - fixing tool approval doesn't affect abort tests.
Each agent gets:
- A focused scope (one test file or problem domain)
- Context: the error messages and failing test names
- Constraints on what it may change
- An expected output format for its report
Before spawning agents, determine which files each task will modify:
Task A → [src/agents/abort.ts, src/agents/abort.test.ts]
Task B → [src/batch/completion.ts, src/batch/completion.test.ts]
Task C → [src/agents/abort.ts, src/agents/approval.ts]
"Tasks A and C overlap on src/agents/abort.ts — sequencing C after A."
Do NOT proceed to dispatch until the file-touch map is clear and all overlaps are resolved.
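Using the example map above, the overlap check can be sketched in shell. The plain-text map format and the `/tmp` path are assumptions for illustration, not part of the skill:

```shell
# Hypothetical file-touch map: one task per line, followed by the files it modifies.
cat > /tmp/touch-map.txt <<'EOF'
A src/agents/abort.ts src/agents/abort.test.ts
B src/batch/completion.ts src/batch/completion.test.ts
C src/agents/abort.ts src/agents/approval.ts
EOF
# Any file listed under more than one task is an overlap that forces sequencing.
awk '{for (i = 2; i <= NF; i++) print $i}' /tmp/touch-map.txt | sort | uniq -d
# → src/agents/abort.ts   (Tasks A and C overlap)
```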
Present the file-touch map, branch plan, and sequencing decisions to the user:
Dispatch plan:
- Agent 1 (parallel/fix-abort): [src/agents/abort.ts, abort.test.ts]
- Agent 2 (parallel/fix-batch): [src/batch/completion.ts, completion.test.ts]
- Agent 3 (parallel/fix-approval): [src/agents/approval.ts] — sequenced after Agent 1 (overlap on abort.ts)
Proceed? (y/n)
STOP. Wait for explicit user confirmation before dispatching agents. Do not proceed on silence or assume approval.
Each parallel agent MUST work on an isolated branch:
parallel/<task-name> (e.g., parallel/fix-abort-tests, parallel/fix-batch-completion).

Use disk-mediated dispatch (see shared/dispatch-convention.md) to write dispatch files, then spawn agents:
# Write dispatch files to /tmp/crucible-dispatch-<session-id>/
# Each file contains the full prompt, constraints, and expected output format.
dispatch-001-abort-tests.md → Agent 1: Fix agent-tool-abort.test.ts
dispatch-002-batch-tests.md → Agent 2: Fix batch-completion-behavior.test.ts
dispatch-003-race-tests.md → Agent 3: Fix tool-approval-race-conditions.test.ts
# Spawn all three agents concurrently, each reading its dispatch file.
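A minimal sketch of the file-writing step, with a placeholder session id and a made-up dispatch-file body (the real layout is defined in shared/dispatch-convention.md):

```shell
SESSION_ID=demo                      # placeholder; a real session id comes from the runtime
DIR="/tmp/crucible-dispatch-$SESSION_ID"
mkdir -p "$DIR"
# One dispatch file per agent: full prompt, constraints, expected output format.
cat > "$DIR/dispatch-001-abort-tests.md" <<'EOF'
# Agent 1: Fix agent-tool-abort.test.ts
Branch: parallel/fix-abort-tests
Constraints: fix tests only; do NOT just increase timeouts.
Return: summary of root cause and changes made.
EOF
ls "$DIR"
```

Each agent then reads its own dispatch file at startup; spawning itself is runtime-specific and not shown here.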
When agents return:
Merge agent branches one at a time to catch regressions early:
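A self-contained sketch of the one-at-a-time merge loop, run here in a throwaway repo with a single example branch; the test-suite gate between merges is a placeholder comment:

```shell
set -e
REPO=$(mktemp -d)
cd "$REPO"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > file.txt && git add . && git commit -qm base
git checkout -qb parallel/fix-abort            # isolated agent branch
echo fix >> file.txt && git commit -qam fix-abort
git checkout -q -                              # back to the main branch
for BRANCH in parallel/fix-abort; do           # one entry per agent branch
  git merge --no-ff -m "merge $BRANCH" "$BRANCH"
  echo "run the test suite here; stop merging if it regresses"
done
```

Merging sequentially means a regression surfaces immediately after the branch that caused it, instead of after an all-at-once merge.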
git merge parallel/<task-name>

Good agent prompts are specific, contextual, constrained, and explicit about expected output. For example:
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
❌ Too broad: "Fix all the tests" - agent gets lost
✅ Specific: "Fix agent-tool-abort.test.ts" - focused scope
❌ No context: "Fix the race condition" - agent doesn't know where
✅ Context: paste the error messages and test names
❌ No constraints: agent might refactor everything
✅ Constraints: "Do NOT change production code" or "Fix tests only"
❌ Vague output: "Fix it" - you don't know what changed
✅ Specific: "Return summary of root cause and changes"
Don't dispatch parallel agents when:
- Related failures: fixing one might fix others - investigate together first
- Need full context: understanding requires seeing the entire system
- Exploratory debugging: you don't know what's broken yet
- Shared state: agents would interfere (editing the same files, using the same resources)
- Overlapping files: agents would modify the same files (sequence them instead)
Scenario: 6 test failures across 3 files after major refactoring
Failures:
- 3 in agent-tool-abort.test.ts (partial output capture, mixed completed/aborted tools, pendingToolCount)
- agent failures in batch-completion-behavior.test.ts
- race-condition failures in tool-approval-race-conditions.test.ts
Decision: Independent domains - abort logic separate from batch completion separate from race conditions
Dispatch (disk-mediated, one dispatch file per agent):
dispatch-001-abort.md → Agent 1: Fix agent-tool-abort.test.ts
dispatch-002-batch.md → Agent 2: Fix batch-completion-behavior.test.ts
dispatch-003-race.md → Agent 3: Fix tool-approval-race-conditions.test.ts
Results:
Integration: All fixes independent, no conflicts, full suite green
Time saved: 3 problems solved in parallel vs sequentially
After agents return:
Before completing this skill, confirm every mandatory checkpoint was executed:
- [ ] File-touch map built and all overlaps resolved
- [ ] Dispatch plan presented and explicitly confirmed by the user
- [ ] Each agent worked on an isolated parallel/<task-name> branch
- [ ] Agent branches merged one at a time
If any checkbox is unchecked, STOP. Go back and execute the missed gate.
From debugging session (2025-10-03):