From shipyard
Dispatches parallel agents or subagents for 2+ independent tasks without shared state, like multiple test failures or broken subsystems. Chooses team/subagent mode via env vars.
npx claudepluginhub lgbarn/shipyard --plugin shipyard

This skill uses the workspace's default tool permissions.
<!-- TOKEN BUDGET: 220 lines / ~660 tokens -->
Dispatches parallel agents for 2+ independent tasks without shared state or dependencies, like multiple failing test files or subsystems. Checks file overlaps before dispatch.
When Claude Code Agent Teams is enabled (SHIPYARD_TEAMS_ENABLED=true):
Shipyard multi-agent commands (build, plan, map, ship) use a standardized detect/ask/branch flow when dispatching agents:
- Detect: the SHIPYARD_TEAMS_ENABLED env var (set when CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1)
- Ask: AskUserQuestion: "Team mode (parallel teammates)" vs "Agent mode (subagents)"
- Branch: record dispatch_mode (team or agent) and use it at every dispatch point

Team mode lifecycle (per wave):

1. TeamCreate with name shipyard-{command}-phase-{N}-wave-{W}
2. TaskCreate for each unit of work + TaskUpdate to pre-assign owners
3. Task(team_name, name, subagent_type) to spawn teammates
4. TaskList until all tasks reach terminal state
5. SendMessage(shutdown_request) + TeamDelete for cleanup

Key rules:
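The detect step above can be sketched as a small helper. Note that detectDispatchMode is a hypothetical function (not part of Shipyard's actual code); it only illustrates the env-var gate that decides whether the user is even offered team mode.

```typescript
// Hypothetical sketch of the detect/branch step. Without the flag,
// teams are unavailable and the skill falls back to subagents; with
// it, the skill asks the user (via AskUserQuestion) which mode to use.
type DispatchMode = "team" | "agent";

function detectDispatchMode(
  env: Record<string, string | undefined>
): DispatchMode | "ask" {
  if (env.SHIPYARD_TEAMS_ENABLED !== "true") {
    return "agent"; // teams disabled: subagent mode, no question asked
  }
  return "ask"; // teams enabled: let the user pick team vs agent mode
}
```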
When to choose each mode:
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.
digraph when_to_use {
"Multiple failures?" [shape=diamond];
"Are they independent?" [shape=diamond];
"Single agent investigates all" [shape=box];
"One agent per problem domain" [shape=box];
"Can they work in parallel?" [shape=diamond];
"Sequential agents" [shape=box];
"Parallel dispatch" [shape=box];
"Multiple failures?" -> "Are they independent?" [label="yes"];
"Are they independent?" -> "Single agent investigates all" [label="no - related"];
"Are they independent?" -> "Can they work in parallel?" [label="yes"];
"Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
"Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
Group failures by what's broken:
Each domain is independent -- fixing tool approval doesn't affect abort tests.
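The grouping step can be sketched as a bucket-by-file pass. This is a hypothetical helper, assuming one test file maps to one problem domain (as in the examples below); it is not part of the skill itself.

```typescript
// Hypothetical grouping helper: bucket failing tests by file, so each
// bucket becomes one agent's independent problem domain.
interface Failure {
  file: string;
  test: string;
}

function groupByDomain(failures: Failure[]): Map<string, string[]> {
  const domains = new Map<string, string[]>();
  for (const f of failures) {
    const tests = domains.get(f.file) ?? [];
    tests.push(f.test);
    domains.set(f.file, tests);
  }
  return domains; // one entry per file = one agent per domain
}
```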
Each agent gets:
// Claude Code — all three dispatched in the same message
Task(subagent_type: "general-purpose", prompt: "Fix agent-tool-abort.test.ts failures...")
Task(subagent_type: "general-purpose", prompt: "Fix batch-completion-behavior.test.ts failures...")
Task(subagent_type: "general-purpose", prompt: "Fix tool-approval-race-conditions.test.ts failures...")
// All three run concurrently
When agents return:
Good agent prompts are:
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
Too broad: "Fix all the tests" -- agent gets lost. Specific: "Fix agent-tool-abort.test.ts" -- focused scope.
No context: "Fix the race condition" -- agent doesn't know where. Context: paste the error messages and test names.
No constraints: agent might refactor everything. Constraints: "Do NOT change production code" or "Fix tests only".
Vague output: "Fix it" -- you don't know what changed. Specific: "Return summary of root cause and changes".
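The four ingredients above (specific scope, pasted context, explicit constraints, required output) can be assembled mechanically. The helper below is a hypothetical sketch, not Shipyard's API; the field names are illustrative.

```typescript
// Hypothetical prompt builder: one struct per agent, combining scope,
// context, constraints, and required output into a single prompt.
interface AgentPromptSpec {
  scope: string;         // one file or subsystem, never "all the tests"
  context: string[];     // pasted error messages and failing test names
  constraints: string[]; // e.g. "Do NOT change production code"
  output: string;        // what the agent must report back
}

function buildAgentPrompt(spec: AgentPromptSpec): string {
  return [
    `Fix the failures in ${spec.scope}:`,
    ...spec.context.map((line) => `- ${line}`),
    ...spec.constraints,
    `Return: ${spec.output}`,
  ].join("\n");
}
```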
Scenario: 6 test failures across 3 files after major refactoring.
Analysis: Abort logic, batch completion, and race condition handling are separate subsystems with no shared code paths. Fixing one cannot fix or break another.
Agent 1 -> Fix agent-tool-abort.test.ts (3 timing failures)
Agent 2 -> Fix batch-completion-behavior.test.ts (2 event structure failures)
Agent 3 -> Fix tool-approval-race-conditions.test.ts (1 async failure)
Result: All three agents return independently. Fixes don't conflict. Full suite green after integration.
Scenario: Auth module refactored. Login tests fail, and downstream API tests also fail because they depend on the auth module.
Agent 1 -> Fix login.test.ts failures
Agent 2 -> Fix api-protected-routes.test.ts failures
Why this fails: Agent 2 cannot succeed until Agent 1 fixes the auth module. The API route failures are a symptom of the auth bug, not independent problems. Agent 2 will either duplicate Agent 1's fix (conflict) or fail entirely.
Correct approach: Fix auth first (single agent), then assess whether API route failures remain.
Scenario: Two agents both need to modify src/config.ts to fix their respective issues.
Agent 1 -> Fix database connection pooling (needs to change config.ts)
Agent 2 -> Fix cache TTL settings (needs to change config.ts)
Why this fails: Both agents will edit the same file. Their changes will conflict during integration. Run these sequentially instead.
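The file-overlap check the skill runs before dispatch can be sketched as a simple set intersection. filesOverlap is a hypothetical illustration, assuming each agent's planned file edits are known up front.

```typescript
// Hypothetical pre-dispatch check: if two agents' planned file sets
// intersect, they must run sequentially instead of in parallel.
function filesOverlap(agentA: string[], agentB: string[]): string[] {
  const setA = new Set(agentA);
  return agentB.filter((file) => setA.has(file));
}
```

For the scenario above, filesOverlap(["src/config.ts", "src/db-pool.ts"], ["src/config.ts", "src/cache.ts"]) reports src/config.ts as a conflict, so the two fixes are serialized.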
After agents return: