Orchestrates parallel campaigns in coordinated waves: spawns 2-3 agents per wave in isolated worktrees, collects discoveries, shares context between waves. For 3+ independent work streams.
npx claudepluginhub sethgammon/citadel --plugin citadel
This skill uses the workspace's default tool permissions.
Use for 3+ independent work streams that can run simultaneously in isolated worktrees. Do NOT use for single-file scope, linear work, or when a marshal or skill suffices.
Use when: Running 2+ independent work streams in parallel — tasks with non-overlapping file scopes that can execute simultaneously.
Don't use when: Work must execute sequentially or accumulate findings across phases (use /archon), a single orchestrated session is enough (use /marshal), or the task is simple enough for a bare skill.
| Command | Behavior |
|---|---|
| /fleet [direction] | Decompose direction into parallel streams, execute in waves |
| /fleet [path-to-spec] | Read a spec file, decompose into streams |
| /fleet continue | Resume from the last fleet session file |
| /fleet (no args) | Health diagnostic → work queue → execute |
| /fleet --quick [task1]; [task2] | Lightweight parallel mode for solo devs: 2+ tasks, single wave, auto-merge, no session file |
| /fleet --speculative N [direction] | Try N different approaches to the same task in parallel (see Speculative Mode below) |
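For example, /fleet --quick fix the flaky auth test; update footer links runs both tasks as a single two-agent wave and auto-merges if there are no conflicts.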
At session start, check:
- .planning/campaigns/ for active campaigns
- .planning/coordination/claims/ for external claims
- If .planning/momentum.json exists, run
node .citadel/scripts/momentum-read.cjs
and read the output. Use the active scopes and recurring decisions to inform work queue prioritization. Skip silently if the file is absent or output is empty.
Wave context restoration: Use the Claude Code Compaction API to restore fleet session context at the start of each session. Do NOT read .claude/compact-state.json; that pattern is deprecated in favour of server-side compaction (available on Opus 4.6+). Fleet session files (.planning/fleet/session-{slug}.md) remain the source of truth for inter-wave discovery relay; compaction handles agent memory, not campaign state. If the Compaction API is unavailable, fall back to reading the fleet session file's Continuation State directly.
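A minimal sketch of the momentum read with its skip-silently behaviour, assuming momentum-read.cjs prints a plain-text summary to stdout (the exact output shape is not specified here):

```ts
// Sketch: read momentum at session start, skipping silently when absent.
import { execFileSync } from "node:child_process";
import { existsSync } from "node:fs";

function readMomentum(projectRoot: string): string | null {
  const momentumPath = `${projectRoot}/.planning/momentum.json`;
  if (!existsSync(momentumPath)) return null; // file absent: skip silently
  try {
    const out = execFileSync("node", [".citadel/scripts/momentum-read.cjs"], {
      cwd: projectRoot,
      encoding: "utf8",
    }).trim();
    return out.length > 0 ? out : null; // empty output: skip silently
  } catch {
    return null; // a read failure must never block the session
  }
}
```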
Log campaign start:
node .citadel/scripts/telemetry-log.cjs --event campaign-start --agent fleet --session {session-slug}
Start the momentum watcher:
node .citadel/scripts/momentum-watch-start.cjs
The watcher runs in the background and re-synthesizes momentum.json within 500ms of any new discovery write. Safe to call if already running — only one watcher runs per project.
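The watcher's debounce contract can be sketched as follows; the watched path (.planning/fleet/briefs/) and the re-synthesis command are assumptions based on the scripts referenced elsewhere in this document:

```ts
// Sketch: coalesce bursts of discovery writes into one re-synthesis,
// firing within ~500ms of the last write.
import { watch } from "node:fs";
import { execFile } from "node:child_process";

let timer: NodeJS.Timeout | null = null;

watch(".planning/fleet/briefs", { recursive: true }, () => {
  if (timer) clearTimeout(timer); // coalesce rapid successive writes
  timer = setTimeout(() => {
    execFile("node", [".citadel/scripts/momentum-synthesize.cjs"], () => {});
  }, 500);
});
```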
Produce a ranked list of campaigns with:
| Column | Purpose |
|---|---|
| Campaign name | What this stream does |
| Scope | Which directories it touches |
| Dependencies | What must complete before this can start |
| Wave | Which wave to assign it to |
| Agent type | What kind of agent to spawn |
Rules for work queue:
For each wave:
Prepare context for each agent:
- Rules summary: include .claude/agent-context/rules-summary.md
- Map slice (if .planning/map/index.json exists): run
node scripts/map-index.js --query "<agent's scope keywords>" --max-files 15
and inject the results as a === MAP SLICE === block. If the index does not exist, skip silently.
- Prior session context: read momentum.json fresh at each wave boundary via node .citadel/scripts/momentum-read.cjs and inject as a === PRIOR SESSION CONTEXT === block. Re-reading (rather than reusing the Step 1 snapshot) picks up discoveries written by parallel Fleet sessions in other terminals. If the output is empty, skip silently.
Log wave start:
node .citadel/scripts/telemetry-log.cjs --event wave-start --agent fleet --session {session-slug} --meta '{"wave":N,"agents":["name1","name2"]}'
Spawn agents with isolation: "worktree":
Agent(
prompt: "{full context + direction}",
isolation: "worktree",
mode: "bypassPermissions"
)
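A sketch of how a wave might be driven, assuming the Agent() pseudo-call above returns a promise; runWave and AgentResult are illustrative names, not harness API:

```ts
// Sketch only: spawn every agent in a wave concurrently, collect together.
type AgentResult = {
  name: string;
  status: "success" | "partial" | "failed";
  handoff: string;
};

declare function Agent(opts: {
  prompt: string;
  isolation: "worktree";
  mode: "bypassPermissions";
}): Promise<AgentResult>;

async function runWave(
  briefs: { name: string; context: string }[],
): Promise<AgentResult[]> {
  // Each agent gets its own worktree; all run in parallel.
  const settled = await Promise.allSettled(
    briefs.map((b) =>
      Agent({ prompt: b.context, isolation: "worktree", mode: "bypassPermissions" }),
    ),
  );
  // A crashed agent never blocks the wave; record it as failed instead.
  return settled.map((r, i): AgentResult =>
    r.status === "fulfilled"
      ? r.value
      : { name: briefs[i].name, status: "failed", handoff: "" },
  );
}
```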
Collect results from all agents in the wave
Log per-agent results:
node .citadel/scripts/telemetry-log.cjs --event agent-complete --agent {agent-name} --session {session-slug} --status {success|partial|failed}
Compress discoveries for each agent:
Run node .citadel/scripts/compress-discovery.cjs on each agent's output and write the compressed briefs to .planning/fleet/briefs/.
6b. Write persistent discovery records for each agent (cross-session memory):
node .citadel/scripts/discovery-write.cjs \
--session {session-slug} \
--agent {agent-name} \
--wave {wave-number} \
--status {success|partial|failed} \
--scope "{comma-separated-scope-dirs}" \
--handoff "{json-array-of-handoff-items}" \
--decisions "{json-array-of-decisions}" \
--files "{json-array-of-files-touched}" \
--failures "{json-array-of-failures}"
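A sketch of assembling that invocation programmatically; the flag names follow the command above, while the record type is an assumption:

```ts
// Sketch: build the discovery record and shell out to discovery-write.cjs.
import { execFileSync } from "node:child_process";

function writeDiscovery(rec: {
  session: string; agent: string; wave: number;
  status: "success" | "partial" | "failed";
  scope: string[]; handoff: string[]; decisions: string[];
  files: string[]; failures: string[];
}): void {
  execFileSync("node", [
    ".citadel/scripts/discovery-write.cjs",
    "--session", rec.session,
    "--agent", rec.agent,
    "--wave", String(rec.wave),
    "--status", rec.status,
    "--scope", rec.scope.join(","),       // comma-separated scope dirs
    "--handoff", JSON.stringify(rec.handoff),
    "--decisions", JSON.stringify(rec.decisions),
    "--files", JSON.stringify(rec.files),
    "--failures", JSON.stringify(rec.failures),
  ]);
}
```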
Log wave complete:
node .citadel/scripts/telemetry-log.cjs --event wave-complete --agent fleet --session {session-slug} --meta '{"wave":N,"status":"complete"}'
Merge branches from worktrees:
Update session file with wave results and accumulated discoveries
After all waves:
- Run the final typecheck: node scripts/run-with-timeout.js 300 <typecheck-cmd>. On failure, record wave_test_fail: true in the session file.
- Set the session file Status to completed.
- Log campaign complete:
node .citadel/scripts/telemetry-log.cjs --event campaign-complete --agent fleet --session {session-slug}
- Re-synthesize momentum:
node .citadel/scripts/momentum-synthesize.cjs
5.5. Propagate knowledge — for each campaign that completed this session, run:
npm run propagate -- --campaign {slug}
Run once per completed campaign slug (not per wave). If multiple campaigns
completed, run for each slug. If npm run propagate is unavailable, note each
slug in the fleet session file under ## Pending Propagation.
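A sketch of the propagation loop with the documented fallback; propagate() and the session-file append are illustrative:

```ts
// Sketch: one propagate run per completed campaign slug, never per wave.
import { execSync } from "node:child_process";
import { appendFileSync } from "node:fs";

function propagate(completedSlugs: string[], sessionFile: string): void {
  const pending: string[] = [];
  for (const slug of completedSlugs) {
    try {
      execSync(`npm run propagate -- --campaign ${slug}`, { stdio: "inherit" });
    } catch {
      pending.push(slug); // propagate unavailable or failed
    }
  }
  if (pending.length > 0) {
    // Queue unpropagated slugs in the fleet session file, as above.
    appendFileSync(
      sessionFile,
      `\n## Pending Propagation\n${pending.map((s) => `- ${s}`).join("\n")}\n`,
    );
  }
}
```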
6. Output final HANDOFF
Create at .planning/fleet/session-{slug}.md:
# Fleet Session: {name}
Status: active | needs-continue | completed
Started: {ISO timestamp}
Direction: {original direction}
## Work Queue
| # | Campaign | Scope | Deps | Status | Wave | Agent |
## Wave N Results
### Agent: {name}
**Status:** complete | partial | failed
**Built:** ... **Decisions:** ... **Files:** ...
## Shared Context (Discovery Relay)
- {cross-agent finding → what Wave N+1 should know}
## Continuation State
Next wave: N
Blocked items: ...
Auto-continue: true
Before assigning agents to a wave:
- src/api/ and src/api/auth/ OVERLAP (parent/child)
- src/api/ and src/ui/ do NOT overlap (siblings)
- (read-only) scopes never conflict
Also check .planning/coordination/claims/ for external claims.
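These rules reduce to a prefix check on normalized directory paths. A minimal sketch:

```ts
// Sketch: two directory scopes conflict when one is a prefix of the other
// (parent/child); siblings do not, and read-only scopes never conflict.
type Scope = { dir: string; readOnly?: boolean };

function overlaps(a: Scope, b: Scope): boolean {
  if (a.readOnly || b.readOnly) return false; // read-only never conflicts
  const norm = (d: string) => (d.endsWith("/") ? d : d + "/");
  const [x, y] = [norm(a.dir), norm(b.dir)];
  return x.startsWith(y) || y.startsWith(x); // parent/child relation
}

// overlaps({ dir: "src/api/" }, { dir: "src/api/auth/" }) → true
// overlaps({ dir: "src/api/" }, { dir: "src/ui/" })       → false
```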
Effort hints for wave agents (use the effort parameter, not budget_tokens):
| Agent Type | Effort | ~Tokens |
|---|---|---|
| Fleet scouts (research, mapping, audit) | medium | ~100K each |
| Execution agents (build, refactor, implement) | high | ~250K each |
| Verify agents (typecheck, visual-verify, QA) | low | ~60K each |
The effort parameter is GA as of April 2026 and produces ~20–40% token reduction
compared to manually tuned budget_tokens values. Always prefer effort for new wave definitions.
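A sketch of mapping agent types to effort tiers; whether the harness accepts an effort field on the spawn options exactly like this is an assumption:

```ts
// Sketch: effort tiers per agent type, mirroring the table above.
const EFFORT: Record<"scout" | "execution" | "verify", "low" | "medium" | "high"> = {
  scout: "medium",   // research, mapping, audit (~100K tokens)
  execution: "high", // build, refactor, implement (~250K tokens)
  verify: "low",     // typecheck, visual-verify, QA (~60K tokens)
};

const spawnOptions = {
  prompt: "{full context + direction}",
  isolation: "worktree",
  mode: "bypassPermissions",
  effort: EFFORT.execution, // prefer effort over budget_tokens
};
```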
Sub-agents can hang indefinitely on tool calls. Fleet must enforce execution time limits at the orchestrator level.
| Agent Type | Default Timeout | Override Key |
|---|---|---|
| Skill-level agents | 10 minutes | agentTimeouts.skill |
| Research scouts | 15 minutes | agentTimeouts.research |
| Build agents | 30 minutes | agentTimeouts.build |
Timeouts are configurable in harness.json:
{
"agentTimeouts": {
"skill": 600000,
"research": 900000,
"build": 1800000
}
}
On timeout: log agent-timeout event, extract partial HANDOFF if present, retry once with simplified prompt (Wave 1 critical scope only), skip otherwise. Never block the wave. Record Status: timed out in session file.
Read timeout values from harness.json → agentTimeouts.{skill|research|build} (defaults: 600000/900000/1800000ms).
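A sketch of the orchestrator-side enforcement, assuming the harness exposes each agent run as a promise; timeoutFor reads harness.json with the documented defaults:

```ts
// Sketch: orchestrator-level timeout with the documented defaults.
import { readFileSync } from "node:fs";

const DEFAULTS = { skill: 600_000, research: 900_000, build: 1_800_000 };

function timeoutFor(kind: keyof typeof DEFAULTS): number {
  try {
    const harness = JSON.parse(readFileSync("harness.json", "utf8"));
    return harness.agentTimeouts?.[kind] ?? DEFAULTS[kind];
  } catch {
    return DEFAULTS[kind]; // missing or invalid harness.json: use defaults
  }
}

async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer!: NodeJS.Timeout;
  const bomb = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("agent-timeout")), ms);
  });
  try {
    // On rejection with agent-timeout, the orchestrator retries once with
    // a simplified prompt, then skips; it never blocks the wave (see above).
    return await Promise.race([work, bomb]);
  } finally {
    clearTimeout(timer);
  }
}
```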
Every agent spawned by Fleet must have a unique instance ID.
Format: fleet-{session-slug}-{wave}-{agent-index}
Example: fleet-auth-refactor-w1-a3 (wave 1, agent 3)
The instance ID is written to .fleet-instance-id.
Before spawning: compare all agent scopes pairwise (directory scopes overlap any file inside them). On overlap: merge tasks, narrow scopes, or sequence. NEVER proceed with overlapping scopes.
After each wave: read .planning/coordination/claims/, verify each instance is still alive (worktree exists + HANDOFF present). Release orphaned claims, return uncompleted scope to next wave's queue.
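A sketch of the ID format and the orphan-claim sweep; the one-JSON-file-per-instance claim layout and the HANDOFF.md file name are assumptions:

```ts
// Sketch: derive the instance ID and release claims whose agent is gone.
import { existsSync, readdirSync, rmSync } from "node:fs";

function instanceId(sessionSlug: string, wave: number, agentIndex: number): string {
  return `fleet-${sessionSlug}-w${wave}-a${agentIndex}`;
}

function releaseOrphanedClaims(claimsDir: string, worktreeRoot: string): string[] {
  const orphaned: string[] = [];
  for (const file of readdirSync(claimsDir)) {
    const id = file.replace(/\.json$/, "");
    const worktree = `${worktreeRoot}/${id}`;
    // Alive = worktree exists and a HANDOFF was produced (file name assumed).
    const alive = existsSync(worktree) && existsSync(`${worktree}/HANDOFF.md`);
    if (!alive) {
      rmSync(`${claimsDir}/${file}`); // release the orphaned claim
      orphaned.push(id);              // its scope returns to the next wave's queue
    }
  }
  return orphaned;
}
```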
- If .planning/fleet/ does not exist: create the directory before writing the session file.
- If .planning/ does not exist: create .planning/fleet/ before starting. If .planning/coordination/ is absent, skip scope claim registration.
/fleet --speculative N [direction]
Try N different approaches to the same task simultaneously. Each approach gets its own worktree and branch. When all finish, the user picks the winner; losers are archived (not deleted).
Before spawning, enumerate N distinct approaches. Each approach must:
Each agent gets:
- its own speculative/{session-slug}/{strategy-label} branch and worktree
- worktree_status: active in its campaign frontmatter
Spawn with isolation: "worktree". Scope overlap rules do NOT apply between speculative agents; they will all touch the same files intentionally.
After all agents complete, for each:
Run node scripts/run-with-timeout.js 300 <typecheck-cmd>.
Present a comparison table to the user:
| Strategy | Branch | Typecheck | Key Decision | Notable Tradeoffs |
|---|---|---|---|---|
If ALL N approaches fail typecheck: present the comparison table with all entries marked FAIL typecheck. Ask the user to pick the least-broken approach or abort. Do not proceed to Step 4 without a user decision.
When the user picks a winner:
- Winner: set worktree_status: merged and proceed with the normal merge.
- Losers: set worktree_status: archived. Do NOT delete branches.
# Optional: tag losers for clarity
git tag archive/{loser-branch} {loser-branch}
Add ## Speculative Comparison to session file: direction, N strategies, comparison table (strategy/branch/status/typecheck/notes), winner, merge timestamp.
/fleet --quick [task1]; [task2]; [task3]
| Property | Standard Fleet | Quick Mode |
|---|---|---|
| Min streams | 3 | 2 |
| Min complexity | 4 | 3 |
| Waves | Multi-wave with discovery relay | Single wave only |
| Session file | Written to .planning/fleet/ | Skipped — results reported inline |
| Discovery briefs | Compressed to .planning/fleet/briefs/ | Skipped |
| Merge | Per-wave confirmation | Auto-merge if no conflicts |
| Scope claim | Written to coordination/ | Skipped |
- Tasks come from the --quick argument (semicolon-separated)
- Agents spawn with isolation: "worktree"
/do routes to --quick mode (not standard fleet) when:
Entry from /do confirmation prompt: user chose yes (1) or always (2). Preferences stored under consent.fleetSpawn in harness.json via readConsent/writeConsent.
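A sketch of what readConsent/writeConsent might look like over harness.json; the nesting under a top-level consent key follows the path named above, the rest is assumed:

```ts
// Sketch: persist the user's /do confirmation choice in harness.json.
import { readFileSync, writeFileSync } from "node:fs";

type Consent = "yes" | "always" | "no";

function readConsent(key: string): Consent | undefined {
  try {
    const harness = JSON.parse(readFileSync("harness.json", "utf8"));
    return harness.consent?.[key];
  } catch {
    return undefined; // no file or invalid JSON: no stored consent
  }
}

function writeConsent(key: string, value: Consent): void {
  let harness: Record<string, unknown> = {};
  try {
    harness = JSON.parse(readFileSync("harness.json", "utf8"));
  } catch { /* start fresh if missing or invalid */ }
  harness.consent = { ...(harness.consent as object), [key]: value };
  writeFileSync("harness.json", JSON.stringify(harness, null, 2) + "\n");
}

// Example: writeConsent("fleetSpawn", "always") after the user picks (2).
```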
Red actions require explicit confirmation regardless of trust level.
Read trust level from harness.json:
Update the session file, then output:
---HANDOFF---
- Fleet session: {name} — {waves completed} waves, {agents} agents total
- Built: {summary of all wave results}
- Discoveries: {key cross-agent findings}
- Merge conflicts: {count and resolution}
- Next: {remaining work if any}
- Reversibility: amber -- multi-wave merges, revert each wave's merge commit
---