From agentops
Dispatches isolated subagents for parallel task execution with file ownership, dependencies, waves, and evidence. For multi-agent runtimes handling concurrent file edits.
npx claudepluginhub boshu2/agentops --plugin agentops
This skill uses the workspace's default tool permissions.
Spawn isolated agents to execute tasks in parallel. Fresh context per agent (Ralph Wiggum pattern).
references/backend-background-tasks.md
references/backend-claude-teams.md
references/backend-codex-subagents.md
references/backend-inline.md
references/claude-code-latest-features.md
references/cold-start-contexts.md
references/conflict-recovery.md
references/local-mode.md
references/pre-spawn-friction-gates.md
references/ralph-loop-contract.md
references/validation-contract.md
references/worker-pitfalls.md
scripts/ol-ratchet.sh
scripts/ol-wave-loader.sh
scripts/validate.sh

Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
Executes implementation plans in current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance then code quality.
Dispatches parallel agents to independently tackle 2+ tasks like separate test failures or subsystems without shared state or dependencies.
Spawn isolated agents to execute tasks in parallel. Fresh context per agent (Ralph Wiggum pattern).
Integration modes:
- /swarm — direct use on TaskList tasks
- /crank — creates tasks from beads, invokes /swarm for each wave

Requires a multi-agent runtime. Swarm needs a runtime that can spawn parallel subagents. If unavailable, work must be done sequentially in the current session.
Mayor (this session)
|
+-> Plan: TaskCreate with dependencies
|
+-> Identify wave: tasks with no blockers
|
+-> Select spawn backend (gc if available; runtime-native: Claude teams in Claude runtime, Codex sub-agents in Codex runtime; fallback tasks if unavailable)
|
+-> Assign: TaskUpdate(taskId, owner="worker-<id>", status="in_progress")
|
+-> Spawn workers via selected backend
| Workers receive pre-assigned task, execute atomically
|
+-> Wait for completion (wait() | SendMessage | TaskOutput)
|
+-> Validate: Review changes when complete
|
+-> Cleanup backend resources (close_agent | TeamDelete | none)
|
+-> Repeat: New team + new plan if more work needed
Given /swarm:
Use runtime capability detection, not hardcoded tool names. Swarm requires the capabilities defined in the shared capability contract; see skills/shared/SKILL.md.
After detecting your backend, read the matching reference for concrete spawn/wait/message/cleanup examples:
- skills/shared/references/claude-code-latest-features.md
- references/claude-code-latest-features.md
- references/backend-claude-teams.md
- references/backend-codex-subagents.md
- references/backend-background-tasks.md
- references/backend-inline.md

See also references/local-mode.md for swarm-specific execution details (worktrees, validation, git commit policy, wave repeat).
Before spawning workers via Claude teams or Codex sub-agents, check if gc is available:
if command -v gc &>/dev/null && gc status --json 2>/dev/null | jq -e '.controller.state == "running"' >/dev/null 2>&1; then
SWARM_BACKEND="gc"
else
SWARM_BACKEND="native" # fallback to Claude teams / Codex sub-agents
fi
When SWARM_BACKEND="gc":
- Dispatch tasks with gc session nudge <worker-alias> "<task prompt>" instead of spawn_agent()
- Check progress with gc session peek <worker-alias> --lines 50
- bd for issue tracking — no change needed
- Results still go to .agents/swarm/results/ — no change needed
- Pool scaling is driven by scale_check = "bd ready --count"

Use TaskList to see current tasks. If none, create them:
TaskCreate(subject="Implement feature X", description="Full details...",
metadata={"issue_type": "feature", "files": ["src/feature_x.py", "tests/test_feature_x.py"], "validation": {...}})
TaskUpdate(taskId="2", addBlockedBy=["1"]) # Add dependencies after creation
Every TaskCreate must include metadata.issue_type plus a metadata.files array. issue_type drives active constraint applicability and validation policy; files enable mechanical conflict detection before spawning a wave.
This is how the prevention ratchet applies shift-left mechanically: active compiled findings use issue type plus changed files to decide whether a task should be blocked, warned, or left alone.
- Valid issue_type values: feature, bug, task, docs, chore, ci.
- Carry metadata.issue_type on TaskUpdate / TaskCompleted payloads so task-validation can apply active constraints without guessing.
- Worker prompts (see the references/local-mode.md worker prompt template) embed the metadata.files array as the FILE MANIFEST section. Workers grep for existing function signatures before writing new code to avoid duplication.

Example metadata:
{
"issue_type": "feature",
"files": ["cli/cmd/ao/goals.go", "cli/cmd/ao/goals_test.go"],
"validation": {
"tests": "go test ./cli/cmd/ao/...",
"files_exist": ["cli/cmd/ao/goals.go"]
}
}
if command -v ao &>/dev/null; then
ao context assemble --task='<swarm objective or wave description>'
fi
This produces a 5-section briefing (GOALS, HISTORY, INTEL, TASK, PROTOCOL) at .agents/rpi/briefing-current.md with secrets redacted. Include the briefing path in each worker's TaskCreate description so workers start with full project context.
Output schema size guard: When 5+ workers in a wave share the same output schema (e.g., verdict.json), cache it to .agents/council/output-schema.json and reference by path instead of inlining ~500 tokens per worker. For ≤4 workers, inline is fine. See council skill's caching guidance reference for details.
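A minimal sketch of the size guard, assuming the shared schema was already produced earlier in the session as verdict-schema.json (hypothetical path):

# Cache the shared output schema once per wave instead of inlining it per worker
mkdir -p .agents/council
cp verdict-schema.json .agents/council/output-schema.json   # hypothetical source file
# Each worker prompt then references the path:
#   "Write verdict.json conforming to the schema at .agents/council/output-schema.json"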
Worker prompt signpost:
Knowledge artifacts are in .agents/. See .agents/AGENTS.md for navigation. Use `ao lookup --query "topic"` for learnings.

Some sandboxes do not give workers .agents/ file access. In that case the lead should search .agents/learnings/ for relevant material and inline the top 3 results directly in the worker prompt body.
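When workers lack .agents/ access, a lead-side sketch for inlining the top learnings (the topic string and scratch file are placeholders):

# Find the three most relevant learnings and append them to the worker prompt body
grep -ril "auth middleware" .agents/learnings/ | head -3 | while read -r f; do
  printf '\n--- learning: %s ---\n' "$f"
  cat "$f"
done >> worker-prompt-context.md   # hypothetical scratch file merged into the worker prompt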
Skip this step if all tasks already have populated metadata.files arrays. If any task is missing its file manifest, auto-generate it before Step 2:
Spawn haiku Explore agents (one per task missing manifests) to identify files:
Agent(subagent_type="Explore", model="haiku",
prompt="Given this task: '<task subject + description>', identify all files
that will need to be created or modified. Return a JSON array of file paths.")
Inject manifests back into tasks:
TaskUpdate(taskId=task.id, metadata={"files": [explored_files]})
Once all tasks have manifests, proceed to Step 2 where the Pre-Spawn Conflict Check enforces file ownership.
When tasks come from bd and scripts/bd-cluster.sh exists, run scripts/bd-cluster.sh --json 2>/dev/null || true before Step 2. Summarize any clusters as consolidation hints only; never run --apply here, and keep Step 2's file-manifest and dependency gates authoritative.
Pre-Spawn Friction Gates: Before spawning workers, execute all 6 friction gates (base sync, file manifest, dependency graph, misalignment breaker, wave cap, base-SHA ancestry). See references/pre-spawn-friction-gates.md.
Find tasks that are:
- pending
- not blocked by incomplete tasks

These can run in parallel.
Before spawning a wave, scan all worker file manifests for overlapping files:
wave_tasks = [tasks with status=pending and no blockers]
all_files = {}
for task in wave_tasks:
for f in task.metadata.files:
if f in all_files:
CONFLICT: f is claimed by both all_files[f] and task.id
all_files[f] = task.id
On conflict detection:
- Serialize the conflicting task into a later sub-wave, or
- Dispatch the wave with worktree isolation (--worktrees) so each worker operates on a separate branch.

Do not spawn workers with overlapping file manifests into the same shared-worktree wave. This is the primary cause of build breaks and merge conflicts in parallel execution.
Display ownership table before spawning:
File Ownership Map (Wave N):
┌─────────────────────────────┬──────────┬──────────┐
│ File │ Owner │ Conflict │
├─────────────────────────────┼──────────┼──────────┤
│ src/auth/middleware.go │ task-1 │ │
│ src/auth/middleware_test.go │ task-1 │ │
│ src/api/routes.go │ task-2 │ │
│ src/config/settings.go │ task-1,3 │ YES │
└─────────────────────────────┴──────────┴──────────┘
Conflicts: 1 (resolved: serialized task-3 into sub-wave 2)
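An executable sketch of the scan above, assuming the wave's tasks have been exported to a hypothetical tasks.json (an array of objects with id and metadata.files):

# Emit "<file>\t<task id>" pairs, then flag any file claimed by more than one task
jq -r '.[] | .id as $id | .metadata.files[] | "\(.)\t\($id)"' tasks.json |
  awk -F'\t' '
    ($1 in owner) { print "CONFLICT: " $1 " claimed by " owner[$1] " and " $2; conflicts++; next }
    { owner[$1] = $2 }
    END { print conflicts+0 " conflict(s) detected"; exit (conflicts > 0) }
  '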
When workers create new test files, validate naming against loaded standards:
- Allowed: <source>_test.go or <source>_extra_test.go
- Rejected: cov*_test.go or arbitrary prefixes
- If the package has testutil_test.go or >5 existing test files, force serial execution within that package.

When executing wave 2+ (not the first wave), verify workers branch from the latest commit — not a stale SHA from before the prior wave's changes were committed.
# PSEUDO-CODE
# Capture current HEAD after prior wave's commit
CURRENT_SHA=$(git rev-parse HEAD)
# If using worktrees, verify they're up to date
if [[ -n "$WORKTREE_PATH" ]]; then
(cd "$WORKTREE_PATH" && git pull --rebase origin "$(git branch --show-current)" 2>/dev/null || true)
fi
Cross-reference prior wave diff against current wave file manifests:
# PSEUDO-CODE
# Files changed in prior wave
PRIOR_WAVE_FILES=$(git diff --name-only "${WAVE_START_SHA}..HEAD")
# Check for overlap with current wave manifests
for task in $WAVE_TASKS; do
TASK_FILES=$(echo "$task" | jq -r '.metadata.files[]')
OVERLAP=$(comm -12 <(echo "$PRIOR_WAVE_FILES" | sort) <(echo "$TASK_FILES" | sort))
if [[ -n "$OVERLAP" ]]; then
echo "WARNING: Task $task touches files modified in prior wave: $OVERLAP"
echo "Workers MUST read the latest version (post-prior-wave commit)"
fi
done
Why: Without base-SHA refresh, wave 2+ workers may read stale file versions from before wave 1 changes were committed. This causes workers to overwrite prior wave edits or implement against outdated code. See crank Step 5.7 (wave checkpoint) for the SHA tracking pattern.
For detailed local mode execution (team creation, worker spawning, race condition prevention, git commit policy, validation contract, cleanup, and repeat logic), read skills/swarm/references/local-mode.md.
Platform pitfalls: Include relevant pitfalls from references/worker-pitfalls.md in worker prompts for the target language/platform. For example, inject the Bash section for shell script tasks, the Go section for Go tasks, etc. This prevents common worker failures from known platform gotchas.
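A sketch of injecting one platform section into a worker prompt, assuming worker-pitfalls.md uses "## <Platform>" markdown headings (the Go heading and scratch file are assumptions):

# Extract the Go pitfalls section and append it to the worker prompt body
awk '/^## Go$/{keep=1; next} /^## /{keep=0} keep' references/worker-pitfalls.md \
  >> worker-prompt-context.md   # hypothetical scratch file merged into the worker prompt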
SWARM_BACKEND="gc")When gc is the selected backend, dispatch and monitor workers through gc sessions instead of Claude teams or Codex sub-agents:
# Dispatch a task to a gc-managed worker
gc session nudge <worker-alias> "Implement task #<id>: <subject>. Files: <manifest>. Write results to .agents/swarm/results/<id>.json"
# Monitor worker progress
gc session peek <worker-alias> --lines 50
# Check all worker statuses
gc status --json | jq '.sessions[] | {alias, state, last_activity}'
gc dispatch follows the same orchestration contract as native backends:
- Workers write results to .agents/swarm/results/<id>.json
- Scope escapes still go to .agents/swarm/scope-escapes.jsonl

gc-specific behaviors:
- Use gc session peek for progress checks instead of SendMessage / send_input
- If a worker stalls, gc session nudge can re-prompt it

Example:
Mayor: "Let's build a user auth system"
1. /plan -> Creates tasks:
#1 [pending] Create User model
#2 [pending] Add password hashing (blockedBy: #1)
#3 [pending] Create login endpoint (blockedBy: #1)
#4 [pending] Add JWT tokens (blockedBy: #3)
#5 [pending] Write tests (blockedBy: #2, #3, #4)
2. /swarm -> Spawns agent for #1 (only unblocked task)
3. Agent #1 completes -> #1 now completed
-> #2 and #3 become unblocked
4. /swarm -> Spawns agents for #2 and #3 in parallel
5. Continue until #5 completes
6. /vibe -> Validate everything
When a worker discovers work outside their assigned scope, they MUST NOT modify files outside their file manifest. Instead, append to .agents/swarm/scope-escapes.jsonl:
{"worker": "<worker-id>", "finding": "<description>", "suggested_files": ["path/to/file"], "timestamp": "<ISO8601>"}
The lead reviews scope escapes after each wave and creates follow-up tasks as needed.
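A worker-side sketch for recording a scope escape (assumes jq is available; the finding and suggested file are placeholders):

# Append one JSONL entry; never edit files outside the manifest
jq -cn --arg worker "$WORKER_ID" \
       --arg finding "rate limiter also needs a config flag" \
       --arg file "src/config/settings.go" \
       '{worker: $worker, finding: $finding, suggested_files: [$file], timestamp: (now | todate)}' \
  >> .agents/swarm/scope-escapes.jsonl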
- Workers write results to .agents/swarm/results/<id>.json; the orchestrator reads files (NOT Task returns or SendMessage content)
- Use send_input (Codex) or SendMessage (Claude) for coordination only

This ties into the full workflow:
/research -> Understand the problem
/plan -> Decompose into beads issues
/crank -> Autonomous epic loop
+-- /swarm -> Execute each wave in parallel
/vibe -> Validate results
/post-mortem -> Extract learnings
Direct use (no beads):
TaskCreate -> Define tasks
/swarm -> Execute in parallel
The knowledge flywheel captures learnings from each agent.
# List all tasks
TaskList()
# Mark task complete after notification
TaskUpdate(taskId="1", status="completed")
# Add dependency between tasks
TaskUpdate(taskId="2", addBlockedBy=["1"])
| Parameter | Description | Default |
|---|---|---|
| --max-workers=N | Max concurrent workers | 5 |
| --from-wave <json-file> | Load wave from OL hero hunt output (see OL Wave Integration) | - |
| --per-task-commits | Commit per task instead of per wave (for attribution/audit) | Off (per-wave) |
| Scenario | Use |
|---|---|
| Multiple independent tasks | /swarm (parallel) |
| Sequential dependencies | /swarm with blockedBy |
| Mix of both | /swarm spawns waves, each wave parallel |
Follows the Ralph Wiggum Pattern: fresh context per execution unit.
Ralph alignment source: ../shared/references/ralph-loop-contract.md.
When /crank invokes /swarm: Crank bridges beads to TaskList, swarm executes with fresh-context agents, crank syncs results back.
| You Want | Use | Why |
|---|---|---|
| Fresh-context parallel execution | /swarm | Each spawned agent is a clean slate |
| Autonomous epic loop | /crank | Loops waves via swarm until epic closes |
| Just swarm, no beads | /swarm directly | TaskList only, skip beads |
| RPI progress gates | /ratchet | Tracks progress; does not execute work |
When /swarm --from-wave <json-file> is invoked, the swarm reads wave data from an OL hero hunt output file and executes it with completion backflow to OL.
# --from-wave requires ol CLI on PATH
which ol >/dev/null 2>&1 || {
echo "Error: ol CLI required for --from-wave. Install ol or use swarm without wave integration."
exit 1
}
If ol is not on PATH, exit immediately with the error above. Do not fall back to normal swarm mode.
The --from-wave JSON file contains ol hero hunt output:
{
"wave": [
{"id": "ol-527.1", "title": "Add auth middleware", "spec_path": "quests/ol-527/specs/ol-527.1.md", "priority": 1},
{"id": "ol-527.2", "title": "Fix rate limiting", "spec_path": "quests/ol-527/specs/ol-527.2.md", "priority": 2}
],
"blocked": [
{"id": "ol-527.3", "title": "Integration tests", "blocked_by": ["ol-527.1", "ol-527.2"]}
],
"completed": [
{"id": "ol-527.0", "title": "Project setup"}
]
}
Parse the JSON file and extract the wave array.
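For example, with jq (the input path is illustrative); each emitted line then drives one TaskCreate:

# One line per wave entry, ordered by priority (lower number = higher priority)
jq -c '.wave | sort_by(.priority) | .[]' /tmp/wave-ol-527.json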
Create TaskList tasks from wave entries (one TaskCreate per entry):
for each entry in wave:
TaskCreate(
subject="[{entry.id}] {entry.title}",
description="OL bead {entry.id}\nSpec: {entry.spec_path}\nPriority: {entry.priority}\n\nRead the spec file at {entry.spec_path} for full requirements.",
metadata={
"issue_type": entry.issue_type,
"ol_bead_id": entry.id,
"ol_spec_path": entry.spec_path,
"ol_priority": entry.priority
}
)
Execute swarm normally on those tasks (Step 2 onward from main execution flow). Tasks are ordered by priority (lower number = higher priority).
Completion backflow: After each worker completes a bead task AND passes validation, the team lead runs the OL ratchet command to report completion back to OL:
# Extract quest ID from bead ID (e.g., ol-527.1 -> ol-527)
QUEST_ID=$(echo "$BEAD_ID" | sed 's/\.[^.]*$//')
ol hero ratchet "$BEAD_ID" --quest "$QUEST_ID"
Ratchet result handling:
| Exit Code | Meaning | Action |
|---|---|---|
| 0 | Bead complete in OL | Mark task completed, log success |
| 1 | Ratchet validation failed | Mark task as failed, log the validation error from stderr |
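A minimal sketch of the handling above, assuming BEAD_ID is set for the completed task:

QUEST_ID="${BEAD_ID%.*}"   # e.g., ol-527.1 -> ol-527
if ol hero ratchet "$BEAD_ID" --quest "$QUEST_ID"; then
  echo "bead $BEAD_ID complete in OL"    # mark the TaskList task completed, log success
else
  echo "ratchet validation failed for $BEAD_ID; see stderr output" >&2   # mark the task failed, log the error
fi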
/swarm --from-wave /tmp/wave-ol-527.json
# Reads wave JSON -> creates 2 tasks from wave entries
# Spawns workers for ol-527.1 and ol-527.2
# On completion of ol-527.1:
# ol hero ratchet ol-527.1 --quest ol-527 -> exit 0 -> bead complete
# On completion of ol-527.2:
# ol hero ratchet ol-527.2 --quest ol-527 -> exit 0 -> bead complete
# Wave done: 2/2 beads ratcheted in OL
Related references:
- skills/swarm/references/local-mode.md
- skills/swarm/references/validation-contract.md

User says: /swarm
What happens:
- The lead identifies unblocked tasks, runs the pre-spawn friction gates, and spawns isolated workers for the wave
- Workers execute their pre-assigned tasks and write results to .agents/swarm/results/
- The lead validates changes, commits as sole committer, and repeats with the next wave

Result: Multi-wave execution with fresh-context workers per wave, zero race conditions.
User says: Create three tasks for API refactor, then /swarm
What happens:
- Tasks are created directly via TaskCreate
- /swarm runs without beads integration

Result: Parallel execution of independent tasks using TaskList only.
Default behavior: Auto-detect and prefer runtime-native isolation first.
In Claude runtime, first verify teammate profiles with claude agents and use agent definitions with isolation: worktree for write-heavy parallel waves. If native isolation is unavailable, use manual git worktree fallback below.
| Backend | Isolation Mechanism | How It Works |
|---|---|---|
| Claude teams (Task with team_name) | isolation: worktree in agent definition | Runtime creates an isolated git worktree per teammate; changes are invisible to other agents and the main tree until merged |
| Background tasks (Task with run_in_background) | isolation: worktree in agent definition | Same worktree isolation as teams; each background agent gets its own worktree |
| gc pool (gc session nudge) | gc-managed sessions | Each gc worker runs in its own session; isolation is managed by gc pool lifecycle and bd issue ownership |
| Inline (no spawn) | None | Operates directly on the main working tree; no isolation possible |
Sparse checkout for large repos: Set worktree.sparsePaths in project settings to limit worktree checkouts to relevant directories. This reduces clone time and disk usage for monorepos where workers only need a subset of the tree.
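A hedged sketch of the setting, assuming a JSON project-settings file (the exact file and nesting depend on your runtime; the paths are placeholders):

{
  "worktree": {
    "sparsePaths": ["cli/", "skills/swarm/"]
  }
}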
Use the effort command to right-size model reasoning per worker role:
| Worker Role | Recommended Effort | Rationale |
|---|---|---|
| Research/exploration | low | Fast, broad scanning — depth not needed |
| Implementation (code) | high | Deep reasoning for correct implementation |
| Docs/chore | low | Fast execution for simple tasks |
Key diagnostic: When isolation: worktree is specified but worker changes appear in the main working tree (no separate worktree path in the Task result), isolation did NOT engage. This is a silent failure — the runtime accepted the parameter but did not create a worktree.
After spawning workers with isolation: worktree, the lead MUST verify isolation engaged:
- Check the Task result for a worktreePath field. If present, isolation is active.
- If worktreePath is absent but isolation: worktree was specified: fall back to manual git worktree creation (see below) or switch to serial inline execution.

When to use worktrees: Activate worktree isolation when tasks span multiple epics or when worker file manifests overlap (for example, cross-checked against git diff --name-only).

Evidence: 4 parallel agents in shared worktree produced 1 build break and 1 algorithm duplication (see .agents/evolve/dispatch-comparison.md). Worktree isolation prevents collisions by construction.
# Heuristic: multi-epic = worktrees needed
# Single epic with independent files = shared worktree OK
# Check if tasks span multiple epics
# e.g., task subjects contain different epic IDs (ol-527, ol-531, ...)
# If yes: use worktrees
# If no: proceed with default shared worktree
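An executable version of the heuristic, assuming the wave's tasks are exported to a hypothetical tasks.json and epic IDs look like ol-<number>:

# Count distinct epic IDs mentioned in task subjects
EPIC_COUNT=$(jq -r '.[].subject' tasks.json | grep -oE 'ol-[0-9]+' | sort -u | wc -l)
if [ "$EPIC_COUNT" -gt 1 ]; then
  echo "multi-epic wave ($EPIC_COUNT epics): use worktree isolation"
else
  echo "single-epic wave: shared worktree OK if file manifests do not overlap"
fi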
Before spawning workers, create an isolated worktree per epic:
# For each epic ID in the wave:
git worktree add /tmp/swarm-<epic-id> -b swarm/<epic-id>
Example for 3 epics:
git worktree add /tmp/swarm-ol-527 -b swarm/ol-527
git worktree add /tmp/swarm-ol-531 -b swarm/ol-531
git worktree add /tmp/swarm-ol-535 -b swarm/ol-535
Each worktree starts at HEAD of current branch. The worker branch (swarm/<epic-id>) is ephemeral — deleted after merge.
Pass the worktree path as the working directory in each worker prompt:
WORKING DIRECTORY: /tmp/swarm-<epic-id>
All file reads, writes, and edits MUST use paths rooted at /tmp/swarm-<epic-id>.
Do NOT operate on /path/to/main/repo directly.
Workers run in isolation — changes in one worktree cannot conflict with another.
Result file path: Workers still write results to the main repo's .agents/swarm/results/:
# Worker writes to main repo result path (not the worktree)
RESULT_DIR=/path/to/main/repo/.agents/swarm/results
The orchestrator path for .agents/swarm/results/ is always the main repo, not the worktree.
After a worker's task passes validation, merge the worktree branch back to main:
# From the main repo (not worktree)
git merge --no-ff swarm/<epic-id> -m "chore: merge swarm/<epic-id> (epic <epic-id>)"
Merge order: respect task dependencies. If epic B blocked by epic A, merge A before B.
Base-SHA ancestry check before merge-back: Worktree branches rooted off non-main commits pull unintended branch ancestry during git merge --no-ff, causing extra files to land. Before merging:
- Prefer git cherry-pick <sha> over git merge --no-ff. Cherry-pick applies only the commit's diff and avoids pulling unintended ancestry.
- Or run git rebase main swarm/<epic-id> before git merge --no-ff to re-root the branch onto current main HEAD and eliminate stale ancestry.
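A minimal sketch of the ancestry check, assuming the integration branch is main and EPIC_ID is set:

BRANCH="swarm/${EPIC_ID}"
BASE_SHA=$(git merge-base main "$BRANCH")
# If the branch is not rooted at current main HEAD, re-root it before merging
if [ "$BASE_SHA" != "$(git rev-parse main)" ]; then
  echo "$BRANCH is rooted at $BASE_SHA, not main HEAD; rebasing to re-root"
  git rebase main "$BRANCH"
fi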
Merge Arbiter Protocol: replace manual conflict resolution with a structured sequential rebase:
# For each branch in merge order:
git rebase main swarm/<epic-id>
Merge Status:
┌────────────────────┬──────────┬────────────┬───────────┐
│ Branch │ Status │ Conflicts │ Fix-ups │
├────────────────────┼──────────┼────────────┼───────────┤
│ swarm/task-1 │ MERGED │ 0 │ 0 │
│ swarm/task-2 │ MERGED │ 1 (auto) │ 0 │
│ swarm/task-3 │ MERGED │ 1 (fixup) │ 1 │
└────────────────────┴──────────┴────────────┴───────────┘
Workers must not merge — lead-only commit policy still applies.
# After successful merge:
git worktree remove /tmp/swarm-<epic-id>
git branch -d swarm/<epic-id>
Run cleanup even on partial failures (same reaper pattern as team cleanup).
1. Detect: does this wave need worktrees? (multi-epic or file overlap)
2. For each epic:
a. git worktree add /tmp/swarm-<epic-id> -b swarm/<epic-id>
3. Spawn workers with worktree path injected into prompt
4. Wait for completion (same as shared mode)
5. Validate each worker's changes (run tests inside worktree)
6. For each passing epic:
a. git merge --no-ff swarm/<epic-id>
b. git worktree remove /tmp/swarm-<epic-id>
c. git branch -d swarm/<epic-id>
7. Commit all merged changes (team lead, sole committer)
| Parameter | Description | Default |
|---|---|---|
| --worktrees | Force worktree isolation for this wave | Off (auto-detect) |
| --no-worktrees | Force shared worktree even for multi-epic | Off |
Cause: isolation: worktree was specified but the Task result has no worktreePath — worker changes land in the main tree.
Solution: Verify agent definitions include isolation: worktree. If the runtime does not support declarative isolation, fall back to manual git worktree add (see Worktree Isolation section). For overlapping-file waves, abort and switch to serial execution.
Cause: Multiple workers editing the same file in parallel.
Solution: Use worktree isolation (--worktrees) for multi-epic dispatch. For single-epic waves, use wave decomposition to group workers by file scope. Homogeneous waves (all Go, all docs) prevent conflicts.
Cause: Stale team from prior session not cleaned up.
Solution: Run rm -rf ~/.claude/teams/<team-name> then retry.
Cause: codex CLI not installed or API key not configured.
Solution: Run which codex to verify installation. Check ~/.codex/config.toml for API credentials.
Cause: Worker task too large or blocked on external dependency.
Solution: Break tasks into smaller units. Add timeout metadata to worker tasks.
Cause: gc controller is running but worker sessions are idle or not accepting nudges.
Solution: Run gc status --json to check session states. Use gc session peek <alias> --lines 50 to inspect last activity. If a session is stuck, restart it via gc pool commands. Verify scale_check = "bd ready --count" returns pending work.
Cause: Backend selection failed or spawning API unavailable.
Solution: Check which spawn backend was selected (look for "Using: " message). Verify Codex CLI (which codex) or native team API availability.