Launches multi-agent Agentic SDLC workflows for parallel task decomposition, dispatch to tiered agents (lite/med/heavy), and validation. Use for complex tasks with parallel subtasks.
```bash
npx claudepluginhub rmzi/portable-dev-system --plugin pds
```

This skill uses the workspace's default tool permissions.
Related components:

- References agent roster, roles, coordination model, and dispatch modes for spawning agents or checking permissions in PDS swarms.
- Orchestrates N parallel tasks: generates plans with cross-task file conflict analysis, deploys implementation swarms in waves using Agent Teams. Requires CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1.
- Sets up multi-agent teams for complex projects with file-based planning, per-agent directories, and teammate spawning. Triggers on team/swarm/start-project requests.
Six-phase workflow for decomposing, dispatching, and validating parallel work across agent teams. Each phase shows the concrete tool calls needed to execute it.
If you are not the orchestrator, spawn one to execute this workflow. Agents terminate when they return output, so the parent must handle the human approval gate — not the orchestrator.
Two-phase delegation pattern:
```
# Phase 1: Orchestrator runs grill, produces plan, returns it
plan = Agent(subagent_type="pds:orchestrator", name="orchestrator-plan",
    prompt="Run /pds:grill for: <task description>. Tier: <tier>.
            Produce a plan with acceptance criteria. Return the plan — do NOT
            proceed to decomposition. Write swarm state files (.claude/swarm/phase,
            .claude/swarm/tier) before returning.")

# Parent relays plan to human, gets approval/override, then:
# Phase 2+: New orchestrator executes the approved plan through all remaining phases
Agent(subagent_type="pds:orchestrator", name="orchestrator",
    prompt="Plan is approved by human. Execute /pds:swarm Phases 2-6 for: <context>.
            <paste approved plan + acceptance criteria here>.
            Proceed through all phases without stopping for approval.")
```
If no tier is specified, the Phase 1 orchestrator MUST run /pds:grill first to determine the tier. Grill is mandatory before any swarm — it validates requirements AND recommends a tier.
The orchestrator has TeamCreate, TaskCreate, Task(worker), SendMessage, and other coordination tools. The main conversation does not — delegation is required.
Everything below is written for the orchestrator.
Three tiers control model selection and specialist inclusion. The tier is set during Phase 1 (via grill or user override) and stored in .claude/swarm/tier. Med matches the current agent defaults — existing swarms are implicitly med.
| Agent | Lite | Med | Heavy |
|---|---|---|---|
| orchestrator | sonnet | opus | opus |
| researcher | (skip) | sonnet | opus |
| worker | haiku | sonnet | sonnet |
| validator | haiku | sonnet | sonnet |
| reviewer | (skip) | sonnet | opus |
| documenter | (skip) | sonnet | sonnet |
| scout | haiku | haiku | sonnet |
| auditor | (skip) | (skip) | sonnet |
| shepherd | (skip) | opus | opus |
At lite tier, workers invoke advisor_consult directly for substance questions — the cheapest effective configuration.

User can force a tier: /pds:swarm lite, /pds:swarm med, /pds:swarm heavy. Without an argument, the tier is auto-selected via /pds:grill step 10. The human confirms or overrides the tier during Phase 1 approval.
The orchestrator tracks the current phase in .claude/swarm/phase. Transitions are forward-only:
plan → decompose → dispatch → validate → consolidate → knowledge
Initialize at swarm start:
```bash
mkdir -p .claude/swarm && echo "plan" > .claude/swarm/phase && echo "<tier>" > .claude/swarm/tier
```
Advance by writing the next phase name (echo "X" > .claude/swarm/phase) as the first step of each phase. Write a checkpoint at each transition — see orchestrator.md for the checkpoint protocol. The PR gate and teardown gate enforce phase state (defense-in-depth alongside artifact checks). If the phase file is absent, gates fall through to artifact-only checks.
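Taken together, a minimal sketch of a forward-only advance helper — the function name, guard, and error message are illustrative; only the phase names and the state file path come from this skill:

```bash
# Hedged sketch: forward-only phase advance. The real checkpoint protocol
# lives in orchestrator.md; the checkpoint write is only stubbed here.
PHASES=(plan decompose dispatch validate consolidate knowledge)

advance_phase() {
  local next=$1 cur
  cur=$(cat .claude/swarm/phase)
  local i ci=-1 ni=-1
  for i in "${!PHASES[@]}"; do
    [ "${PHASES[i]}" = "$cur" ]  && ci=$i
    [ "${PHASES[i]}" = "$next" ] && ni=$i
  done
  if [ "$ni" -ne $((ci + 1)) ]; then
    echo "refusing non-forward transition: $cur -> $next" >&2
    return 1
  fi
  echo "$next" > .claude/swarm/phase
  # checkpoint write goes here — see orchestrator.md for the protocol
}

advance_phase decompose   # plan -> decompose
```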
Parallel tracks. For Med/Heavy tiers, launch both tracks concurrently at Phase 1 init:
Track A — grill: Run /pds:grill to validate requirements and get a tier recommendation. If a tier override was provided, grill still runs for requirement validation. Load .claude/instincts.md (if it exists) and include high-confidence patterns in the grill context.

Track B — research: Spawn the researcher:

```
Task(researcher, model="<tier-model>",
    prompt="Analyze the codebase for X. Query .claude/instincts.md for relevant prior patterns.
            Send findings via SendMessage.")
```
Tier models — med: omit model (sonnet default). Heavy: model="opus".
If the researcher calls ExitPlanMode, respond with plan_approval_response to approve or reject its plan. Both tracks must complete before Phase 2 begins. Lite tier: skip the researcher; the orchestrator self-researches after grill.
Write the tier to state: echo "<tier>" > .claude/swarm/tier
Synthesize grill output + researcher findings into acceptance criteria (see Phase 2 for format).
Find or create the GitHub ticket. Run /pds:ticket to search for an existing issue matching this task; create one if none, resolve ambiguity via AskUserQuestion if multiple match. Write the issue number to .claude/swarm/ticket. Ticket body contains plan + acceptance criteria checklist. If gh is unavailable or there's no GitHub remote, warn and proceed without a ticket — note it in scout-report.md at Phase 6. See /pds:ticket for the full protocol.
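A hedged sketch of the find-or-create flow — the search query, issue title, and body file are placeholders, and the real /pds:ticket protocol may differ:

```bash
# Look for an existing open issue matching the task (query is illustrative).
num=$(gh issue list --search "auth module in:title" --state open \
      --json number --jq '.[0].number // empty')

# Create one if nothing matched; gh prints the issue URL, so take its tail.
if [ -z "$num" ]; then
  url=$(gh issue create --title "Implement auth module" --body-file plan.md)
  num=${url##*/}
fi

echo "$num" > .claude/swarm/ticket
```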
Spawn shepherd (med/heavy only, after grill). After the grill completes and you know the tier is med or heavy, spawn the shepherd agent to walk the ticket alongside workers through Phases 2-6:
```
Task(shepherd, team_name=team_name, name="shepherd",
    model="opus",
    prompt="Walk this ticket. Reference corpus: docs/whitepaper.md, docs/philosophy.md,
            docs/ethos.md, CLAUDE.md, skills/swarm/SKILL.md, .claude/shepherd-journal.md.
            Tier: <tier>. Plan context: see .claude/swarm/context.md once Phase 2 writes it.
            Respond to substance questions via SendMessage. Flag observed drift proactively.
            Write to journal continuously and on teardown.")
```
Do NOT spawn shepherd at lite tier — keeps lite cheap. Workers at lite tier invoke advisor_consult directly for substance questions.
The shepherd reads its reference corpus on spawn, notes its arrival to the orchestrator via SendMessage, and enters steady state. It is single-instance per swarm — if one is already active, skip this step.
If spawned as Phase 1 only (plan prompt): Return the plan + criteria + tier. The parent handles human approval and spawns a Phase 2+ orchestrator. If spawned with pre-approval (full execution prompt): Proceed directly to Phase 2.
Split along architecture boundaries. If CLAUDE.md defines Agent Zones (a table mapping zones to paths and merge order), use them to guide decomposition — one task per zone, foundation-first merge order.
Use TaskCreate for each work unit. Acceptance criteria in the description field must use checklist format so they can be mechanically verified:
```
TaskCreate(
  subject: "Implement auth module",
  description: "- [ ] JWT login endpoint at POST /auth/login\n- [ ] Token validation middleware on protected routes\n- [ ] Tests pass for both",
  activeForm: "Implementing auth module"
)
```
DAG validation. After setting dependencies with TaskUpdate(addBlockedBy/addBlocks):
- Verify each blockedBy edge has a matching blocks relationship (may be missing connections)
- Confirm the resulting graph is acyclic — see the sketch below
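For the acyclicity half, one lightweight approach is to export the dependency edges and feed them to tsort, which fails loudly on a cycle (the edges file and its export from TaskList are assumptions):

```bash
# edges.txt holds "blocker blocked" task-ID pairs exported from TaskList, e.g.:
#   1 2
#   2 3
tsort edges.txt > merge-order.txt \
  || echo "cycle detected — fix dependencies before dispatch" >&2
```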
Contracts. When zones cross a boundary (e.g., backend <-> frontend), write a contract to .claude/swarm/contracts.md defining the interface before dispatching.

Write decomposition plan to .claude/swarm/plan.md.
Write context file. Before dispatching workers, write .claude/swarm/context.md capturing the approved plan, acceptance criteria, and the orchestrator's key reasoning.
This file bridges the context gap — workers read it on init to recover the orchestrator's reasoning without requiring fork-level context inheritance. Keep it concise (under 200 lines) and factual. The shepherd (if spawned) also reads this file to pick up the current swarm's plan.
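A sketch of what the file might look like — the section names are assumptions, not a fixed schema:

```bash
# Illustrative skeleton only; actual sections are up to the orchestrator.
cat > .claude/swarm/context.md <<'EOF'
# Swarm Context

## Plan summary
One paragraph: what we are building and why.

## Acceptance criteria
- [ ] criterion 1
- [ ] criterion 2

## Key decisions
- Decision and the reasoning behind it.

## Contracts
See .claude/swarm/contracts.md for cross-zone interfaces.
EOF
```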
Update the ticket (if a ticket was newly created in Phase 1). Write the finalized acceptance-criteria checklist to the ticket body. For a reused ticket that already contains criteria, skip this step — don't duplicate. See /pds:ticket for the gh issue edit pattern.
Create the team:
```
TeamCreate(team_name="project-name", description="Working on feature X")
```
Read tier from .claude/swarm/tier. Spawn workers with tier-appropriate model overrides:
```
# Lite — haiku workers
Task(worker, team_name="project-name", name="worker-auth",
    model="haiku",
    prompt="Implement auth module per task description. Run /pds:verify before reporting done.")

# Med — no model override needed (sonnet is the agent default)
Task(worker, team_name="project-name", name="worker-auth",
    prompt="Implement auth module per task description. Run /pds:verify before reporting done.")

# Heavy — workers stay sonnet (no override), but use more workers for parallelism
```
Use Task(validator) for validation tasks, Task(researcher) for research, etc. The typed syntax restricts which agent definitions can fulfill the spawn. Pass the tier-appropriate model override, omitting it where the tier matches the agent default — see the Swarm Tiers table above.
Worktree isolation: If workers will edit overlapping files, spawn them with isolation: "worktree" so each gets an isolated copy of the repo. If workers touch non-overlapping files (different modules/skills), they can share the current worktree — but document the boundary in each worker's prompt to prevent collisions.
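Under the hood, worktree isolation amounts to a per-worker checkout; a rough by-hand equivalent (branch and path names are illustrative):

```bash
# One isolated copy of the repo per worker, on its own branch.
git worktree add .worktrees/worker-auth -b worker-auth
git worktree add .worktrees/worker-api  -b worker-api

# Cleanup later (Phase 6 teardown does this per directory).
git worktree remove .worktrees/worker-auth
```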
Assign initial tasks to workers:
```
TaskUpdate(taskId="1", owner="worker-auth", status="in_progress")
```
Workers implement autonomously using a pull model:
- TaskGet for requirements and acceptance criteria
- SendMessage for cross-agent coordination or to report blockers
- advisor_consult for substance questions (at lite tier, or when the shepherd is down)
- /pds:verify before declaring done
- TaskUpdate(taskId="1", status="completed") when done
- TaskList and self-claim the next unblocked task (prefer lowest ID)
- TaskCreate if they discover additional work

Monitor and backpressure. Check progress via TaskList. On TeammateIdle events:
- TaskGet first — if the agent awaits a blocked dependency, no action needed
- Otherwise SendMessage to re-activate
- On timeout, SendMessage again; on a second timeout, use TaskStop and reassign the task

Shepherd is idle-resilient. The shepherd spawned in Phase 1 continues to respond to SendMessage during Phase 3 and later. If the shepherd goes idle with no traffic, that's normal — proactive flagging is evidence-based, not scheduled. The shepherd will log observations as they accrue.
Comment on ticket (if one exists). Post a short comment: "Phase: dispatch. Tier: <tier>. Workers: <count>." See /pds:ticket.
Hook note: PDS hooks log WorktreeCreate and WorktreeRemove events as workers start and finish. These appear in the audit log for lifecycle traceability.
Workers run /pds:verify (self-check) before reporting task complete.
Pipeline validation — spawn the validator when the FIRST task completes (don't wait for all workers):
```
Task(validator, team_name="project-name", name="validator",
    prompt="Check TaskList for completed tasks. Merge branches as they complete, run tests
            incrementally. Write structured report to .claude/swarm/validation-report.md.")
```
The validator monitors TaskList continuously, merges and tests incrementally. The report must include these JSON-checkable fields:
- merge_status: "merged" | "conflict" | "failed" per branch
- test_counts: { "total": N, "passed": N, "failed": N, "skipped": N }
- criteria_verdicts: [ { "criterion": "...", "status": "pass"|"fail", "evidence": "..." } ]
- overall: "ready" | "needs_fixes"

LLM evaluation (the validator's Stop hook) supplements these mechanical checks — it does not replace them.
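If the structured fields are embedded as a JSON block, a mechanical check could look like this — the extraction step and file name are assumptions:

```bash
# Assumes the JSON block was extracted from validation-report.md into report.json.
jq -e '.overall == "ready"
       and (.test_counts.failed == 0)
       and (.criteria_verdicts | all(.status == "pass"))' report.json \
  || echo "validation not clean — loop back to Phase 4" >&2
```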
If issues found:
- Reopen the task: TaskUpdate(taskId="1", status="in_progress", description="Fix: ...")
- Escalate to human after 2 failed validation cycles — don't loop indefinitely
Flip ticket checkboxes (if a ticket exists). For each criteria_verdicts entry with status: "pass", flip the matching - [ ] to - [x] in the ticket body via gh issue edit --body-file. On overall "needs_fixes", post a comment summarizing which criteria failed. See /pds:ticket.
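A hedged sketch of the checkbox flip — the criterion text and sed pattern are illustrative, and /pds:ticket may do this differently:

```bash
num=$(cat .claude/swarm/ticket)
gh issue view "$num" --json body --jq .body > /tmp/ticket-body.md

# Flip the matching unchecked box for a passed criterion (GNU sed; macOS needs sed -i '').
sed -i 's|- \[ \] JWT login endpoint at POST /auth/login|- [x] JWT login endpoint at POST /auth/login|' /tmp/ticket-body.md

gh issue edit "$num" --body-file /tmp/ticket-body.md
```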
Parallel /finish. Run /pds:finish on each task branch simultaneously — each branch gets its own finish (rebase, clean history, post-rebase tests) in parallel. Wait for all to complete before proceeding.
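A rough sketch of the parallelism, assuming one worktree per branch and a placeholder test command — the real /pds:finish protocol covers the rebase, history cleanup, and post-rebase tests:

```bash
# Fan out one finish per worker worktree, then wait for all of them.
for wt in .worktrees/worker-*; do
  (
    cd "$wt" &&
    git rebase --autosquash coordinator-branch &&   # rebase + squash fixups (Git 2.44+, non-interactive)
    ./run-tests.sh                                  # placeholder: post-rebase tests
  ) &
done
wait   # every branch must complete before Phase 5 review begins
```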
Med/Heavy tier: Spawn a reviewer for pre-human code review:
```
Task(reviewer, team_name="project-name", name="reviewer",
    model="<tier-model>",
    prompt="Review the diff against acceptance criteria from Phase 1. Send your review report via SendMessage when done.")
```
Tier models — med: omit model (sonnet default). Heavy: model="opus".
Write reviewer report to .claude/swarm/review-report.md after receiving it via SendMessage.
Lite tier: Orchestrator performs a lightweight diff review and writes .claude/swarm/review-report.md directly (no reviewer spawn). The PR gate checks file existence, not authorship.
.claude/swarm/review-report.md is required — PR gate checks for this file regardless of tier.
Med/Heavy tier: Spawn a documenter if user-facing docs are affected:
```
Task(documenter, team_name="project-name", name="documenter",
    prompt="Update docs for the changes in this PR. Send summary via SendMessage when done.")
```
Human approval gate. Present the consolidated package before creating the PR:
```
ExitPlanMode(plan="## Proposed Merge\n<diff summary>\n\n## Validation\n<key results from validation-report.md>\n\n## Review\n<key findings from review-report.md>")
```
The parent responds with plan_approval_response. On approval, create PR. On rejection, return to earlier phases as directed.
Create PR with full context. Include Closes #<ticket-num> in the PR body if a ticket exists (read from .claude/swarm/ticket):
```bash
gh pr create --title "feat: ..." --body "## Summary\n...\n## Acceptance Criteria\n...\n## Validation\n...\n## Issues\n...\n\nCloses #<ticket-num>"
```
Note: The PR gate blocks gh pr create unless phase is consolidate+ AND both validation-report.md and review-report.md exist.
Comment on ticket (if one exists) linking the PR: gh issue comment <ticket-num> --body "PR opened: <pr-url>". See /pds:ticket.
Do not merge. The PR is the human gate. The orchestrator creates the PR and reports it — the human merges after review.
When merging worker branches back into the coordinator, use a rebase-then-fast-forward approach to keep history clean.
1. git rebase coordinator-branch
2. git merge --ff-only worker-branch (from coordinator worktree)
3. git branch -d worker-branch

When multiple workers complete in parallel, establish a merge order (foundation-first, smaller changes first) and merge sequentially:
```
Merge order: [W1, W2, W3, ... WN]

Round 1: W1 rebases onto coordinator (conflict-free), merges
         W2..WN rebase onto updated coordinator
Round 2: W2 rebases onto coordinator, merges
         W3..WN rebase onto updated coordinator
...
Round N: WN rebases onto coordinator, merges
Done.
```
The worker that is merging owns their conflicts. They understand their changes best and resolve during rebase. Never force through a conflict resolution without testing.
```bash
# Rebase worker onto coordinator
git rebase coordinator-branch

# Squash fixup commits before merging (non-interactive)
git rebase --autosquash coordinator-branch

# Fast-forward merge (coordinator worktree)
git merge --ff-only worker-branch

# Abort a rebase if things go wrong
git rebase --abort

# Continue rebase after resolving conflicts
git add <resolved-files>
git rebase --continue

# Clean up after merge
git branch -d worker-branch
```
Landing approved PRs on the main branch:
1. Confirm you are on main and in the correct worktree
2. gh pr list to find the approved PR
3. gh pr checks <number>, then gh pr merge <number> --merge
4. git push origin main

Spawn the scout (Phase 6):

```
Task(scout, team_name="project-name", name="scout",
model="<tier-model>",
prompt="Read .claude/instincts.md. Update counts for re-observed patterns. Propose new instincts. Flag high-confidence patterns for skill promotion. Run /pds:eval on skills exercised in this swarm. Compact .claude/shepherd-journal.md (keep 3 most recent swarms verbatim, digest older into Historical Digest, promote 3+-observation patterns to instincts). Distill key learnings: write 1-2 auto-memory entries (project or feedback type) capturing decisions that future sessions need, patterns worth remembering, and constraints discovered — skip anything derivable from code or git history. If telemetry exists, run scripts/detect-patterns.sh and scripts/efficiency-chart.sh — include pattern results and efficiency ratio in the report. Permission audit: read .claude/settings.local.json and .claude/settings.json — identify glob-style allow patterns in local that should be promoted to project-level settings (exclude one-off paths). Write a '### Permission Promotions' section in the report. Write report to .claude/swarm/scout-report.md. Send summary via SendMessage when done.")
```

Tier models — lite: model="haiku" (default). Med: omit (haiku default). Heavy: model="sonnet".

Heavy tier only — spawn the auditor:

```
Task(auditor, team_name="project-name", name="auditor",
prompt="Scan the codebase for tech debt, missing tests, and inconsistencies. File findings as GitHub issues. Send summary via SendMessage when done.")
```

The scout:

- Writes .claude/swarm/scout-report.md (required — the TeamDelete gate checks for this file)
- Runs /pds:eval and compacts .claude/shepherd-journal.md
- If .claude/telemetry.jsonl exists, runs scripts/detect-patterns.sh to detect usage patterns and proposes instinct entries for recurring ones. Results appear in the ### Telemetry-Detected Patterns section of the scout report.
- Audits permissions: reads .claude/settings.local.json and .claude/settings.json, identifies recurring allow patterns in local (e.g., Bash(git add:*), Bash(gh pr:*)) that aren't already in project-level settings, and recommends promotions in a ### Permission Promotions section of the scout report. One-off commands (specific file paths, session artifacts) are excluded. Only glob-style patterns (Bash(git *:*), Bash(gh *:*), tool names) qualify for promotion.

Teardown. Before calling TeamDelete, shut down each active agent:

```
SendMessage(type="shutdown_request", recipient="shepherd", content="Swarm complete, shutting down.")
```

The shepherd marks its current swarm section **Status**: graceful in the journal and responds with shutdown_response. The SubagentStop hook (hooks/scripts/shepherd-finalize.sh) also fires on abort paths, so the journal is finalized even if shutdown is interrupted.

```
SendMessage(type="shutdown_request", recipient="worker-auth", content="Work complete, shutting down.")
SendMessage(type="shutdown_request", recipient="validator", content="Work complete, shutting down.")
# ... for each active agent
```

Wait for shutdown_response from each agent before proceeding, then call TeamDelete.
Note: The teardown gate blocks TeamDelete unless phase is knowledge AND all 3 reports exist. TeamDelete also fails if agents are still active — always shut down first.

Cleanup:

- For each .worktrees/ directory created during the swarm, run git worktree remove <path> (call ExitWorktree if the orchestrator is inside a worktree)
- Archive .claude/swarm/*.md to docs/swarm-reports/<YYYY-MM-DD-HHmm>/
- Once all tasks are completed and all branches are merged, delete worker branches: git branch -d <branch>
- Comment on the ticket: gh issue comment <ticket-num> --body "Swarm complete. Archive: docs/swarm-reports/<YYYY-MM-DD-HHmm>/. PR: <pr-url>." The ticket closes automatically when the PR merges (via Closes #<num>). See /pds:ticket.
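The archive step in that list is mechanical enough to sketch directly, using the <YYYY-MM-DD-HHmm> convention above:

```bash
# Timestamped archive directory for this swarm's artifacts.
ts=$(date +%Y-%m-%d-%H%M)
mkdir -p "docs/swarm-reports/$ts"
mv .claude/swarm/*.md "docs/swarm-reports/$ts/"
```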
Mechanical enforcement of phase transitions via PreToolUse hooks on the orchestrator:

| Gate | Hook Script | Trigger | Blocks Unless |
|---|---|---|---|
| PR gate | orchestrator-pr-gate.sh | gh pr create in Bash | Phase >= consolidate + validation-report.md + review-report.md exist |
| Teardown gate | orchestrator-teardown-gate.sh | TeamDelete | Phase = knowledge + all 3 reports exist |
| Validator stop | Prompt hook in validator.md | Validator Stop | Structured report written to .claude/swarm/validation-report.md |
| Shepherd finalize | shepherd-finalize.sh (SubagentStop) | Shepherd subagent stops (graceful or abort) | Always runs — finalizes journal; never blocks |
All gates are no-ops when .claude/swarm/ doesn't exist (non-swarm tasks pass through). Phase checks are defense-in-depth — if the phase file is absent, gates fall through to artifact-only checks.
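A minimal sketch of the PR gate's decision logic, assuming the hook blocks by exiting nonzero — the real orchestrator-pr-gate.sh may differ:

```bash
#!/usr/bin/env bash
# No-op outside swarms: non-swarm tasks pass straight through.
[ -d .claude/swarm ] || exit 0

# Defense-in-depth: enforce phase ordering only if the phase file exists.
if [ -f .claude/swarm/phase ]; then
  case $(cat .claude/swarm/phase) in
    consolidate|knowledge) ;;   # consolidate or later: OK
    *) echo "blocked: phase has not reached consolidate" >&2; exit 1 ;;
  esac
fi

# Artifact checks always apply.
for f in validation-report.md review-report.md; do
  [ -f ".claude/swarm/$f" ] || { echo "blocked: missing $f" >&2; exit 1; }
done
```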
All phase artifacts are written to .claude/swarm/ (ephemeral, archived to docs/swarm-reports/ in cleanup):
| File | Phase | Producer | Required By |
|---|---|---|---|
| phase | all | orchestrator | PR gate, teardown gate |
| tier | 1 | orchestrator | Dispatch (model selection) |
| plan.md | 2 | orchestrator | — |
| context.md | 2 | orchestrator | Worker init, shepherd |
| contracts.md | 2 | orchestrator | — |
| checkpoint.json | all | orchestrator | Restart recovery |
| validation-report.md | 4 | validator | PR gate, teardown gate |
| review-report.md | 5 | reviewer (or orchestrator at lite tier) | PR gate, teardown gate |
| scout-report.md | 6 | scout | Teardown gate |
| ticket | 1 | orchestrator (via /pds:ticket) | All phases (ticket reference) |
The shepherd's journal lives at .claude/shepherd-journal.md (project-level, not under .claude/swarm/). It persists across swarms and is gitignored by default. Scout compacts it in Phase 6.
- /pds:grill — Requirement interrogation (Phase 1)
- /pds:ticket — GitHub issue find-or-create, plan + criteria tracking (Phase 1 + all phases)
- /pds:verify — Completion self-check (Phase 4 worker exit)
- /pds:finish — Branch completion protocol (Phase 5)
- /pds:team — Agent roster, coordination tools, and protocols (including graph-vs-substance routing)
- /pds:voice — Terse register for orchestrator-to-user inline status