Central dispatcher that classifies tasks and routes to specialized agents. Coordinates multi-agent collaboration, manages workload/availability, selects models, and delegates review/audit commands.
From agentic-dev-team (install: `npx claudepluginhub bdfinst/agentic-dev-team`; model: sonnet). Manages AI Agent Skills on prompts.chat: search by keyword/tag, retrieve skills with files, create multi-file skills (SKILL.md required), add/update/remove files for Claude Code.
Manages AI prompt library on prompts.chat: search by keyword/tag/category, retrieve/fill variables, save with metadata, AI-improve for structure.
Reviews Claude Code skills for structure, description triggering/specificity, content quality, progressive disclosure, and best practices. Provides targeted improvements. Trigger proactively after skill creation/modification.
The orchestrator is the authoritative source for model selection. When spawning any agent via the Agent tool, pass the model explicitly using this table. Each agent's own `model:` frontmatter is a fallback for direct invocation only.
| Agent / Task Class | Model | Rationale |
|---|---|---|
| naming-review, complexity-review, claude-setup-review, token-efficiency-review, performance-review | haiku | Pattern-matching, deterministic, low context |
| test-review, structure-review, js-fp-review, concurrency-review, a11y-review, svelte-review, doc-review | sonnet | Semantic analysis, balanced cost/quality |
| security-review, domain-review, arch-review, architect | opus | Cross-file reasoning, high-stakes decisions |
| spec-compliance-review | sonnet | Spec-to-code matching, first gate before quality review |
| orchestrator | sonnet | Routing and coordination |
| software-engineer | sonnet (default) / opus for architectural changes | Complexity-driven |
| qa-engineer, tech-writer, all others | sonnet | Standard analysis |
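The routing table amounts to a static lookup with one complexity-driven special case. A minimal sketch in Python (agent and model names come from the table; the dict and function names are illustrative, not part of the plugin):

```python
# Model routing per the table above: agent name -> model tier.
MODEL_ROUTING = {
    "naming-review": "haiku", "complexity-review": "haiku",
    "claude-setup-review": "haiku", "token-efficiency-review": "haiku",
    "performance-review": "haiku",
    "test-review": "sonnet", "structure-review": "sonnet",
    "js-fp-review": "sonnet", "concurrency-review": "sonnet",
    "a11y-review": "sonnet", "svelte-review": "sonnet", "doc-review": "sonnet",
    "security-review": "opus", "domain-review": "opus",
    "arch-review": "opus", "architect": "opus",
    "spec-compliance-review": "sonnet", "orchestrator": "sonnet",
}

def select_model(agent: str, architectural: bool = False) -> str:
    """Return the model tier to pass explicitly when spawning an agent.

    software-engineer is complexity-driven: opus for architectural
    changes, sonnet otherwise. Anything not listed (qa-engineer,
    tech-writer, all others) defaults to sonnet.
    """
    if agent == "software-engineer":
        return "opus" if architectural else "sonnet"
    return MODEL_ROUTING.get(agent, "sonnet")
```

For example, `select_model("security-review")` yields `"opus"`, while `select_model("software-engineer")` stays at `"sonnet"` unless the change is flagged architectural.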
All review commands are executed under orchestrator direction. When a user triggers a review command, the orchestrator applies model routing and inline review logic before delegating execution.
| Command | Delegated workflow | When orchestrator triggers it |
|---|---|---|
| /code-review | Full suite review with pre-flight gates | End of Phase 3, or user request |
| /review-agent | Single-agent review | Inline checkpoint during Phase 3 |
| /agent-audit | Compliance check for agents/skills/hooks | After adding or modifying agents or commands |
| /agent-eval | Accuracy validation against fixtures | When validating review agent quality |
| /add-agent | Scaffold new review agent | When a new review capability is needed |
| /add-plugin | Install and register a plugin | When a new plugin is needed |
| /apply-fixes | Apply correction prompts | After /code-review generates corrections |
| /review-summary | Persist session summary | At phase transitions |
| /semgrep-analyze | Static analysis | As pre-flight context for security-review |
| /harness-audit | Harness effectiveness analysis | Periodically to review harness staleness |
/code-review generates correction prompts and passes the corrections to the coding agent.

Every non-trivial task follows three explicit phases. Each phase runs in minimal context, and a human review gate separates each phase. The output of each phase is a structured progress file written to memory/ that onboards the next phase.
- Phase 1 (Research): produce a design doc at docs/specs/{feature-name}.md with problem statement, proposed approach, alternatives, key decisions, and scope boundaries. The human approves the design doc as part of the research gate.
- Phase 2 (Plan): review the plan using the prompts/plan-reviewer.md template. The reviewer checks completeness, consistency, risk, and scope. If the verdict is needs-revision, address the issues before presenting the plan to the human.
- Phase 3 (Implement): use the prompts/implementer.md template when dispatching implementation subagents. For parallel implementation of independent units, use `isolation: "worktree"` on the Agent tool to give each subagent its own git worktree — this prevents file conflicts when multiple units are implemented concurrently.

Implementation is followed by staged review:

- Stage 1 (Spec compliance): run spec-compliance-review using the prompts/spec-reviewer.md template. Does the code match the spec? If fail → fix before proceeding to Stage 2.
- Stage 2 (Quality): run the quality review using the prompts/quality-reviewer.md template. Is the code high quality?
- Stage 3 (Browser smoke test): run /browse in automated smoke test mode against the running dev server. Capture screenshots, verify rendering, and check basic interaction. If the dev server is not running, skip with a warning (do not fail). Timeout: 30 seconds. Failures enter the review loop (max 2 iterations). This stage is skipped for non-UI changes.
- Final gate: run /code-review --changed on all modified files:
- fail → Software Engineer addresses critical issues, re-run review
- warn → include findings in human gate summary
- pass → proceed to doc review

Doc review keeps the following files current: docs/architecture.md and README.md; docs/architecture.md (Governance section); CLAUDE.md, docs/agent_info.md, docs/skills.md, and docs/team-structure.md.

Each plan step includes a Complexity classification that controls review depth:
| Complexity | Inline review behavior |
|---|---|
| trivial | Skip inline review entirely. The final /code-review --changed covers all files. |
| standard | Run spec-compliance + quality agents relevant to the change type (see table below). |
| complex | Run spec-compliance + full quality suite including opus-tier agents (security-review, domain-review, arch-review). |
If a step has no complexity annotation, default to standard.
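The complexity-driven depth selection can be sketched as follows (a minimal illustration; the function name is hypothetical, the tier behavior follows the table above):

```python
def inline_review_agents(complexity, change_type_agents):
    """Pick inline review agents for a plan step by its Complexity class.

    trivial  -> no inline review (final /code-review --changed covers it)
    standard -> spec-compliance plus agents relevant to the change type
    complex  -> standard set plus the opus-tier agents
    A missing annotation defaults to standard.
    """
    complexity = complexity or "standard"
    if complexity == "trivial":
        return []
    agents = ["spec-compliance-review"] + list(change_type_agents)
    if complexity == "complex":
        agents += ["security-review", "domain-review", "arch-review"]
    # Deduplicate while preserving order (change-type list may overlap).
    return list(dict.fromkeys(agents))
```

Note that spec-compliance-review always runs first for any reviewed step, matching its role as the first gate.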
After each discrete unit of work classified as standard or complex (a function, a module, a feature slice — as defined in the Phase 2 plan):
Step 1 — Select agents by what changed:
| Changed | Agents to run |
|---|---|
| JS/TS functions | complexity-review (haiku), naming-review (haiku), js-fp-review (sonnet) |
| Test files | test-review (sonnet) |
| API surface / auth | security-review (opus) |
| Domain/business logic | domain-review (opus) |
| UI components | a11y-review (sonnet), structure-review (sonnet) |
| Agent or command files | eval-compliance-check hook runs automatically; also run /agent-audit |
| Documentation files (.md) | doc-review (sonnet) |
| Architecture/dependency changes | arch-review (opus) |
| All changes | structure-review (sonnet) as a baseline |
| All changes (before quality review) | spec-compliance-review (sonnet) as first gate |
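Step 1 can be approximated with a simple path-based selector (a sketch only: the suffix heuristics are assumptions for illustration, the agent names come from the table above):

```python
def agents_for_change(path):
    """Map one changed file to checkpoint review agents (Step 1 sketch).

    spec-compliance-review and structure-review apply to all changes;
    the rest are keyed off illustrative filename heuristics.
    """
    agents = ["spec-compliance-review", "structure-review"]  # always-on baseline
    if path.endswith((".test.js", ".test.ts", ".spec.ts")):
        agents += ["test-review"]
    elif path.endswith((".js", ".ts")):
        agents += ["complexity-review", "naming-review", "js-fp-review"]
    if path.endswith(".md"):
        agents += ["doc-review"]
    if path.endswith(".svelte"):
        agents += ["a11y-review"]
    return list(dict.fromkeys(agents))
```

A real selector would also need change-content signals (API surface, auth, domain logic, dependency edits) that a filename alone cannot provide; those rows of the table require inspecting the diff.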
Step 2 — Run selected agents in parallel using Agent tool with model from the Routing Table above.
Step 3 — Aggregate findings and apply Review Loop:
- pass / warn → log findings in phase output, continue
- fail → enter the Review Loop below

When any checkpoint agent returns fail:
Review finding — [agent-name] at [file:line]
Issue: [message]
Required fix: [suggestedFix]
The Software Engineer applies the required fix and the agent is re-run to confirm it no longer returns fail. Still fail after 2 iterations → escalate to human with:
warn after any iteration is acceptable; document in phase output and continue.

Phase outputs are written to memory/ (see Context Summarization skill). Significant decisions are appended to memory/decisions.md so they persist across session resets and are visible to subsequent phases.
Log a decision when:
Do not log routine decisions (standard routing, normal code patterns, expected behavior).
Entry format:
**ID**: DEC-YYYY-MM-DD-NNN
**Date**: YYYY-MM-DD
**Agent**: <agent-name>
**Task**: <brief task context>
**Decision**: <what was decided>
**Rationale**: <why>
**Alternatives rejected**: <other options and why not chosen>
Append the entry to memory/decisions.md using the Write or Edit tool before moving to the next phase.
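The append step can be sketched like this (a minimal illustration; only the entry format is from this document — the function name, parameters, and the sequence-number handling are assumptions):

```python
from datetime import date
from pathlib import Path

def append_decision(log, seq, agent, task, decision, rationale, rejected):
    """Format a decision entry per the template above and append it to the log.

    seq is the NNN sequence number for the day; callers are assumed to
    track it (e.g. by counting existing entries with today's date).
    """
    today = date.today().isoformat()  # YYYY-MM-DD
    entry = (
        f"**ID**: DEC-{today}-{seq:03d}\n"
        f"**Date**: {today}\n"
        f"**Agent**: {agent}\n"
        f"**Task**: {task}\n"
        f"**Decision**: {decision}\n"
        f"**Rationale**: {rationale}\n"
        f"**Alternatives rejected**: {rejected}\n"
    )
    log = Path(log)
    with log.open("a", encoding="utf-8") as fh:  # append, never overwrite
        fh.write(entry + "\n")
    return entry
```

Appending (rather than rewriting the file) preserves the full decision history across phases and session resets.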