This skill should be used when coordinating agents, delegating tasks to specialists, or when "dispatch agents", "which agent", or "multi-agent" are mentioned.
Coordinates multiple AI agents by delegating specialized tasks to the most suitable subagents for complex workflows.
Install: `npx claudepluginhub outfitter-dev/outfitter`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
References: `references/agent-skills.md`, `references/workflows.md`

Orchestrate outfitter subagents by matching tasks to the right agent + skill combinations.
For complex multi-agent tasks, start with the Plan subagent to research and design the orchestration strategy before execution.
Complex task arrives
│
├─► Plan subagent (research stage)
│ └─► Explore codebase, gather context
│ └─► Identify which agents and skills needed
│ └─► Design execution sequence (sequential, parallel, or hybrid)
│ └─► Return orchestration plan
│
└─► Execute plan (dispatch agents per plan)
Plan subagent benefits:
When to use Plan subagent:
For long-running orchestration, load the context-management skill. It teaches:
Key principle: Main conversation context is precious. Delegate exploration and research to subagents — only their summaries return, keeping main context lean.
Coordination uses roles (what function is needed) mapped to agents (who fulfills it). This allows substitution when better-suited agents are available.
| Role | Agent | Purpose |
|---|---|---|
| coding | engineer | Build, implement, fix, refactor |
| reviewing | reviewer | Evaluate code, PRs, architecture, security |
| research | analyst | Investigate, research, explore |
| debugging | debugger | Diagnose issues, trace problems |
| testing | tester | Validate, prove, verify behavior |
| challenging | skeptic | Challenge complexity, question assumptions |
| specialist | specialist | Domain expertise (CI/CD, design, accessibility, etc.) |
| patterns | analyst | Extract reusable patterns from work |
Additional agents may be available in your environment (user-defined, plugin-provided, or built-in). When dispatching:
Examples of role substitution:
- coding role: `senior-engineer`, `developer`, `engineer`
- reviewing role: `security-auditor`, `code-reviewer`, `reviewer`
- research role: `research-engineer`, `docs-librarian`, `analyst`
- specialist role: `cicd-expert`, `design-agent`, `accessibility-auditor`, `bun-expert`

Route by role, then select the best available agent for that role:
User request arrives
│
├─► "build/implement/fix/refactor" ──► coding role
│
├─► "review/critique/audit" ──► reviewing role
│
├─► "investigate/research/explore" ──► research role
│
├─► "debug/diagnose/trace" ──► debugging role
│
├─► "test/validate/prove" ──► testing role
│
├─► "simplify/challenge/is this overkill" ──► challenging role
│
├─► "deploy/configure/CI/design/a11y" ──► specialist role
│
└─► "capture this workflow/make reusable" ──► patterns role
One agent completes, then passes results to the next:
research (investigate) → coding (implement) → reviewing (verify) → testing (validate)
Use when: Clear stages, each requires different expertise.
Multiple agents work simultaneously using `run_in_background: true`:
┌─► reviewing (code quality)
│
task ──┼─► research (impact analysis)
│
└─► testing (regression tests)
Use when: Independent concerns, time-sensitive, comprehensive coverage needed.
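Conceptually, the fan-out resembles submitting independent jobs to a pool and collecting their summaries. The sketch below is an analogy only: `dispatch` is a hypothetical stand-in for a backgrounded Task tool call, not a real API.

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(subagent_type: str, prompt: str) -> str:
    """Stand-in for a Task tool call made with run_in_background: true."""
    return f"[{subagent_type}] done: {prompt}"

def fan_out(task: str) -> list[str]:
    """Dispatch independent concerns concurrently; collect all summaries."""
    jobs = [
        ("reviewer", f"Code quality review: {task}"),
        ("analyst", f"Impact analysis: {task}"),
        ("tester", f"Regression tests: {task}"),
    ]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(dispatch, agent, prompt) for agent, prompt in jobs]
        return [future.result() for future in futures]  # submission order
```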
Build → challenge → refine:
coding (propose) ←→ challenging (evaluate) → coding (refine)
Use when: Complex architecture, preventing over-engineering, high-stakes decisions.
Narrow down, then fix:
research (scope) → debugging (root cause) → coding (fix) → testing (verify)
Use when: Bug reports, production issues, unclear symptoms.
| Task | Skills |
|---|---|
| New feature | software-craft, tdd-fieldguide |
| Bug fix | debugging → software-craft |
| Refactor | software-craft + sanity-check |
| API endpoint | hono-fieldguide, software-craft |
| React component | react-fieldguide, software-craft |
| AI feature | software-craft |
| Task | Skills |
|---|---|
| PR review | code-review |
| Architecture review | systems-design |
| Performance audit | performance |
| Security audit | security |
| Pre-merge check | code-review + prove-it-works |
| Task | Skills |
|---|---|
| Codebase exploration | codebase-analysis |
| Research question | research |
| Unclear requirements | pathfinding |
| Status report | status, report-findings |
| Task | Skills |
|---|---|
| Feature validation | prove-it-works |
| TDD implementation | tdd-fieldguide |
| Integration testing | prove-it-works |
Run agents asynchronously for parallel work:

```json
{
  "description": "Security review",
  "prompt": "Review auth module for vulnerabilities",
  "subagent_type": "reviewer",
  "run_in_background": true
}
```
Retrieve results with `TaskOutput`:

```json
{
  "task_id": "agent-abc123",
  "block": true
}
```
Sequence agents for complex workflows — each agent's output informs the next:
research agent → "Found 3 auth patterns in use"
↓
coding agent → "Implementing refresh token flow using pattern A"
↓
reviewing agent → "Verified implementation, found 1 issue"
↓
coding agent → "Fixed issue, ready for merge"
Pass context explicitly between agents via prompt.
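Explicit hand-off can be sketched as each stage embedding the previous stage's summary in its prompt. Again, `dispatch` is a hypothetical stand-in for a blocking Task tool call, and the prompts are illustrative.

```python
def dispatch(subagent_type: str, prompt: str) -> str:
    """Stand-in for a Task tool call; returns the subagent's summary."""
    return f"[{subagent_type}] handled: {prompt[:40]}"

def chain(task: str) -> str:
    """Each stage's prompt carries the prior stage's summary forward."""
    findings = dispatch("analyst", f"Investigate: {task}")
    impl = dispatch("engineer", f"Implement {task}. Prior findings: {findings}")
    review = dispatch("reviewer", f"Review the change. Notes: {impl}")
    return review
```

Because only summaries travel between stages, the main conversation context stays lean even across a long pipeline.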
Continue long-running work across invocations:
```json
{
  "description": "Continue security analysis",
  "prompt": "Now examine session management",
  "subagent_type": "reviewer",
  "resume": "agent-abc123"
}
```
Agent preserves full context from previous execution.
Use cases:
Override model for specific needs:
```jsonc
{
  "subagent_type": "analyst",
  "model": "haiku" // Fast, cheap for exploration
}
```
Consult `CLAUDE.md` before applying defaults.

When agents face implementation choices:
These principles apply across all roles. Agents should surface decisions to the orchestrator when trade-offs are significant.
Orchestrators and agents should:
Progress format:
▓▓░░░░░░░░ [1/5] research: Exploring auth patterns
▓▓▓▓░░░░░░ [2/5] coding: Implementing refresh token flow
CRITICAL: Subagents MUST NOT perform git operations (commit, push, branch creation) when running in parallel.
Only the orchestrator handles git state. Subagents write code to the filesystem and report completion.
"I need to build X" → coding role + TDD skills
"Review this PR" → reviewing role + code-review
"Why is this broken?" → debugging role + debugging
"Is this approach overkill?" → challenging role + sanity-check
"Prove this works" → testing role + prove-it-works
"What's the codebase doing?" → research role + codebase-analysis
"Deploy to production" → specialist role + domain skills
"Make this workflow reusable" → patterns role + codify