Orchestrate baselayer subagents for complex tasks. Defines available agents, their skills, and workflows for multi-agent scenarios. Load when coordinating work across agents, delegating tasks, or deciding which agent handles what.
Orchestrates baselayer subagents by matching tasks to the right agent and skill combinations.
```
/plugin marketplace add outfitter-dev/agents
/plugin install baselayer@outfitter
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- references/agent-skills.md
- references/workflows.md

Orchestrate baselayer subagents by matching tasks to the right agent + skill combinations.
For complex multi-agent tasks, start with the Plan subagent to research and design the orchestration strategy before execution.
Complex task arrives
│
├─► Plan subagent (research phase)
│     ├─► Explore codebase, gather context
│     ├─► Identify which agents and skills needed
│     ├─► Design execution sequence (sequential, parallel, or hybrid)
│     └─► Return orchestration plan
│
└─► Execute plan (dispatch agents per plan)
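The research phase above can be kicked off with a Task call along these lines (a sketch — the exact `subagent_type` name for the Plan subagent may differ in your environment, and the migration task is hypothetical):

```json
{
  "description": "Plan orchestration",
  "prompt": "Research the codebase and design an orchestration plan for migrating auth to refresh tokens: which agents and skills are needed, and whether to run them sequentially or in parallel.",
  "subagent_type": "Plan"
}
```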
Plan subagent benefits:
When to use Plan subagent:
Coordination uses roles (what function is needed) mapped to agents (who fulfills it). This allows substitution when better-suited agents are available.
| Role | Agent | Purpose |
|---|---|---|
| coding | senior-dev | Build, implement, fix, refactor |
| reviewing | ranger | Evaluate code, PRs, architecture, security |
| research | analyst | Investigate, research, explore |
| debugging | debugger | Diagnose issues, trace problems |
| testing | tester | Validate, prove, verify behavior |
| challenging | skeptic | Challenge complexity, question assumptions |
| specialist | specialist | Domain expertise (CI/CD, design, accessibility, etc.) |
| patterns | pattern-analyzer | Extract reusable patterns from work |
Additional agents may be available in your environment (user-defined, plugin-provided, or built-in). When dispatching:
Examples of role substitution:
- coding: senior-engineer, developer, senior-dev
- reviewing: security-auditor, code-reviewer, ranger
- research: research-engineer, docs-librarian, analyst
- specialist: cicd-expert, design-agent, accessibility-auditor, bun-expert

Route by role, then select the best available agent for that role:
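For example, if a `security-auditor` agent is available, a security-focused review can be routed to it instead of the default `ranger` (hypothetical payload — which agents exist depends on what is installed):

```json
{
  "description": "Security review",
  "prompt": "Audit the auth module for vulnerabilities",
  "subagent_type": "security-auditor"
}
```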
User request arrives
│
├─► "build/implement/fix/refactor" ──► coding role
│
├─► "review/critique/audit" ──► reviewing role
│
├─► "investigate/research/explore" ──► research role
│
├─► "debug/diagnose/trace" ──► debugging role
│
├─► "test/validate/prove" ──► testing role
│
├─► "simplify/challenge/is this overkill" ──► challenging role
│
├─► "deploy/configure/CI/design/a11y" ──► specialist role
│
└─► "capture this workflow/make reusable" ──► patterns role
One agent completes, passes to next:
research (investigate) → coding (implement) → reviewing (verify) → testing (validate)
Use when: Clear phases, each requires different expertise.
Multiple agents work simultaneously using run_in_background: true:
┌─► reviewing (code quality)
│
task ──┼─► research (impact analysis)
│
└─► testing (regression tests)
Use when: Independent concerns, time-sensitive, comprehensive coverage needed.
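The fan-out in the diagram above maps to three background dispatches issued in the same turn (a sketch using agent names from the role table; each object is its own Task call — the array is only for illustration):

```json
[
  { "description": "Code quality review", "prompt": "Review the payments diff for code quality", "subagent_type": "ranger", "run_in_background": true },
  { "description": "Impact analysis", "prompt": "Assess the blast radius of the payments diff", "subagent_type": "analyst", "run_in_background": true },
  { "description": "Regression tests", "prompt": "Run and extend regression tests around payments", "subagent_type": "tester", "run_in_background": true }
]
```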
Build → challenge → refine:
coding (propose) ←→ challenging (evaluate) → coding (refine)
Use when: Complex architecture, preventing over-engineering, high-stakes decisions.
Narrow down, then fix:
research (scope) → debugging (root cause) → coding (fix) → testing (verify)
Use when: Bug reports, production issues, unclear symptoms.
| Task | Skills |
|---|---|
| New feature | software-engineering, test-driven-development |
| Bug fix | debugging-and-diagnosis → software-engineering |
| Refactor | software-engineering + complexity-analysis |
| API endpoint | hono-dev, software-engineering |
| React component | react-dev, software-engineering |
| AI feature | ai-sdk, software-engineering |
| Task | Skills |
|---|---|
| PR review | code-review |
| Architecture review | software-architecture |
| Performance audit | performance-engineering |
| Security audit | security-engineering |
| Pre-merge check | code-review + scenario-testing |
| Task | Skills |
|---|---|
| Codebase exploration | codebase-analysis |
| Research question | research-and-report |
| Unclear requirements | pathfinding |
| Status report | status-reporting, report-findings |
| Task | Skills |
|---|---|
| Feature validation | scenario-testing |
| TDD implementation | test-driven-development |
| Integration testing | scenario-testing |
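Skills are requested in the dispatched agent's prompt. A bug-fix dispatch combining the mappings above might look like this (the login-timeout scenario is hypothetical):

```json
{
  "description": "Fix login bug",
  "prompt": "Use the debugging-and-diagnosis skill to find the root cause of the login timeout, then apply software-engineering to implement the fix.",
  "subagent_type": "debugger"
}
```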
Run agents asynchronously for parallel work:
```json
{
  "description": "Security review",
  "prompt": "Review auth module for vulnerabilities",
  "subagent_type": "ranger",
  "run_in_background": true
}
```
Retrieve results with TaskOutput:
```json
{
  "task_id": "agent-abc123",
  "block": true
}
```
Sequence agents for complex workflows — each agent's output informs the next:
research agent → "Found 3 auth patterns in use"
↓
coding agent → "Implementing refresh token flow using pattern A"
↓
reviewing agent → "Verified implementation, found 1 issue"
↓
coding agent → "Fixed issue, ready for merge"
Pass context explicitly between agents via prompt.
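Explicit context-passing means embedding the previous agent's findings directly in the next prompt, e.g. (sketch continuing the chain above):

```json
{
  "description": "Implement refresh tokens",
  "prompt": "Prior research found 3 auth patterns in use; pattern A is dominant. Implement the refresh token flow using pattern A.",
  "subagent_type": "senior-dev"
}
```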
Continue long-running work across invocations:
```json
{
  "description": "Continue security analysis",
  "prompt": "Now examine session management",
  "subagent_type": "ranger",
  "resume": "agent-abc123"
}
```
Agent preserves full context from previous execution.
Use cases:
Override model for specific needs:
```json
{
  "subagent_type": "analyst",
  "model": "haiku" // Fast, cheap for exploration
}
```
Consult the project's CLAUDE.md before applying defaults.

When agents face implementation choices:
These principles apply across all roles. Agents should surface decisions to the orchestrator when trade-offs are significant.
Orchestrators and agents should:
Progress format:
░░░░░░░░░░ [1/5] research: Exploring auth patterns
▓▓▓▓░░░░░░ [2/5] coding: Implementing refresh token flow
"I need to build X" → coding role + TDD skills
"Review this PR" → reviewing role + code-review
"Why is this broken?" → debugging role + debugging-and-diagnosis
"Is this approach overkill?" → challenging role + complexity-analysis
"Prove this works" → testing role + scenario-testing
"What's the codebase doing?" → research role + codebase-analysis
"Deploy to production" → specialist role + domain skills
"Make this workflow reusable" → patterns role + patternify