Runtime orchestration of agent teams for parallel work. Use when the user says "create a team", "/orchestrate", or wants multiple Claude instances working together. Supported patterns: review (parallel code review), debate (competing hypotheses), plan (explore + plan + critique), build (parallel implementation), research (parallel investigation), cross-agent (multi-engine comparison via CLI bridges like Codex), and custom teams. This skill handles live team coordination at runtime, distinct from cadre:init, which bootstraps static team definitions and agent configurations for a project.

TRIGGERS. Use this skill when the user says:
- "/orchestrate review src/" or "review this code with a team"
- "/orchestrate debate 'why does X fail'" or "investigate this bug with competing theories"
- "/orchestrate plan 'add authentication'" or "plan this feature with a team"
- "/orchestrate build 'calculator module'" or "build this with a team"
- "/orchestrate research 'compare React vs Vue'" or "research this topic in parallel"
- "/orchestrate cleanup" or "shut down the team"
- Any request to create an agent team or coordinate multiple Claude instances
Orchestrate teams of Claude Code sessions for parallel work. Parse user input to determine the pattern and target, then execute the appropriate team workflow.
Parse $ARGUMENTS to determine:
- The pattern: review, debate, plan, build, research, cross-agent, cleanup, or a custom team
- The target: the files, question, or feature description the team should work on

If no pattern is recognized, ask the user which pattern to use.
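The parsing step above can be sketched as a shell case statement. The sample arguments and variable names here are illustrative, not part of the skill:

```shell
#!/bin/sh
# Illustrative only: $ARGUMENTS is simulated with fixed positional parameters.
set -- review src/

case "$1" in
  review|debate|plan|build|research|cross-agent|cleanup)
    pattern="$1"
    shift
    target="$*"
    ;;
  *)
    pattern="unknown"   # no recognized pattern: ask the user which one to use
    target="$*"
    ;;
esac

echo "pattern=$pattern target=$target"
```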
Before creating any team, verify the environment:
```shell
echo $CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS
```
If not 1, tell the user to enable it and stop.
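A minimal sketch of this preflight check; the variable is set inline here only to simulate an enabled environment, in real use it comes from the user's shell:

```shell
#!/bin/sh
# Simulate an enabled environment; normally inherited from the user's shell.
CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1

if [ "${CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS:-}" != "1" ]; then
  echo "Agent teams disabled. Enable with: export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1"
  exit 1
fi
echo "agent teams enabled"
```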
Each pattern follows the same lifecycle:
1. Spawn the team with `Teammate(spawnTeam)`
2. Create tasks with `TaskCreate`
3. Spawn teammates with the `Task` tool (`team_name` + `name` params)
4. Monitor `TaskList` and incoming messages

For pattern-specific teammate prompts, task structures, and configurations, see reference/patterns.md.
Every teammate prompt MUST include these steps (learned from testing):
```
You are "{name}", a {role} on the {team-name} team.
Your task: {specific task description with file paths}

Steps:
1. Check TaskList and claim task #{n} (set owner to your name, mark in_progress)
2. {Pattern-specific work steps}
3. Mark task #{n} as completed
4. Send a message to the team lead with a brief summary of your findings
```
For debate/plan patterns, add:

```
5. Send messages to {other teammates by name} to {challenge/review/discuss} their findings
```
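Filling the template might look like this sketch; the teammate name, team name, and task text are made up for illustration:

```shell
#!/bin/sh
# Hypothetical values; real ones come from the pattern and the user's request.
name="reviewer-1"; role="code reviewer"; team_name="review-team"; n=1
task="Review src/auth.ts for security issues"

prompt=$(cat <<EOF
You are "$name", a $role on the $team_name team.
Your task: $task

Steps:
1. Check TaskList and claim task #$n (set owner to your name, mark in_progress)
2. Review the assigned files and note findings
3. Mark task #$n as completed
4. Send a message to the team lead with a brief summary of your findings
EOF
)
echo "$prompt"
```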
Default to sonnet for teammates to manage token costs. Use opus only when the user
explicitly requests it or the task requires deep reasoning (architecture, complex debugging).
Pass the `model` parameter when spawning: `model: "sonnet"` in the Task tool call.
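The default-to-sonnet rule can be sketched with a simple heuristic; the keyword list and variable names are assumptions for illustration, not part of the skill:

```shell
#!/bin/sh
# Hypothetical heuristic: opus only on explicit request or deep-reasoning tasks.
task="parallel code review of src/"
user_requested_opus=false

model="sonnet"                    # default: manage token costs
case "$task" in
  *architecture*|*"complex debugging"*) model="opus" ;;
esac
if [ "$user_requested_opus" = "true" ]; then
  model="opus"
fi

echo "model=$model"
```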
Reliable shutdown (handles the known flaky behavior):
1. Send `shutdown_request` to each teammate via `SendMessage`
2. Wait for `shutdown_approved` messages (up to 30 seconds)
3. Re-send `shutdown_request` to any teammate that has not approved
4. Run `Teammate(cleanup)`
5. `rm -rf ~/.claude/teams/{team-name} ~/.claude/tasks/{team-name}`

When $ARGUMENTS is "cleanup", run this shutdown sequence directly.
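The 30-second wait can be sketched as a timeout loop; `pending` stands in for real inbox state, which this sketch only simulates with a decrement:

```shell
#!/bin/sh
# Simulated approval wait: the decrement stands in for reading one
# shutdown_approved message; a real loop would poll the team inbox.
deadline=$(( $(date +%s) + 30 ))
pending=3                        # teammates that have not yet approved

while [ "$pending" -gt 0 ] && [ "$(date +%s)" -lt "$deadline" ]; do
  pending=$(( pending - 1 ))     # one approval "arrives"
done

if [ "$pending" -gt 0 ]; then
  echo "re-sending shutdown_request to $pending unresponsive teammates"
fi
echo "pending=$pending"
```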
If `spawnTeam` fails with "already leading team": run `Teammate(cleanup)` first, then retry.

Before spawning teams with 4+ teammates, warn the user: "This will spawn {N} teammates, each using its own context window. Token usage scales linearly with team size. Proceed?"