Spin up an adversarial planning team for a feature. Usage: /planning <feature-name> [--design] [--gtm] [--fast] [--lite] [-- <brief>]
Automates the full planning team workflow into a single command, using structured adversarial debate rounds to produce a hardened, scope-locked plan.
Parse from the argument string:
- feature-name: Required. Kebab-case name for the feature (used in the team name).
- --design: Optional flag. Include Design Thinker + Design Critic teammates.
- --gtm: Optional flag. Include GTM Expert teammate.
- --fast: Optional flag. Skip Round 3 (risk debate). DA participates in Rounds 1 and 2 only.
- --lite: Optional flag. No devil's advocate. PM + Architect only, with checklist-based risk evaluation.
- -- <brief>: Optional inline brief. Everything after -- (when not followed by a flag name) is the feature brief.

If no feature name is provided, use AskUserQuestion to get it.
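The flag parsing above can be sketched in POSIX shell. This is a minimal illustration, not part of the command contract; all function and variable names are assumptions.

```shell
# Illustrative sketch of the argument parsing described above.
# Function and variable names are assumptions, not part of the spec.
parse_planning_args() {
  FEATURE_NAME="" DESIGN=0 GTM=0 MODE="full" FEATURE_BRIEF=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --design) DESIGN=1 ;;                         # include design teammates
      --gtm)    GTM=1 ;;                            # include GTM expert
      --fast)   MODE="fast" ;;                      # skip Round 3
      --lite)   MODE="lite" ;;                      # PM + Architect only
      --)       shift; FEATURE_BRIEF="$*"; break ;; # rest is the brief
      *)        if [ -z "$FEATURE_NAME" ]; then FEATURE_NAME="$1"; fi ;;
    esac
    shift
  done
}
```

A real implementation would also validate kebab-case and reject unknown flags; those checks are omitted here for brevity.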
Check if an inline brief was provided after -- in the arguments. If yes, use it directly as FEATURE_BRIEF and skip asking.
If no inline brief was provided, ask the user with AskUserQuestion:
Store their response as FEATURE_BRIEF.
If --lite or --fast was passed as an argument, skip the prompt and use that mode directly.
Otherwise, present this prompt to the user with AskUserQuestion:
Planning mode:
[1] Lite — PM + Architect only, checklist risk review (~421x cost, fastest)
[2] Fast — PM + Architect + Devil's Advocate, skip risk round (~431x cost, balanced)
[3] Full — 3 structured debate rounds with DA (~641x cost, most thorough)
[auto] — decide based on feature scope
Your pick? [auto]
If user picks [auto] or presses enter without input, apply auto-detection:
- FEATURE_BRIEF has fewer than 50 words AND none of the words "auth", "payment", "migration", "security", "permission", "billing", "database", "breaking" appear → set mode lite
- FEATURE_BRIEF has 50–150 words OR contains one complexity-signal keyword → set mode fast
- FEATURE_BRIEF has more than 150 words OR contains multiple complexity-signal keywords OR the user asked for "thorough" or "full" planning → set mode full

Map user choices:

- [1] or lite → --lite mode
- [2] or fast → --fast mode
- [3] or full → default full mode

Store the resolved mode as DEBATE_MODE (values: lite, fast, full).
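The auto-detection rules above can be sketched as a small shell function. The thresholds and keyword list come from the rules; the substring matching is a simplification (e.g. "auth" would also match "author"), and the explicit "thorough"/"full" request check is omitted.

```shell
# Sketch of the [auto] mode heuristic. Thresholds and keywords come from
# the rules above; substring keyword matching is a simplification, and
# the "thorough"/"full" request check is omitted for brevity.
detect_mode() {
  brief="$1"
  words=$(printf '%s' "$brief" | wc -w)
  signals=$(printf '%s' "$brief" | tr 'A-Z' 'a-z' \
    | grep -o -E 'auth|payment|migration|security|permission|billing|database|breaking' \
    | sort -u | wc -l)
  if [ "$words" -gt 150 ] || [ "$signals" -gt 1 ]; then
    echo full   # long brief or multiple complexity signals
  elif [ "$words" -ge 50 ] || [ "$signals" -eq 1 ]; then
    echo fast   # mid-sized brief or one complexity signal
  else
    echo lite   # short brief, no complexity signals
  fi
}
```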
Before any debate rounds, the PM generates a spec doc that becomes the foundation for all subsequent work. This ensures user stories and edge cases are defined BEFORE architecture decisions.
Create the specs directory:
mkdir -p .claude/specs
Spawn PM agent (subagent_type="product-manager", model="sonnet") for spec generation only:
Prompt:
You are a Product Manager generating a spec doc for the feature "{feature-name}".
Feature brief: {FEATURE_BRIEF}
Generate a spec doc with these sections:
## User Stories
Write 3-7 user stories covering the happy path and key error states.
Format: "As a [user type], I want [action], so that [outcome]"
Each story must have:
- **Story ID**: US-1, US-2, etc.
- **Priority**: P0 (must-have for launch) or P1 (important but deferrable)
- **Acceptance**: specific testable condition that proves this story is done
Example:
US-1 [P0]: As a new developer, I want to install the toolkit with one command,
so that I can start using it without manual configuration.
Acceptance: `npx @arthai/agents install forge .` succeeds and skills are available.
## User Journey
Step-by-step flow from the user's first interaction to completion.
Include:
- **Happy path**: numbered steps (1. User does X → 2. System responds Y → ...)
- **Decision points**: where the user makes a choice (mark with ◆)
- **Error branches**: where things can go wrong (mark with ✗ and show recovery path)
Format as a text flowchart:
## Edge Cases
Structured list of what can go wrong. For each:
- **ID**: EC-1, EC-2, etc.
- **Scenario**: what triggers this edge case
- **Expected behavior**: what should happen (not crash, not hang)
- **Severity**: Critical (blocks user) / High (degrades experience) / Medium (inconvenience)
- **Linked story**: which user story this edge case relates to
## Success Criteria
Measurable outcomes tied to user stories. These become the acceptance criteria
that /implement and /qa use to validate the implementation.
- Each criterion references a story ID
- Each criterion is binary: pass or fail, no subjective judgment
Write the output to .claude/specs/{feature-name}.md with this frontmatter:
---
feature: {feature-name}
generated: {ISO date}
stories: {count}
edge_cases: {count}
---
Store the spec content as FEATURE_SPEC — this is injected into the shared context block for all debate participants.
Spawn an explore-light subagent (model: haiku) to scan for relevant files:
prompt: "For a feature called '{feature-name}', find: (1) related backend routes/services, (2) related frontend pages/components, (3) related database models, (4) any existing tests. Return a structured summary of file paths and their purpose. Feature brief: {FEATURE_BRIEF}"
Store the result as CODEBASE_CONTEXT.
Check for topic wikis relevant to this feature:
# Find all topic wiki indexes
ls -d .claude/wikis/*/wiki/index.md 2>/dev/null
If wikis exist:

- Read each wiki/index.md (Tier 1 — small catalog)
- Match its entries against FEATURE_BRIEF and feature-name
- Store relevant wiki excerpts as WIKI_CONTEXT

If no wikis exist or none match, set WIKI_CONTEXT to:
"No topic wikis found. Consider /wiki-knowledge-base init {topic} for domain research."
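A hedged sketch of that relevance scan follows. The two-tier layout (wiki/index.md as a small catalog) comes from the spec; the keyword-grep matching and function name are illustrative simplifications.

```shell
# Hypothetical relevance scan over topic wiki catalogs. The directory
# layout comes from the spec; keyword-grep matching is a simplification.
find_wiki_context() {
  keyword="$1"   # e.g. the feature name or a word from FEATURE_BRIEF
  found=0
  for idx in .claude/wikis/*/wiki/index.md; do
    [ -f "$idx" ] || continue          # glob may not match anything
    if grep -qi "$keyword" "$idx"; then
      echo "$idx"                       # candidate wiki for deeper reading
      found=1
    fi
  done
  [ "$found" -eq 1 ] || echo "No topic wikis found."
}
```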
Create team: planning-{feature-name}
Create these tasks:
| Task | Owner | Subject |
|---|---|---|
| 0 | product-manager | Generate spec doc for {feature-name} (Phase 0 — already done above) |
| 1 | product-manager | Define product scope for {feature-name} (uses spec as input) |
| 2 | architect | Design technical plan for {feature-name} |
| 3 (if --design) | design-thinker | Create design brief for {feature-name} |
| 4 (if not --lite) | devils-advocate | Challenge scope and feasibility for {feature-name} |
Compose this block to inject into every teammate's spawn prompt:
## Project Context
{PROJECT_NAME} - {PROJECT_DESCRIPTION}
Stack: {TECH_STACK}
Stage: {DEVELOPMENT_STAGE}
Auth: {AUTH_APPROACH}
## Feature: {feature-name}
{FEATURE_BRIEF}
## Spec Doc (from Phase 0)
{FEATURE_SPEC}
(Full spec at: .claude/specs/{feature-name}.md)
## Relevant Codebase
{CODEBASE_CONTEXT}
## Your Team
You are on team "planning-{feature-name}". Use SendMessage to collaborate.
Other teammates: {list who's on the team based on flags}
## Debate Mode
This planning session runs in {DEBATE_MODE} mode.
- Full: 3 structured debate rounds (scope, feasibility, risk).
- Fast: 2 structured debate rounds (scope, feasibility). Round 3 is skipped.
- Lite: No devil's advocate. PM + Architect + checklist risk review.
## Rules
- Every claim must include EVIDENCE (code references, cost estimates, or user data).
- Verdicts are final for that round — do not re-litigate decided items.
- If DA recommends KILL and user overrides, record as USER OVERRIDE in the plan.
## Knowledge Base (MANDATORY — follow context-loading.md protocol)
After the plan is finalized, write back:
- Architecture decisions → `.claude/knowledge/shared/decisions.md`
- New patterns established → `.claude/knowledge/shared/patterns.md`
- Domain rules discovered → `.claude/knowledge/shared/domain.md`
## Topic Wiki Context
{WIKI_CONTEXT}
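As an aside (not part of the template above), assembling the block from the stored variables could look like the following sketch. Variable and function names are assumptions, and only a few sections are shown.

```shell
# Sketch of assembling part of the shared context block from stored
# variables. Names are assumptions; only a few sections are shown.
compose_context() {
  cat <<EOF
## Feature: $FEATURE_NAME

$FEATURE_BRIEF

## Relevant Codebase

$CODEBASE_CONTEXT

## Topic Wiki Context

$WIKI_CONTEXT
EOF
}
```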
Spawn all teammates in a single message with multiple Task tool calls:
Always spawn:
product-manager (subagent_type="product-manager", model="opus")
architect (subagent_type="architect", model="opus")
Spawn if not --lite:
devils-advocate (model="sonnet")
Spawn if --design flag:
design-thinker (subagent_type="design-studio:think", model="sonnet")
design-critic (subagent_type="design-studio:critique", model="sonnet")
Spawn if --gtm flag:
gtm-expert (model="sonnet")
Facilitate the following rounds in sequence. Each phase completes before the next begins.
Round 1 — Scope Debate

Phase 1 — PM CLAIM
The PM posts their scope claim. Required format:
- [M1] requirement — BECAUSE reason
- CUT-IF condition

Phase 2 — ARCHITECT COUNTER
The Architect responds to the PM's scope claim:
Phase 3 — DA ATTACK (skip if --lite)
The Devil's Advocate rates each must-have item:
Phase 4 — ROUND 1 VERDICT
Orchestrator issues a structured verdict:
Round 2 — Feasibility Debate

Phase 1 — ARCHITECT CLAIM
The Architect presents the technical approach:
Phase 2 — PM COUNTER
The PM challenges the approach:
Phase 3 — DA ATTACK (skip if --lite)
The Devil's Advocate challenges the approach:
Evidence must reference the codebase (CODEBASE_CONTEXT). If --design or --gtm agents are present, their outputs are included here as additional evidence.
Phase 4 — ROUND 2 VERDICT
Orchestrator issues verdict:
Round 3 — Risk and Cost Debate (skip if --fast or --lite)

Phase 1 — DA CLAIM
The Devil's Advocate presents the risk case:
If --design or --gtm agents are present, include their outputs as supporting evidence.
Phase 2 — PM + ARCHITECT DEFENSE
PM and Architect respond to each risk point:
Each response must include evidence (not just assertion).
Phase 3 — DA FINAL
Devil's Advocate issues final verdict:
Phase 4 — ROUND 3 VERDICT
Orchestrator issues verdict:
After Round 2, the orchestrator runs a checklist:
Each matched item is flagged as a RISK NOTE in the plan.
Plan is APPROVED when ALL of the following are true after all applicable rounds complete:
Escalate to user when ANY of the following are true:
If user overrides a KILL recommendation, record as USER OVERRIDE in the plan.
In lite and fast modes, skip convergence checks for rounds that were not run. Apply only the checks applicable to completed rounds.
After convergence, compute a scope_hash (SHA-256 of the locked must-have items). When /implement loads the plan, it can verify the hash against the locked must-haves to detect tampering.
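One possible realization of the hash, assuming the locked must-haves are the "- [Mn]" bullets in the plan file's Scope Lock section (the extraction pattern is an assumption, not part of the spec):

```shell
# Possible scope_hash computation: SHA-256 over the locked must-have
# bullets. The "- [Mn]" extraction pattern is an assumption about the
# plan file's Scope Lock section.
scope_hash() {
  grep -E '^- \[M[0-9]+\]' "$1" | sha256sum | cut -d' ' -f1
}
```

Because the hash covers only the must-have lines, edits elsewhere in the plan do not invalidate it, while any change to the locked scope does.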
Synthesize teammate outputs into a structured plan and write it to .claude/plans/{feature-name}.md using the Write tool. This file is read by /implement to auto-configure the implementation team.
Plan file format:
---
feature: {feature-name}
debate_mode: {DEBATE_MODE}
scope_hash: {SHA-256 of locked must-haves}
da_confidence: {HIGH|MEDIUM|LOW|N/A}
spec: specs/{feature-name}.md
layers:
- frontend # include if ANY frontend tasks exist
- backend # include if ANY backend tasks exist
---
# Planning Summary: {feature-name}
## Spec Reference
See `.claude/specs/{feature-name}.md` for user stories, user journey, edge cases, and success criteria.
## Problem & User Segment (from PM)
...
## Technical Approach (from Architect)
### API Contract
...
### Database Changes
...
## Design Direction (from Design, if included)
...
## GTM Notes (from GTM, if included)
...
## Scope Lock
**Locked Must-Haves:**
- [M1] requirement — BECAUSE reason
- [M2] ...
**Hard Boundary (Explicit Exclusions):**
- ...
**Deferred:**
- ...
**Rejected:**
- ...
## Implementation Cost Estimate
| Item | Estimate | Confidence |
|------|----------|------------|
| {task} | {S/M/L/XL} | {HIGH/MED/LOW} |
| **Total** | **{range}** | **{confidence}** |
## Debate Record
### Round 1: Scope Debate
**Verdict:** {LOCKED / DEFERRED / REJECTED / UNRESOLVED items}
### Round 2: Feasibility Debate
**Verdict:** {APPROVED APPROACH / MODIFICATIONS / COST ESTIMATE / RISKS ACKNOWLEDGED}
### Round 3: Risk and Cost Debate
**Verdict:** {ACCEPTED RISKS / MITIGATED RISKS / KILL CRITERIA / CONFIDENCE / PLAN STATUS}
*(Omitted in fast/lite mode)*
## Task Breakdown
### Backend Tasks
1. ...
### Frontend Tasks
1. ...
## Acceptance Criteria
1. ...
## Success Metrics
...
Rules for the layers field:
- frontend if the plan has ANY frontend tasks (UI changes, component work, frontend API integration)
- backend if the plan has ANY backend tasks (routes, services, models, migrations, domain logic changes)

This field determines which layer teams /implement spawns — get it right.

Present the plan to the user for review.
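For illustration, /implement could read the layers field with a sketch like this. The sed/grep frontmatter parse is a simplification (a real implementation would use a YAML parser):

```shell
# Sketch of reading the layers field from plan frontmatter. A real
# implementation would use a YAML parser; this sed/grep pass is a
# simplification for illustration.
plan_layers() {
  sed -n '/^layers:/,/^---$/p' "$1" | grep -o -E 'frontend|backend'
}
```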
After user reviews the plan:
Tell the user: "Run /implement {feature-name} to start implementation."

| Role | Model | Rationale |
|---|---|---|
| explore-light (scan) | haiku | Just file search |
| product-manager | opus | Strategic reasoning |
| architect | opus | Technical design decisions |
| devils-advocate | sonnet | Adversarial pressure, not generation |
| design-thinker | sonnet | Creative but bounded |
| design-critic | sonnet | Evaluation, not generation |
| gtm-expert | sonnet | Strategic but scoped |
Cost by mode:
| Mode | Approx. Cost | Speed | Agents | Rounds |
|---|---|---|---|---|
| Lite | ~421x | Fastest | PM + Architect | 2 rounds + checklist |
| Fast | ~431x | Balanced | PM + Architect + DA | Rounds 1–2 only |
| Full | ~641x | Most thorough | PM + Architect + DA | All 3 rounds |
After the plan is written and presented:
If autopilot is active (check .claude/.workflow-state.json mode == "autopilot"):

- Run /implement {feature-name} automatically
- Set phase: "implement"

If guided mode (no autopilot or mode != "autopilot"):

- Tell the user: "Plan written to .claude/plans/{feature-name}.md. Next step: /implement {feature-name} to start coding. Should I proceed?"
- Wait for confirmation before running /implement
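The autopilot check could be probed with a sketch like this. The file path and "mode" key come from the spec above; the grep-based JSON read is a simplification (a real implementation would use jq or a JSON parser).

```shell
# Sketch of the autopilot check. The file path and "mode" key come from
# the spec above; the grep-based JSON probe is a simplification.
workflow_mode() {
  state="${1:-.claude/.workflow-state.json}"
  [ -f "$state" ] || { echo guided; return; }   # no state file: guided
  if grep -q '"mode"[[:space:]]*:[[:space:]]*"autopilot"' "$state"; then
    echo autopilot
  else
    echo guided
  fi
}
```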