From code-foundations
Executes full planning pipeline for medium/complex whiteboarding tasks: discover codebase and clarify intent, classify, explore, detail, save, check, confirm, handoff.
npx claudepluginhub ryanthedev/code-foundations

This skill uses the workspace's default tool permissions.
Standard/Full planning pipeline: **Discover -> Classify -> Explore -> Detail -> Save -> Check -> Confirm -> Handoff**
Creates structured plans for multi-step tasks including software features, implementations, research, or projects. Deepens plans via interactive sub-agent reviews.
The plan is a contract between whiteboarding and building. It specifies WHAT and WHY at the strategic level, with explicit interfaces between phases.
Thinking effort: Planning benefits from max effort. If not already at max, suggest the user increase it before proceeding.
Before any work: Read($CLAUDE_PLUGIN_ROOT/references/pre-gate-standards.md)
TaskCreate for each step, TaskUpdate with blockedBy to enforce ordering:
DISCOVER: Codebase search | DISCOVER: Questioning | CLASSIFY | EXPLORE | DETAIL | SAVE | CHECK | CONFIRM | HANDOFF
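The blockedBy chain can be modeled without the orchestrator's tools. A minimal Python sketch of the ordering TaskCreate/TaskUpdate enforce — the data structures are stand-ins, not the real tool API:

```python
# Sketch: the strict step chain the pipeline's blockedBy edges enforce.
# `blocked_by` mirrors what TaskUpdate would record for each TaskCreate'd step.
STEPS = [
    "DISCOVER: Codebase search", "DISCOVER: Questioning", "CLASSIFY",
    "EXPLORE", "DETAIL", "SAVE", "CHECK", "CONFIRM", "HANDOFF",
]

# Each step is blocked by the step before it -- a strict chain, no skipping.
blocked_by = {step: ([STEPS[i - 1]] if i else []) for i, step in enumerate(STEPS)}

def runnable(done: set) -> list:
    """Steps whose blockers have all completed."""
    return [s for s in STEPS
            if s not in done and all(b in done for b in blocked_by[s])]
```

With nothing done, only the first DISCOVER step is runnable; completing a step unlocks exactly the next one.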
CHECK runs on all tracks — never skip independent review.
Before asking ANY questions, load Skill(code-foundations:code-standards) to generate or update docs/code-standards.md. The skill handles staleness detection, codebase scanning, legacy migration, and writing.
See also: pattern-reuse-gate.md
Load the clarify skill: Skill(code-foundations:clarify)
Use its framework to classify what's unclear (fault type + ambiguity direction) and generate targeted questions. Do not duplicate the questioning protocol here -- the skill has it.
Enforcement: Each question MUST use AskUserQuestion tool. No proceeding until answered. Short-circuit: zero questions if already clear.
STOP. Cannot proceed until ALL true:
- Every question asked via AskUserQuestion, each answer received

A confirmed problem statement arrives from the shared steps in the whiteboarding command. DISCOVER refines it with deeper codebase context — don't redo clarification from scratch.
Review the existing problem statement against what deeper discovery found. If it holds, proceed. If discovery reveals the problem is different or broader than stated, update and re-confirm via AskUserQuestion: "Discovery found [X]. Updated problem statement: [Y]. Does this still capture what you want?"
Classify using signals: files touched (Medium 4-8, Complex 9+), patterns (Medium 2-3 some new, Complex multiple cross-cutting), cross-cutting concerns (Medium 1-2, Complex 3+), uncertainty (Medium approach unclear, Complex requirements uncertain), phase count (Medium 3-5, Complex 5-7).
State explicitly: "This is a [Medium/Complex] task. [1-sentence justification]." If uncertain, choose higher.
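The numeric signals above can be read as a voting heuristic. A hedged sketch — it models only the countable signals (files, cross-cutting concerns, phase estimate), not the pattern/uncertainty judgments, and resolves the Medium/Complex overlap at 5 phases upward per "if uncertain, choose higher":

```python
# Sketch of CLASSIFY as signal voting. Thresholds come from the signal
# table; any disagreement between signals resolves to the higher track.
def classify(files_touched: int, cross_cutting: int, phase_estimate: int) -> str:
    votes = [
        "complex" if files_touched >= 9 else "medium" if files_touched >= 4 else "simple",
        "complex" if cross_cutting >= 3 else "medium" if cross_cutting >= 1 else "simple",
        "complex" if phase_estimate >= 5 else "medium" if phase_estimate >= 3 else "simple",
    ]
    order = ["simple", "medium", "complex"]
    return max(votes, key=order.index)  # "if uncertain, choose higher"
```

One Complex-strength signal is enough to classify the whole task Complex.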
| Track | Phases | Approach Comparison |
|---|---|---|
| Medium | 3-5, ~100-150 words/phase | 2 approaches |
| Complex | 5-7, ~100-150 words/phase | 2-3 + pre-mortem |
Hard cap: 7 phases. More than 7 -> split into multiple plans.
Research BEFORE proposing -- uninformed proposals waste the user's decision-making.
Codebase: How similar problems are solved, existing libraries/patterns, intentionally omitted patterns (check git history).
Web (when technology choice is involved): Compare libraries/frameworks, check current best practices. Search for "[tech A] vs [tech B] [year]", "[domain] best practices [year]".
Approaches must be structurally different (different technology, pattern, or architecture):
| Approach | Trade-offs | Best When | Research Source |
|---|---|---|---|
| Option A | [pros/cons] | [conditions] | [codebase/web] |
| Option B | [pros/cons] | [conditions] | [codebase/web] |
| Failure Mode | Probability | Impact | Which Approach Survives? |
|---|---|---|---|
| [failure] | LOW/MED/HIGH | LOW/MED/HIGH | [approach] |
After presenting the approach table, name a recommendation with 1-sentence rationale — not neutral presentation. The user wants an opinion, then picks.
MUST use AskUserQuestion with options:
Hard gate: Cannot proceed to DETAIL until the user picks an approach via AskUserQuestion. Writing "Going with X" and moving on is a violation — the user must answer. No silent defaults, no "I'll flag it at confirm."
Question style: See adaptive-questioning.md. Encode the recommendation in the option labels (e.g., "Use JWT (recommended)") so confirmatory mode works inside the structured tool.
Record chosen approach, rationale, and fallback.
The plan specifies WHAT and WHY. Subagents determine HOW. Four readers: orchestrator (phase names, ordering, DW counts), pre-gate (goal, scope, constraints, approach notes, file hints), post-gate (goal, done-when), human (strategic intent, rationale).
No implementation details in phases -- pre-gate writes pseudocode after fresh discovery. Plans must be pipeline-compatible -- deterministic rules, not interactive user prompts between sub-phases.
### Phase N: [Name]
**Model:** [recommended model]
**Skills:** [assigned at SAVE -- skills or `none -- [reason]`]
**Goal:** [One sentence (Simple) | 1-2 sentences (Medium/Complex)]
**Scope:**
- IN: [covered]
- OUT: [excluded]
**Constraints:** [non-discoverable requirements -- omit if none]
[Medium/Complex only]
**Approach notes:** [non-discoverable user decisions -- omit if none]
**File hints:** `path/` -- [why relevant]
**Depends on:** [Phase X] | **Unlocks:** [Phase Y]
[/Medium/Complex only]
**Done when:**
- [ ] DW-N.1: [verifiable criterion]
[Medium/Complex only]
**Difficulty:** LOW / MEDIUM / HIGH
**Uncertainty:** [what could change, or "None"]
[/Medium/Complex only]
DW-ID format: DW-{phase}.{item} -- every done-when item gets a stable ID.
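The DW-ID shape is mechanical enough to validate. A small sketch — the parser is illustrative, not part of the pipeline:

```python
import re

# DW-ID format: DW-{phase}.{item}, e.g. DW-3.2 = phase 3, done-when item 2.
DW_ID = re.compile(r"^DW-(\d+)\.(\d+)$")

def parse_dw_id(raw: str):
    """Return (phase, item) for a well-formed DW-ID, else raise ValueError."""
    m = DW_ID.match(raw)
    if not m:
        raise ValueError("not a DW-ID: %r" % raw)
    return int(m.group(1)), int(m.group(2))
```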
Only non-discoverable user decisions. Test: could codebase search find it? If yes, it does NOT belong.
Before each phase: Is it needed for success criteria? Could we ship without it? If "not needed now" -> remove. Phase granularity test: each phase produces a deliverable meaningful to the orchestrator and verifiable by post-gate. If it's an internal component of another phase's deliverable, fold it in.
Phase counts: Medium 3-5, Complex 5-7. Prefer fewer. 200-word cap per phase. Express independent phases as DAG -- don't artificially linearize.
docs/plans/YYYY-MM-DD-<topic-slug>.md
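A sketch of deriving that path from a topic. The slug rules (lowercase, hyphens, alphanumerics only) are an assumption — the doc only fixes the `YYYY-MM-DD-<topic-slug>.md` shape:

```python
import re
from datetime import date

def plan_path(topic: str, on: date) -> str:
    """Build docs/plans/YYYY-MM-DD-<topic-slug>.md from a free-text topic."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return "docs/plans/%s-%s.md" % (on.isoformat(), slug)
```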
Model detection per phase:
OPUS_KEYWORDS = [refactor, architect, migrate, redesign, rewrite, overhaul]
HAIKU_KEYWORDS = [config, rename, typo, bump, cleanup, delete, remove]
DW items <= 2 AND file hints <= 2 areas AND no OPUS_KEYWORDS -> haiku
DW items >= 6 OR file hints >= 6 areas OR any OPUS_KEYWORD -> opus
Otherwise -> omit (building uses default)
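These rules transcribe directly. A sketch — keyword matching on whitespace-split words is an assumption; note the rules as written gate only on OPUS_KEYWORDS, so HAIKU_KEYWORDS is advisory here:

```python
from typing import Optional

OPUS_KEYWORDS = {"refactor", "architect", "migrate", "redesign", "rewrite", "overhaul"}
HAIKU_KEYWORDS = {"config", "rename", "typo", "bump", "cleanup", "delete", "remove"}  # advisory

def pick_model(phase_text: str, dw_items: int, file_hint_areas: int) -> Optional[str]:
    """Apply the model-detection rules; None means omit (building uses default)."""
    has_opus = bool(set(phase_text.lower().split()) & OPUS_KEYWORDS)
    if dw_items >= 6 or file_hint_areas >= 6 or has_opus:
        return "opus"
    if dw_items <= 2 and file_hint_areas <= 2:
        return "haiku"
    return None
```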
Skill assignment (EVERY phase MUST have **Skills:** field):
- Match skills to each phase's work type, listed as `plugin:skill-name` lines
- **Skills:** on every phase -- `none -- [reason]` is valid; omission is NOT valid

# Plan: [Topic]
**Created:** YYYY-MM-DD
**Status:** ready
**Complexity:** [simple/medium/complex]
---
## Context
[Problem statement from Step 1]
## Constraints
- [constraints]
[Medium/Complex only]
## Chosen Approach
**[Name]** -- [Rationale]. **Fallback:** [1 sentence]
## Rejected Approaches
- **[Name]:** [1 sentence why rejected]
[/Medium/Complex only]
---
## Implementation Phases
(Use phase template from Step 4)
---
## Test Coverage
**Level:** [100% / Backend only / Backend + frontend / None / Per-phase]
## Test Plan
- [ ] [tests] [Medium/Complex only] + Integration + Manual [/Medium/Complex only]
[Medium/Complex only]
## Assumptions
| Assumption | Confidence | Verify Before Phase | Fallback If Wrong |
## Decision Log
| Decision | Alternatives Considered | Rationale | Phase |
[/Medium/Complex only]
---
## Notes
- [edge cases, gotchas, open questions]
---
## Execution Log
_To be filled during /code-foundations:building_
mkdir -p docs/plans, write plan file. Do NOT commit -- the plan is a working document, not a deliverable. Building handles worktree visibility by copying the plan file after worktree creation.
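A shell sketch of the SAVE step. The topic slug and plan body are placeholders; the point is the path shape and the deliberate absence of any git command:

```shell
set -eu
cd "$(mktemp -d)"                 # sandbox for the sketch; real save runs in the repo

PLANS_DIR="docs/plans"
TOPIC="auth-refactor"             # hypothetical topic slug
DATE="$(date +%Y-%m-%d)"
PLAN_FILE="$PLANS_DIR/$DATE-$TOPIC.md"

mkdir -p "$PLANS_DIR"
printf '# Plan: %s\n**Created:** %s\n**Status:** ready\n' "$TOPIC" "$DATE" > "$PLAN_FILE"
# No `git add` / `git commit` -- the plan is a working document, not a deliverable.
```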
ALL tracks: Dispatch subagent to review saved plan with fresh eyes. Never skip — independent review catches blind spots regardless of task size.
Agent: sonnet, "Review whiteboarding plan"
Prompt: Review docs/plans/<plan>.md for structural issues.
Checklist:
- Structural: every constraint maps to a phase, done-when items cover problem statement,
no scope overlap, union covers full feature, depends-on references exist, no orphan phases,
approach notes only non-discoverable, file hints present, done-when observable + has DW-ID, YAGNI
- Coherence: no contradictions, Phase N output matches N+1 input,
user-observable output exists, high-uncertainty phases early
- Skills: every phase has Skills field, skills match work type, skills actually available
Output: PASS or FINDINGS with specific fix recommendations.
After return: PASS -> proceed. FINDINGS -> fix issues, then proceed.
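Two of the checklist items are mechanically checkable before dispatching the reviewer. A sketch that scans the saved markdown for the Skills field and DW-IDs — the parsing is naive and the plan fragment in the test is hypothetical:

```python
import re

def check_plan(markdown: str) -> list:
    """Lint a plan for missing Skills fields and done-when items lacking DW-IDs.

    Returns a list of findings; an empty list corresponds to PASS for
    these two checks (the rest of the checklist needs the reviewer agent).
    """
    findings = []
    for phase in re.split(r"(?m)^### Phase ", markdown)[1:]:
        name = phase.splitlines()[0]
        if "**Skills:**" not in phase:
            findings.append("Phase %s: missing Skills field" % name)
        for item in re.findall(r"(?m)^- \[ \] (.+)$", phase):
            if not re.match(r"DW-\d+\.\d+:", item):
                findings.append("Phase %s: done-when item lacks DW-ID: %r" % (name, item))
    return findings
```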
Present to user: phases, goals, skill assignments, constraint coverage, review results.
Simple: "Does this look right? Anything to add or change?"
Medium/Complex: Structured summary with phases, constraint -> phase mapping, review results, remaining questions.
Question style: See adaptive-questioning.md. If the user has been terse during planning, present the plan with assumptions stated rather than asking open-ended "thoughts?"
Ask: "How much test coverage?" Options: 100% (recommended), Backend only, Backend + frontend, None, Per-phase. Record in plan file under ## Test Coverage.
If changes requested: update plan. Structural changes -> re-run CHECK. Minor changes -> update and re-present.
AskUserQuestion: "Plan saved. How would you like to proceed?"
/code-foundations:building docs/plans/<plan>.md

Question style: See adaptive-questioning.md. The "Recommended" tag on Build now is the confirmatory cue — keep it there.