Software implementation planning with file-based persistence (.plan/). Use when asked to plan or break down a feature, or when starting complex tasks. Apply proactively before non-trivial coding.
From compound-engineering. Install: `npx claudepluginhub iliaal/compound-engineering-plugin --plugin compound-engineering`. This skill uses the workspace's default tool permissions.
Bundled files: `references/operational-patterns.md`, `scripts/init-plan.sh`
Context window = RAM (volatile, limited)
Filesystem = Disk (persistent, unlimited)
→ Anything important gets written to disk.
Planning tokens are cheaper than implementation tokens. Front-load thinking; scale effort to complexity.
Scaffold the .plan/ directory with pre-populated templates using init-plan.sh:
`bash init-plan.sh "Feature Name"`
This creates .plan/ with task_plan.md, findings.md, and progress.md -- each pre-populated with the correct structure. Also adds .plan/ to .gitignore.
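For reference, a minimal sketch of what the scaffold step does (the function name and file headers here are illustrative; the real `init-plan.sh` pre-populates fuller templates):

```shell
# init_plan: scaffold .plan/ working files -- a sketch of init-plan.sh.
init_plan() {
  local feature="${1:?usage: init_plan \"Feature Name\"}"
  mkdir -p .plan
  printf '# Plan: %s\n'     "$feature" > .plan/task_plan.md
  printf '# Findings: %s\n' "$feature" > .plan/findings.md
  printf '# Progress: %s\n' "$feature" > .plan/progress.md
  # Keep ephemeral planning state out of version control.
  grep -qxF '.plan/' .gitignore 2>/dev/null || echo '.plan/' >> .gitignore
}
```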
Planning files are ephemeral working state -- do not commit them. When starting a new feature, old .plan/ files are overwritten. Within a multi-phase feature, use numbered intermediate files (01-setup.md, 02-phase1-complete.md) to preserve state across phases.
Note: .plan/ is for ephemeral working state during implementation (scratch notes, progress tracking). docs/plans/ is for the formal plan document created by workflows:plan (committed, living documents). Both coexist -- .plan/ supports the work session, docs/plans/ stores the committed plan.
| File | Purpose | Update When |
|---|---|---|
| `.plan/task_plan.md` | Phases, tasks, decisions, errors | After each phase |
| `.plan/findings.md` | Research, discoveries, code analysis | After any discovery |
| `.plan/progress.md` | Session log, test results, files changed | Throughout session |
For projects with existing code (not greenfield), discover the test landscape before planning:
- Locate test files: `Glob("**/*test*")` and `Grep("<feature-keyword>", glob="**/*.{ts,php,py}")`
- Identify how tests run (`package.json` scripts, `pytest.ini`, `phpunit.xml`, CI config)
- Skip for greenfield projects where no tests exist yet.
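Outside an agent context, the same reconnaissance works with ordinary shell commands; the sample tree and keyword below are illustrative:

```shell
# Sample tree, for illustration only.
cd "$(mktemp -d)"
mkdir src
echo 'def test_login(): pass' > src/login_test.py
touch pytest.ini

# Find test files anywhere in the tree (skipping node_modules).
find . -path ./node_modules -prune -o -name '*test*' -print

# Find existing code touching the feature keyword.
grep -rn --include='*.py' --include='*.ts' --include='*.php' 'login' .

# Discover how tests are run.
ls package.json pytest.ini phpunit.xml 2>/dev/null || true
```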
# Plan: [Feature/Task Name]
## Approach
[1-3 sentences: what and why]
## Scope
- **In**: [what's included]
- **Out**: [what's explicitly excluded]
## File Structure
[Map ALL files that will be created or modified, with one-line responsibility for each. Lock in decomposition decisions before defining tasks. Write for a zero-context engineer.]
| File | Action | Responsibility |
|------|--------|---------------|
| `path/to/file.ts` | Create | [what this file does] |
| `path/to/existing.ts` | Modify | [what changes and why] |
## Phase 1: [Name]
**Files**: [specific files, max 5-8 per phase]
**Posture**: [test-first | characterization-first | external-delegate]
**Tasks**:
- [ ] [Verb-first atomic task] -- `path/to/file.ts`
- [ ] [Next task]
**Verify**: [specific test: "POST /api/users → 201", not "test feature"]
**Exit**: [clear done definition]
## Phase 2: [Name]
...
## Execution Posture
- [Optional per-phase signals that shape implementation sequencing]
- `test-first`: write failing test before implementation
- `characterization-first`: capture existing behavior before changing it
- `external-delegate`: mark units suitable for parallel/external execution
## Deferred to Implementation
- [Things intentionally left unspecified -- details that depend on what you find in the code]
## Open Questions
- [Max 3, only truly blocking unknowns]
No placeholders in tasks. Every task must contain actual code patterns, commands, or file paths -- not vague directives. Forbid: "TBD", "TODO", "handle errors appropriately", "add validation", "implement as needed", "similar to above", "Similar to Task N", "See above." Each task must be self-contained -- repeat the spec, code pattern, or file path in every task that needs it. The implementer may read tasks out of order, and vague tasks produce vague implementations. If a step cannot be specified concretely, it needs further breakdown before it belongs in a plan.
Type-consistency check. After writing all tasks, scan for naming drift. If Task 3 says `clearLayers()` but Task 7 says `clearFullLayers()`, that's a bug in the plan. Function names, variable names, and file paths must be consistent across all tasks.
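One cheap way to surface drift is a frequency count of function-like names across the plan (a heuristic only; the sample plan lines below are illustrative):

```shell
# Sample plan with a naming-drift bug, for illustration only.
cd "$(mktemp -d)"
mkdir -p .plan
cat > .plan/task_plan.md <<'EOF'
- [ ] Task 3: call clearLayers() after render
- [ ] Task 7: call clearFullLayers() after render
EOF

# Identifiers that appear only once are candidates for naming drift.
grep -oE '[A-Za-z_][A-Za-z0-9_]*\(\)' .plan/task_plan.md | sort | uniq -c | sort -rn
```

Near-duplicate names with low counts deserve a second look before handing the plan to an implementer.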
Numbered outputs for long sessions. For multi-phase implementations, write numbered intermediate files to .plan/ (e.g., 01-setup.md, 02-phase1-complete.md) so state survives context compaction. Read from files, not conversation memory, when resuming work after compaction or across sessions.
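A numbered checkpoint file might look like this (contents illustrative):

```shell
# Snapshot phase state to a numbered file so it survives context compaction.
mkdir -p .plan
cat > .plan/02-phase1-complete.md <<'EOF'
# Phase 1 complete
- Auth middleware implemented; tests green
- Next: wire middleware into router (Phase 2, Task 2.1)
EOF

# On resume, read state from disk, not conversation memory.
ls .plan/
```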
SHA recording. When a task completes and is committed, note the commit SHA inline: ``- [x] Task 1.1 `abc1234` ``. Creates traceability from plan to code.
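As a sketch, recording the SHA can be automated after each commit (the repo setup, paths, and sed pattern below are illustrative, assuming GNU sed):

```shell
# Demo repo and plan file, for illustration only.
cd "$(mktemp -d)"
git init -q
echo 'code' > feature.ts
git add feature.ts
git -c user.name=demo -c user.email=demo@example.com commit -qm 'Task 1.1'
mkdir -p .plan
echo '- [x] Task 1.1 Add feature.ts' > .plan/task_plan.md

# Append the short SHA of the completing commit to the task's checkbox line.
sha=$(git rev-parse --short HEAD)
sed -i "s/^- \[x\] Task 1.1.*/& \`$sha\`/" .plan/task_plan.md
cat .plan/task_plan.md
```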
Deviation documentation. When the implementation deviates from the plan, document why inline: **Deviation**: [what changed and why] under the affected task. Silent deviation breaks trust -- the orchestrator assumes the plan was followed.
No gold-plating. Build exactly what the spec requires. If a feature, enhancement, or "nice-to-have" isn't in the requirements, don't add it. Quote the exact spec requirements in the plan and flag any additions explicitly as scope expansion needing approval. Basic first implementations are acceptable -- most need 2-3 revision cycles anyway.
Every phase must be context-safe: scoped so its files (max 5-8), tasks, and verification all fit comfortably in a single context window.
Decompose by user-visible capability, not by technical layer. "User can log in" is a vertical slice -- it touches UI, API, and DB, and delivers a working feature when done. "Build the auth database schema" is a horizontal slice that delivers zero value until other slices complete.
Vertical slices are independently demonstrable and testable. Each slice should produce something a stakeholder can see, try, or verify. When a phase in a plan delivers only one layer (all models, all controllers, all views), restructure it into slices that cut through all layers for one capability at a time.
After every 2-3 completed tasks, pause and verify: are the completed pieces actually working together? Run tests, check integration points, confirm that data flows end-to-end. This catches drift early instead of discovering at the end that pieces don't fit.
Checkpoints are lightweight -- run the test suite, hit the endpoint, render the component. Not a formal review. The goal is a fast feedback signal: "everything built so far integrates correctly." Document checkpoint results in .plan/progress.md.
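A checkpoint can be as small as a helper that runs a fast verification command and appends the result to the session log (the helper name and log format are illustrative):

```shell
# checkpoint: run a fast verification and append the result to progress.md.
checkpoint() {
  local label="$1"; shift
  mkdir -p .plan
  local result
  if "$@"; then result="PASS"; else result="FAIL"; fi
  echo "$(date -u +%FT%TZ) checkpoint '$label': $result ($*)" >> .plan/progress.md
}

# Usage: checkpoint "after tasks 1-3" npm test
```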
Not every decision needs user input. Apply this principle:
Claude decides (technical implementation): language, framework, architecture, libraries, file structure, naming conventions, test strategy, error handling approach, database schema details, API design patterns. Make the call, document the rationale in the plan.
User decides (experience-affecting): scope tradeoffs ("cut X to hit deadline?"), UX choices that change what users see or do, data model decisions that constrain future product options, anything where two valid paths lead to meaningfully different user outcomes.
Heuristic: If the decision changes what the user experiences, ask. If it changes how the code works, decide.
Scale questions to complexity: only ask about decisions that fall in the "user decides" category above, and make reasonable assumptions for everything else.
Write every task as if the implementer has zero context and questionable taste. They cannot infer intent from conversation history -- everything must be in the plan.
Context management rules, error protocol (3-attempt escalation), iterative plan refinement, and the 5-question context check are in operational-patterns.md. Read when starting a multi-phase plan or resuming after a gap.
Plans can carry lightweight metadata per phase that shapes how workflows:work sequences implementation. These are optional annotations, not requirements.
Add posture signals in the phase header: `## Phase 2: Auth middleware [test-first]`. The executor inherits these silently, without pausing to ask questions -- they shape sequencing, not scope.
When asked to "deepen" or "strengthen" an existing plan, don't re-run the full planning workflow. Deepening is additive -- it fills gaps without restructuring what already works. The `/deepen-plan` command orchestrates this with parallel research agents per section.
| Don't | Do Instead |
|---|---|
| Start coding without a plan | Create .plan/task_plan.md first |
| Plan horizontal layers (all DB, then all API, then all UI) | Vertical slices: one complete feature path per phase (DB + API + UI) delivering working end-to-end functionality |
| State goals once and forget | Re-read plan before decisions |
| Hide errors and retry silently | Log errors, mutate approach |
| Keep everything in context | Write large content to files |
| Repeat failed actions | Track attempts in plan file |
| Create vague tasks ("improve X") | Concrete verb-first tasks with file paths |
| Plan at 100% capacity | Budget for verification, fixes, and unknowns |
- Output: `.plan/task_plan.md` (or `docs/plans/` for formal plans)
- `workflows:plan` is the structured workflow (research agents, issue templates). Use this skill's principles during any planning; use `workflows:plan` for full feature plans.
- `/adr` -- document the decision and what was given up. ADRs outlive the plan.
- `brainstorming` -- use first when requirements are ambiguous. When a brainstorm spec exists (`docs/brainstorms/`), use it as input and skip idea refinement.
- `writing` -- use to humanize plan language and remove AI slop from plan documents.
- Hand off execution to `workflows:work` or execute inline.