Generates executable Markdown implementation plans for multi-step tasks from context briefs, resolving ambiguities, ordering dependencies, and enabling parallel worker execution.
```bash
npx claudepluginhub tmdgusya/engineering-discipline --plugin engineering-discipline
```

This skill uses the workspace's default tool permissions.
Writes an executable plan document from a clearly defined work scope. Designed so tasks can be spawned as "worker-validator" pairs in parallel.
A plan document must be executable by a worker with zero codebase context, without any additional questions. All ambiguity must be resolved at the planning stage.
This skill takes a Context Brief file as input. The Context Brief generated by the clarification skill is used to populate the plan header:
| Context Brief Field | Plan Header Mapping |
|---|---|
| Goal | Goal |
| Scope (In/Out) | Work Scope (included/excluded) |
| Technical Context | Architecture + Tech Stack + basis for file structure mapping |
| Constraints | Reflected as constraints during task decomposition |
| Success Criteria | Used as Self-Review criteria |
| Open Questions | Reflected as assumptions in the plan, then confirmed with the user |
If no Context Brief file exists (user directly requests a plan): confirm essential information (goal, work scope, tech stack) with the user before writing the plan.
docs/engineering-discipline/plans/YYYY-MM-DD-<feature-name>.md
(User preferences for plan location override this default.)
# [Feature Name] Implementation Plan
> **Worker note:** Execute this plan task-by-task using the run-plan skill or subagents. Each step uses checkbox (`- [ ]`) syntax for progress tracking.
**Goal:** [One sentence describing what this plan builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
**Work Scope:**
- **In scope:** [What will be implemented]
- **Out of scope:** [What is explicitly excluded]
---
Before defining tasks, map out which files will be created or modified. Decomposition decisions are locked in at this stage.
Before decomposing tasks, discover the project's highest-level verification capability. This determines the Final Verification Task that closes every plan.
Discovery order (use the first match):
1. **E2E:** `e2e/`, `tests/e2e/`, `cypress/`, `playwright/`, a `test:e2e` script in `package.json`, or e2e targets in a Makefile/Taskfile
2. **Integration:** `tests/integration/`, `integration_test`, or `test:integration` scripts
3. **Skill/agent:** `.claude/skills/`, `.claude/agents/`, and installed plugins for anything named `verify`, `validate`, `e2e`, or `test`
4. **Test suite:** any test runner (pytest, jest, go test, cargo test, etc.) with broad coverage
5. **Build-only:** nothing beyond the project's build/compile step

If no meaningful verification exists (level 5 only): add a **Task 0: Create Verification Infrastructure** that sets up the minimal verification needed for this plan:
Record the discovery result in the plan header:
**Verification Strategy:**
- **Level:** [e2e | integration | skill/agent | test-suite | build-only]
- **Command:** [exact command to run the verification]
- **What it validates:** [what passing this verification proves]
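The discovery order can be sketched as a cascade of filesystem checks. The function below is an illustrative assumption: it probes only a few representative locations from the list above and does not parse `package.json` scripts or Makefile targets.

```python
from pathlib import Path

def discover_level(root: str = ".") -> str:
    """Return the highest verification level found, per the discovery order above."""
    r = Path(root)
    if any((r / d).is_dir() for d in ("e2e", "tests/e2e", "cypress", "playwright")):
        return "e2e"
    if (r / "tests/integration").is_dir():
        return "integration"
    if any((r / d).is_dir() for d in (".claude/skills", ".claude/agents")):
        return "skill/agent"
    if (r / "tests").is_dir():
        return "test-suite"
    # Level 5: nothing found, which triggers Task 0 (create verification infrastructure).
    return "build-only"
```

The return values match the `Level` field of the Verification Strategy header.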
Before decomposing tasks, also discover project-level agents and skills that workers can leverage:
- `.claude/agents/` for agents relevant to the task domain (e.g., a test-runner agent, a db-migration agent, a `build-validator`, a `lint-fixer`)
- `.claude/skills/` for skills that match task operations

If useful agents/skills are found, reference them in task steps where applicable:
- [ ] **Step N: Run migration**
Use the project's `db-migration` agent for this step if available.
Run: `<migration command>`
Workers are not required to use discovered agents — they are hints for efficiency. The worker may execute steps directly if the agent is unavailable or unsuitable.
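A minimal sketch of this discovery step, assuming agents and skills each live in one subdirectory under `.claude/` (an assumption about layout, not a documented API):

```python
from pathlib import Path

def discover_helpers(root: str = ".") -> dict[str, list[str]]:
    """List project-level agents and skills by directory entry name, if present."""
    found: dict[str, list[str]] = {}
    for kind in ("agents", "skills"):
        base = Path(root) / ".claude" / kind
        # Missing directory simply means no helpers of that kind.
        found[kind] = sorted(p.name for p in base.iterdir()) if base.is_dir() else []
    return found
```

The result can be dropped into task steps as hints, like the `db-migration` example above.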
When decomposing tasks, consider the following:
1. Parallelism and Dependencies
Tasks should be designed for maximum parallel execution. However, a task must wait for a predecessor to complete when it modifies a file the predecessor creates or modifies, or when it consumes an interface, type, or artifact the predecessor defines.
Dependencies are stated in the task header:
### Task N: [Task Name]
**Dependencies:** Runs after Task K completes
**Files:**
- Create: `path/to/file`
- Modify: `path/to/existing-file:line-range`
- Test: `path/to/test-file`
Tasks with no dependencies are marked as parallelizable:
### Task N: [Task Name]
**Dependencies:** None (can run in parallel)
**Files:**
- Create: `path/to/file`
- Test: `path/to/test-file`
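Given dependency headers like the ones above, execution waves can be derived mechanically. A sketch (task names and the input structure are illustrative):

```python
def parallel_batches(deps: dict[str, set[str]]) -> list[list[str]]:
    """Group tasks into waves; each wave runs in parallel once earlier waves finish."""
    done: set[str] = set()
    remaining = dict(deps)
    batches: list[list[str]] = []
    while remaining:
        # A task is ready when all of its predecessors are done.
        ready = sorted(t for t, preds in remaining.items() if preds <= done)
        if not ready:
            raise ValueError(f"dependency cycle among: {sorted(remaining)}")
        batches.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return batches
```

For example, `parallel_batches({"Task 1": set(), "Task 2": set(), "Task 3": {"Task 1"}})` returns `[["Task 1", "Task 2"], ["Task 3"]]`: the first two tasks run in parallel, the third waits.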
2. Worker-Validator Structure
Each task is designed so an independent worker (subagent) can execute it and a separate validator can verify it:
This structure enables spawning multiple tasks simultaneously, each independently verifiable.
3. Task Granularity
Each step is one action (2-5 minutes):
### Task N: [Component Name]
**Dependencies:** [Predecessor task or "None (can run in parallel)"]
**Files:**
- Create: `exact/path/to/file`
- Modify: `exact/path/to/existing-file:123-145`
- Test: `tests/exact/path/to/test-file`
- [ ] **Step 1: Write the failing test**
```python
def test_specific_behavior():
result = function(input)
assert result == expected
```
- [ ] **Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
- [ ] **Step 3: Write minimal implementation**
```python
def function(input):
return expected
```
- [ ] **Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
- [ ] **Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
Every plan must end with a Final Verification Task that runs the discovered highest-level verification. This is always the last task, depends on all other tasks, and cannot be parallelized.
### Task N (Final): End-to-End Verification
**Dependencies:** All preceding tasks
**Files:** None (read-only verification)
- [ ] **Step 1: Run highest-level verification**
Run: `[verification command from Verification Strategy]`
Expected: ALL PASS
- [ ] **Step 2: Verify plan success criteria**
Manually check each success criterion from the plan header:
- [ ] [criterion 1]
- [ ] [criterion 2]
- ...
- [ ] **Step 3: Run full test suite for regressions**
Run: `[full test suite command]`
Expected: No regressions — all pre-existing tests still pass
If the final verification fails, the plan is not complete. The worker-validator loop in run-plan will handle failure response (see run-plan's E2E Failure Response Protocol).
Every step must contain the actual content a worker needs. Steps that say "implement the logic", "add tests as appropriate", or otherwise defer details to the worker are plan failures; never write them:
After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself — not a subagent dispatch.
1. Spec coverage: Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.
2. Placeholder scan: Search your plan for red flags — any of the patterns from the "No Placeholders" section above. Fix them.
3. Type consistency: Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called `clearLayers()` in Task 3 but `clearFullLayers()` in Task 7 is a bug.
4. Dependency verification: Verify that parallel tasks don't modify the same file. Verify that no dependency chain is missing.
5. Verification coverage: Does the plan include a Final Verification Task? Does it reference the discovered verification command? If no verification was discovered, is there a Task 0 creating verification infrastructure?
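Check 4 (parallel tasks must not touch the same file) can be run mechanically. A sketch, where the task-to-files mapping is hypothetical and would be read off the plan's `**Files:**` headers:

```python
def file_conflicts(parallel_tasks: dict[str, set[str]]) -> set[str]:
    """Return files claimed by more than one task marked as parallelizable."""
    owner: dict[str, str] = {}
    conflicts: set[str] = set()
    for task, files in parallel_tasks.items():
        for f in files:
            if f in owner:
                conflicts.add(f)  # second task touching the same file: flag it
            else:
                owner[f] = task
    return conflicts
```

Any non-empty result means those tasks need an explicit dependency instead of running in parallel.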
If you find issues, fix them inline. No need to re-review — just fix and move on. If a spec requirement has no corresponding task, add the task.
After saving the plan, offer execution choice:
"Plan complete and saved to docs/engineering-discipline/plans/<filename>.md."
"How would you like to proceed?"
1. Subagent execution (recommended) — dispatch a fresh subagent per task, review between tasks, fast iteration
2. Inline execution — execute tasks in this session using the run-plan skill, batch execution with checkpoints
| Anti-Pattern | Why It Fails |
|---|---|
| Marking tasks that modify the same file as parallel | File conflicts, unmergeable changes |
| Listing tasks without dependencies | Execution order tangles, interface mismatches |
| Steps that assume "the worker will figure it out" | Worker's arbitrary interpretation → spec drift |
| Approving a plan with placeholders | Blocked at execution stage, must return to planning |
| Completing a plan without Self-Review | Missing spec coverage, type mismatches, dependency errors go undetected |
Self-check when plan writing is complete:
After plan approval:
- Execute the plan with the run-plan skill
- If new ambiguities surface, return to the clarification skill to resolve them

This skill itself does not invoke the next skill. It ends by presenting the plan document and letting the user choose the next step.