Create a TDD implementation plan by exploring the codebase, resolving ambiguities, and producing an executable task list.

Install: `/plugin marketplace add pproenca/dot-claude`, then `/plugin install dev-workflow@pproenca`.
$ARGUMENTS
If empty: use AskUserQuestion to ask what feature to plan.
If a @file is referenced: read it and use it as context.
Dispatch code-explorer in background so you can continue with clarifications:
```yaml
Task:
  subagent_type: dev-workflow:code-explorer
  model: opus
  description: "Explore codebase"
  prompt: |
    Survey codebase for [feature]:
    1. Similar features - existing implementations to reference
    2. Integration points - boundaries and dependencies
    3. Testing patterns - conventions and test file locations
    Report 10-15 essential files. Limit to 10 tool calls.
  run_in_background: true
```
Store the task_id - you'll collect results after clarifications.
Skip if: Design doc from /dev-workflow:brainstorm exists AND is comprehensive.
Identify underspecified aspects of the feature and present them using AskUserQuestion (one question at a time, 2-4 options each).
Wait for answers before Step 3.
After clarifications, collect exploration results:
```yaml
TaskOutput:
  task_id: explorer_task_id
  block: true
```
Note patterns for Step 3.
For complex features (5+ files), dispatch MULTIPLE code-architects in parallel for different perspectives:
```yaml
# Launch ALL architects in ONE message for parallel execution
Task:
  subagent_type: dev-workflow:code-architect
  model: opus
  description: "Minimal changes design"
  prompt: |
    Design architecture for [feature] using exploration context.
    Focus: MINIMAL CHANGES
    - Smallest diff, maximum reuse of existing code
    - Fewest new files
    - Trade-off: may be less clean
  run_in_background: true
Task:
  subagent_type: dev-workflow:code-architect
  model: opus
  description: "Clean architecture design"
  prompt: |
    Design architecture for [feature] using exploration context.
    Focus: CLEAN ARCHITECTURE
    - Best maintainability and testability
    - Clear separation of concerns
    - Trade-off: may require more changes
  run_in_background: true
Task:
  subagent_type: dev-workflow:code-architect
  model: opus
  description: "Pragmatic balance design"
  prompt: |
    Design architecture for [feature] using exploration context.
    Focus: PRAGMATIC BALANCE
    - Balance between clean code and minimal changes
    - Follow existing patterns where sensible
    - Trade-off: compromise approach
  run_in_background: true
```
Collect all results:
```yaml
TaskOutput:
  task_id: minimal_architect_id
  block: true
TaskOutput:
  task_id: clean_architect_id
  block: true
TaskOutput:
  task_id: pragmatic_architect_id
  block: true
```
Synthesize the three perspectives and present the trade-offs to the user via AskUserQuestion.
CRITICAL: Save to docs/plans/YYYY-MM-DD-<feature-name>.md, NOT to .claude/plans/.
```markdown
# [Feature Name] Implementation Plan

> **Execution:** Use `/dev-workflow:execute-plan docs/plans/[this-file].md` to implement task-by-task.

**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences from Step 3]
**Tech Stack:** [Key technologies, libraries, frameworks]

---
```
Each task MUST be self-contained with bite-sized steps.
The Golden Rule:
Write plans assuming the executor has zero context and questionable taste. They are skilled developers who know almost nothing about this codebase, its conventions, or the reasoning behind this plan, and they will take shortcuts if the plan allows it.
Step Granularity: 2-5 minutes each
Each "Step N:" is ONE action: write one test, run it, write the minimal implementation, or commit.
Why 2-5 minutes matters: small steps surface failures immediately, keep commits atomic, and leave the executor no room to improvise.
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/new.py`
- Modify: `exact/path/to/existing.py:50-75`
- Test: `tests/exact/path/test_file.py`
**Step 1: Write the failing test** (2-5 min)
```python
def test_specific_behavior():
    # Arrange
    input_data = {"key": "value"}

    # Act
    result = function_name(input_data)

    # Assert
    assert result == expected_output
```
**Step 2: Run test to verify it fails** (30 sec)
```bash
pytest tests/exact/path/test_file.py::test_specific_behavior -v
```
Expected: FAIL with `[specific error, e.g., "NameError: name 'function_name' is not defined"]`
**Step 3: Write minimal implementation** (2-5 min)
```python
def function_name(input_data):
    # Minimal code to make test pass
    return expected_output
```
**Step 4: Run test to verify it passes** (30 sec)
```bash
pytest tests/exact/path/test_file.py::test_specific_behavior -v
```
Expected: PASS (1 passed)
**Step 5: Commit** (30 sec)
```bash
git add tests/exact/path/test_file.py src/exact/path/module.py
git commit -m "feat(scope): add specific_behavior"
```
What to Document (zero context assumption):
| Item | Reason |
|---|---|
| Exact file paths | Not "the auth file" |
| Complete code snippets | Not "add validation" |
| Specific test targets | ::test_name prevents wrong tests |
| Expected output | Proves test tests the right thing |
| Commit message | Prevents "update" commits |
Group tasks by file overlap: tasks with NO shared files execute in parallel (3-5 per group); tasks that touch the same files must run serially:
| Task Group | Tasks | Rationale |
|---|---|---|
| Group 1 | 1, 2, 3 | Independent modules, no file overlap |
| Group 2 | 4, 5 | Both touch shared types, must be serial |
| Group 3 | 6, 7, 8 | Independent tests, no file overlap |
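As an illustration, the no-overlap rule can be sketched as a greedy partition over each task's file set. This is a simplified model, not part of the command itself: the 3-5 tasks-per-group cap is omitted, and the task and file names are hypothetical.

```python
def group_tasks(tasks):
    """Greedily place tasks into groups so that no two tasks in the
    same group touch the same file. `tasks` maps task id -> set of
    file paths the task creates or modifies."""
    groups = []
    for tid, files in tasks.items():
        for group in groups:
            # A task may join a group only if it shares no files
            # with every task already in that group.
            if all(files.isdisjoint(tasks[other]) for other in group):
                group.append(tid)
                break
        else:
            groups.append([tid])
    return groups

# Hypothetical example: tasks 4 and 5 both touch src/types.py,
# so they land in different groups and run serially.
tasks = {
    1: {"src/a.py"},
    2: {"src/b.py"},
    3: {"src/c.py"},
    4: {"src/types.py", "src/d.py"},
    5: {"src/types.py"},
}
print(group_tasks(tasks))
```

Groups are emitted in insertion order, so earlier groups can be dispatched first while later (conflicting) tasks wait.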
Always include "Code Review" as the final task.
```bash
mkdir -p docs/plans
# Write plan content to docs/plans/YYYY-MM-DD-<feature>.md
git add docs/plans/
git commit -m "docs: implementation plan for <feature>"
```
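A minimal sketch of deriving the dated filename, assuming a kebab-case feature slug and a `date` that supports `+%F` (both GNU and BSD do); the `feature` value here is a hypothetical example, not part of the command:

```shell
# Derive docs/plans/YYYY-MM-DD-<feature>.md from a slug and today's date.
feature="user-auth"                              # hypothetical example slug
plan_file="docs/plans/$(date +%F)-${feature}.md"
mkdir -p docs/plans
printf '# %s Implementation Plan\n' "$feature" > "$plan_file"
echo "$plan_file"
```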
Use AskUserQuestion:
```yaml
AskUserQuestion:
  header: "Execute"
  question: "Plan saved. How to proceed?"
  multiSelect: false
  options:
    - label: "Execute now"
      description: "Run /dev-workflow:execute-plan immediately"
    - label: "Later"
      description: "Execute manually when ready"
    - label: "Revise plan"
      description: "Adjust the plan first"
```
If "Execute now":
Run `/dev-workflow:execute-plan docs/plans/[filename].md`.
If "Later":
Report the plan file path for manual execution.
If "Revise plan":
Wait for feedback, update plan, return to Step 4.
Before finalizing any plan, verify:
- Exact file paths for every create/modify/test entry
- Complete code snippets, not descriptions
- Test commands use `::test_name` targets
- Expected output stated for every test run
- A specific commit message per task