Create detailed implementation plans from specifications
Creates detailed implementation plans from specifications with task decomposition and risk analysis.
/plugin marketplace add v1truv1us/ai-eng-system
/plugin install ai-eng-system@ai-eng-marketplace

Create a detailed implementation plan for: $ARGUMENTS
Phase 3 of Spec-Driven Workflow: Research → Specify → Plan → Work → Review
Take a deep breath and approach this planning task systematically. Analyze requirements, decompose them into atomic tasks, identify dependencies, and create a comprehensive implementation strategy.
Poor planning leads to wrong solutions, wasted time, and implementation rework. Incomplete task decomposition causes blocking issues during implementation. Missing dependencies prevent parallel work. This planning task is critical for ensuring smooth, efficient implementation.
I bet you can't decompose requirements into truly atomic tasks that can be executed independently. The challenge is breaking complex features into small, completable units while maintaining correct dependency relationships. Success means every task can be completed independently, dependencies are minimal, and implementation follows a predictable path.
# From description
/ai-eng/plan "implement user authentication with JWT"
# From specification
/ai-eng/plan --from-spec=specs/auth/spec.md
# From research
/ai-eng/plan --from-research=docs/research/2026-01-01-auth-patterns.md
# Ralph Wiggum iteration for complex plans
/ai-eng/plan "microservices migration" --ralph --ralph-show-progress
# Ralph Wiggum with custom iterations and quality gate
/ai-eng/plan --from-spec=specs/auth/spec.md --ralph --ralph-max-iterations 8 --ralph-quality-gate="rg 'Depends On:' specs/*/plan.md"
| Option | Description |
|---|---|
| --swarm | Use Swarms multi-agent orchestration |
| -s, --scope <scope> | Plan scope (architecture\|implementation\|review\|full) [default: full] |
| -r, --requirements <reqs...> | List of requirements |
| -c, --constraints <constraints...> | List of constraints |
| -o, --output <file> | Output plan file [default: generated-plan.yaml] |
| --from-spec <file> | Create plan from specification file |
| --from-research <file> | Create plan from research document |
| -v, --verbose | Enable verbose output |
| --ralph | Enable Ralph Wiggum iteration mode for persistent plan refinement |
| --ralph-max-iterations <n> | Maximum iterations for Ralph Wiggum mode [default: 10] |
| --ralph-completion-promise <text> | Custom completion promise text [default: "Plan is comprehensive and ready for execution"] |
| --ralph-quality-gate <command> | Command to run after each iteration for quality validation |
| --ralph-stop-on-gate-fail | Stop iterations when quality gate fails [default: continue] |
| --ralph-show-progress | Show detailed iteration progress |
| --ralph-log-history <file> | Log iteration history to JSON file |
| --ralph-verbose | Enable verbose Ralph Wiggum iteration output |
Load skills/prompt-refinement/SKILL.md and use phase: plan to transform your prompt into structured TCRO format (Task, Context, Requirements, Output). If using --from-spec, extract user stories and non-functional requirements from the specification. See templates/plan.md for output structure.
If you delegate discovery to subagents (recommended for large codebases), include a small Context Handoff Envelope in each Task prompt.
Use:
<CONTEXT_HANDOFF_V1>
Goal: (1 sentence)
Scope: (codebase|docs|external|all)
Known constraints: (bullets; optional)
What I already checked: (bullets; optional)
Files/paths to prioritize: (bullets; optional)
Deliverable: (what you must return)
Output format: RESULT_V1
</CONTEXT_HANDOFF_V1>
And require:
<RESULT_V1>
RESULT:
EVIDENCE:
OPEN_QUESTIONS:
NEXT_STEPS:
CONFIDENCE: 0.0-1.0
</RESULT_V1>
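As a sanity check on subagent output, the RESULT_V1 envelope can be parsed mechanically. The following is an illustrative sketch (a hypothetical helper, not part of the shipped scripts); the regex-based field extraction is an assumption about how you might validate the envelope:

```typescript
// Hypothetical parser for the RESULT_V1 envelope returned by a subagent.
// Field names mirror the template above; the parsing approach is illustrative.
interface ResultV1 {
  result: string;
  evidence: string;
  openQuestions: string;
  nextSteps: string;
  confidence: number; // 0.0-1.0
}

function parseResultV1(raw: string): ResultV1 | null {
  const body = raw.match(/<RESULT_V1>([\s\S]*?)<\/RESULT_V1>/)?.[1];
  if (!body) return null;
  // Capture a field's value up to the next FIELD_NAME: line.
  const field = (name: string): string =>
    body.match(new RegExp(`${name}:([^\\n]*(?:\\n(?![A-Z_]+:)[^\\n]*)*)`))?.[1].trim() ?? "";
  const confidence = Number.parseFloat(field("CONFIDENCE"));
  if (Number.isNaN(confidence) || confidence < 0 || confidence > 1) return null;
  return {
    result: field("RESULT"),
    evidence: field("EVIDENCE"),
    openQuestions: field("OPEN_QUESTIONS"),
    nextSteps: field("NEXT_STEPS"),
    confidence,
  };
}
```

Rejecting envelopes with a missing or out-of-range CONFIDENCE forces subagents to commit to a numeric self-assessment before their findings feed into the plan.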
- Codebase Analysis
- Tech Stack Detection
- Scope Definition
For each user story in spec:
Example mapping:
**User Story**: US-001 User Registration
→ Task REG-001: Create User database model
→ Task REG-002: Implement registration API endpoint
→ Task REG-003: Add email validation
→ Task REG-004: Implement password hashing
If proceeding without specification:
Break the feature into atomic tasks using this hierarchy:
Epic (the full feature)
└── Phase (logical grouping, ~1 day)
└── Task (atomic unit, ~30 min)
└── Subtask (if task is still too large)
Each atomic task MUST include:
| Field | Description | Example |
|---|---|---|
| ID | Unique identifier | FEAT-001-A |
| Title | Action-oriented name | "Create SessionManager class" |
| Depends On | Blocking task IDs | FEAT-001-B (or "None") |
| Files | Exact files to modify/create | src/context/session.ts |
| Acceptance Criteria | Checkboxes that define "done" | [ ] Class exports correctly |
| Spec Reference | Links to user story/acceptance criteria | US-001: AC-2 |
| Estimated Time | Time box | 30 min |
| Complexity | Low / Medium / High | Medium |
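The required fields above map naturally onto a typed record. This is an illustrative sketch (the field names follow the table; the type itself is not part of the plugin):

```typescript
// Illustrative shape for one atomic task, mirroring the required-fields table.
type Complexity = "Low" | "Medium" | "High";

interface AtomicTask {
  id: string;                   // unique identifier, e.g. "FEAT-001-A"
  title: string;                // action-oriented, e.g. "Create SessionManager class"
  dependsOn: string[];          // blocking task IDs; empty array means "None"
  files: string[];              // exact files to modify/create
  acceptanceCriteria: string[]; // checkbox items that define "done"
  specReference?: string;       // e.g. "US-001: AC-2"
  estimatedMinutes: number;     // time box, e.g. 30
  complexity: Complexity;
}

// A task is ready to start only when all of its dependencies are complete.
function isReady(task: AtomicTask, completed: Set<string>): boolean {
  return task.dependsOn.every((id) => completed.has(id));
}
```

Keeping `dependsOn` as explicit IDs (rather than prose) is what makes the parallel-track scheduling later in this command mechanical rather than judgment-based.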
Based on feature type and technical approach, generate:
# Data Model
## Entities
### User
```typescript
interface User {
  id: string;            // UUID, primary key
  email: string;         // unique, indexed
  password_hash: string; // bcrypt hash
  created_at: Date;      // timestamp
  updated_at: Date;      // timestamp
}
```

Indexes:
- users_email_unique on (email) for uniqueness
- users_created_at for sorting
#### contracts/ (if API involved)
# API Contracts

## POST /api/auth/register

**Request:**
```json
{
  "email": "user@example.com",
  "password": "securePassword123"
}
```

**Response (201 Created):**
```json
{
  "success": true,
  "user_id": "uuid-here"
}
```

**Response (400 Bad Request):**
```json
{
  "error": "Invalid email format"
}
```
#### research.md (if technical decisions needed)
Document decisions made during planning:
- Technology choices with rationale
- Trade-offs considered
- Alternatives evaluated
### Phase 6: Risk Assessment
For each phase, identify:
| Risk | Impact | Likelihood | Mitigation |
|------|--------|------------|------------|
| (risk description) | High/Med/Low | High/Med/Low | (strategy) |
### Phase 7: Testing Strategy
Define testing approach for each phase:
- **Unit Tests**: What functions/classes need tests?
- **Integration Tests**: What interactions need verification?
- **Manual Testing**: What scenarios to validate?
- **Regression Checks**: What existing functionality could break?
**Spec-driven validation**: Ensure all spec acceptance criteria have corresponding tests
## Output Format
### Directory: `specs/[feature-name]/`
```
specs/[feature-name]/
├── spec.md          # From /ai-eng/specify (if exists)
├── plan.md          # Implementation plan (this file)
├── tasks.md         # Task breakdown (optional separate file)
├── data-model.md    # Data schemas (if applicable)
├── research.md      # Technical research (if applicable)
└── contracts/       # API contracts (if applicable)
    ├── api-spec.json
    └── signalr-spec.md
```
### File: `specs/[feature-name]/plan.md`
```markdown
# [Feature Name] Implementation Plan
**Status**: Draft | In Progress | Complete
**Created**: [date]
**Specification**: specs/[feature-name]/spec.md (if exists)
**Estimated Effort**: [hours/days]
**Complexity**: Low | Medium | High
## Overview
[2-3 sentence summary of technical approach]
## Specification Reference
[If spec exists, summarize user stories and their technical mapping]
### User Stories → Tasks Mapping
| User Story | Tasks | Status |
|-------------|--------|--------|
| US-001 | TASK-001, TASK-002 | Pending |
| US-002 | TASK-003 | Pending |
## Architecture
[Diagram or description of component relationships]
## Phase 1: [Phase Name]
**Goal**: [What this phase accomplishes]
**Duration**: [Estimated time]
### Task 1.1: [Task Title]
- **ID**: FEAT-001-A
- **Depends On**: None
- **User Story**: US-001 (if from spec)
- **Files**:
- `path/to/file.ts` (modify)
- `path/to/new-file.ts` (create)
- **Acceptance Criteria**:
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Spec AC: [Link to spec acceptance criteria]
- [ ] Tests pass
- **Time**: 30 min
- **Complexity**: Low
### Task 1.2: [Task Title]
[...]
## Phase 2: [Phase Name]
[...]
## Dependencies
- [External dependency 1]
- [Internal dependency 1]
## Risks
| Risk | Impact | Likelihood | Mitigation |
|------|--------|------------|------------|
## Testing Plan
### Unit Tests
- [ ] Test for [component]
### Integration Tests
- [ ] Test [interaction]
### Spec Validation
- [ ] All user stories have corresponding tasks
- [ ] All spec acceptance criteria are covered by task acceptance criteria
- [ ] Non-functional requirements are implemented
## Rollback Plan
[How to revert if something goes wrong]
## References
- [Link to specification] (if exists)
- [Link to research findings]
- [Link to similar implementations]
```

If tasks.md is generated separately:
# [Feature Name] Tasks
## Task List
### PRIORITY TRACK - Can execute in parallel
- [ ] TASK-001
- [ ] TASK-002
### TRACK - After PRIORITY TRACK completes
- [ ] TASK-003
- [ ] TASK-004
## Task Details
### TASK-001: [Task Title]
**ID**: TASK-001
**User Story**: US-001
**Depends On**: None
**Estimated**: 30 min
**Status**: Pending | In Progress | Complete
#### Acceptance Criteria
- [ ] [Criterion 1]
- [ ] [Criterion 2]
#### Files
- `file1.ts` (create)
- `file2.ts` (modify)
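The PRIORITY TRACK / TRACK split above is just the task dependency graph layered topologically. A hypothetical helper (not part of the plugin) showing how tracks could be derived from each task's Depends On field:

```typescript
// Group tasks into execution tracks: every task in a track depends only
// on tasks in earlier tracks, so tasks within a track can run in parallel.
// Track 0 corresponds to the PRIORITY TRACK.
function deriveTracks(deps: Map<string, string[]>): string[][] {
  const tracks: string[][] = [];
  const done = new Set<string>();
  let remaining = [...deps.keys()];
  while (remaining.length > 0) {
    // A task is ready when all of its dependencies are already done.
    const ready = remaining.filter((id) =>
      (deps.get(id) ?? []).every((d) => done.has(d)),
    );
    if (ready.length === 0) throw new Error("Dependency cycle detected");
    tracks.push(ready);
    ready.forEach((id) => done.add(id));
    remaining = remaining.filter((id) => !done.has(id));
  }
  return tracks;
}
```

The cycle check doubles as a plan-validation step: a plan whose Depends On fields cannot be layered this way contains a circular dependency and needs rework before /ai-eng/work runs it.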
After generating the plan:
- /ai-eng/work - Reads plan.md for task execution
- specs/[feature]/spec.md - User stories, acceptance criteria, NFRs
- CLAUDE.md - Project philosophy and constraints
- docs/research/*.md - Optional research context

# User provides feature name (spec already exists)
/ai-eng/plan --from-spec=specs/auth/spec.md
# Step 0: Prompt refinement skill asks planning-specific questions
# Step 1: Loads spec from specs/auth/spec.md
# Step 2: Maps user stories to technical tasks
# Step 3: Generates plan.md, data-model.md, contracts/
# Step 4: Validates spec coverage
# User provides description without spec
/ai-eng/plan "implement JWT-based authentication"
# Step 0: Prompt refinement asks planning questions
# Step 1: Warns about missing spec, offers to proceed
# Step 2: Gathers requirements through clarification
# Step 3: Generates plan.md
When specification exists:
Always cross-reference between artifacts:
Ensure tasks are truly atomic:
Successful planning achieves:
After planning, execute the plan using /ai-eng/work. The underlying script can also be invoked directly:
bun run scripts/run-command.ts plan "$ARGUMENTS" [options]
For example:
bun run scripts/run-command.ts plan "implement auth" --from-spec=specs/auth/spec.md --output=plans/auth.yaml
bun run scripts/run-command.ts plan --from-research=docs/research/auth.md --scope=implementation

After creating the plan, rate your confidence in its completeness and accuracy (0.0-1.0). Identify any uncertainties about task decomposition, missing dependencies, or areas where acceptance criteria may be ambiguous. Note any implementation risks that weren't adequately addressed in the plan.
When --ralph flag is enabled, the planning process follows a persistent refinement cycle:
Iteration Process:
Planning Quality Gate Examples:
# Check task completeness
rg "Acceptance Criteria:" specs/*/plan.md | wc -l
# Validate dependencies mapping
rg "Depends On:" specs/*/plan.md
# Check risk assessment completeness
rg "Impact.*Likelihood.*Mitigation" specs/*/plan.md
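The rg checks above can also be expressed as a pure function over the plan text, for example inside a custom gate script. This is an illustrative sketch; the patterns and pass threshold are assumptions, not the plugin's built-in gates:

```typescript
// Count plan-quality signals in a plan.md body, in the spirit of the
// rg one-liners above. Patterns and threshold are illustrative.
function planGate(planText: string): { tasks: number; deps: number; pass: boolean } {
  const tasks = (planText.match(/Acceptance Criteria/g) ?? []).length;
  const deps = (planText.match(/Depends On/g) ?? []).length;
  // Gate passes when every task section also declares its dependencies.
  return { tasks, deps, pass: tasks > 0 && deps >= tasks };
}
```

A gate like this pairs naturally with --ralph-stop-on-gate-fail: exit non-zero when `pass` is false and the iteration loop halts instead of refining a structurally incomplete plan.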
Iteration Metrics:
Example Progress Output:
🔄 Ralph Wiggum Planning Iteration 3/10
📝 Tasks: 12 total (+2 this iteration)
🔗 Dependencies: 8 mapped (+1 clarified this iteration)
🛡️ Risk mitigations: 5 complete (+2 this iteration)
🧪 Test coverage: 85% (+5% this iteration)
✅ Quality gate: PASSED
🎯 Plan completeness: 90% (improving)
Planning-Specific Considerations:
Default Settings:
Best Practices:
$ARGUMENTS