Generate technical implementation plan (plan.md) from functional requirements specification.
Generates technical implementation plan from functional requirements specification with detailed architecture and phased approach.
Installation:
/plugin marketplace add davojta/easy-spec
/plugin install easy-spec@davojta
Focus on HOW (technical implementation):
Translate requirements to architecture:
Be specific about technology:
Extract from spec.md:
Define technology stack:
**Language/Version**: Node.js 20+, TypeScript 5.7+
**Primary Dependencies**: @mapbox/maps-snapshotter 11.15.1, commander.js
**Storage**: File system (GeoJSON input, PNG output)
**Testing**: Vitest for unit/integration tests
**Target Platform**: macOS, Linux
**Performance Goals**: <500ms for small files, <2s for medium
**Constraints**: ~1500MB memory per instance
**Scale/Scope**: Single CLI tool, batch processing support
ai_tasks/[project]/[feature]/
├── plan.md # This file
├── research.md # Phase 0 technical research
├── spec.md # Input (functional requirements)
└── tasks.md # Phase 2 output (from /tasks command)
Define project layout:
project-root/
├── bin/ # CLI entry point
├── src/
│ ├── service/ # Business logic
│ ├── config/ # Configuration
│ └── types.d.ts # Type definitions
├── config/ # Config files, templates
├── tests/
│ ├── unit/ # Unit tests
│ └── integration/ # Integration tests
└── package.json
Phase 0: Research & Outline
Phase 1: Design & Contracts
Phase 2: Task Planning Approach
Phase 3+: Implementation Roadmap
Organize by category:
Functional Requirements → Technical Implementation:
| Spec (WHAT) | Plan (HOW) |
|---|---|
| FR-001: Accept GeoJSON files | Use fs.readFileSync, validate with JSON.parse |
| FR-009: Calculate bounding box | Implement recursive coordinate extraction |
| FR-020: Use GL Native renderer | @mapbox/maps-snapshotter with setStyleURI |
| FR-026: Support token resolution | CLI flag → env var → AWS Secrets (priority) |
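To make the FR-001/FR-009 rows concrete, here is a minimal TypeScript sketch; the function and type names are illustrative, not taken from the spec:

```typescript
// Hypothetical sketch for FR-001 (accept GeoJSON files) and FR-009
// (bounding box). Names are illustrative, not from the codebase.
import { readFileSync } from "node:fs";

type BBox = [number, number, number, number]; // [minX, minY, maxX, maxY]

// FR-001: read a GeoJSON file and validate its syntax with JSON.parse.
function readGeoJSON(path: string): unknown {
  const raw = readFileSync(path, "utf8");
  return JSON.parse(raw); // throws SyntaxError on invalid JSON
}

// FR-009: recursively walk nested coordinate arrays ([x, y] leaves)
// and fold them into a single bounding box.
function computeBbox(
  coords: unknown,
  bbox: BBox = [Infinity, Infinity, -Infinity, -Infinity]
): BBox {
  if (Array.isArray(coords) && typeof coords[0] === "number") {
    const [x, y] = coords as [number, number];
    return [
      Math.min(bbox[0], x),
      Math.min(bbox[1], y),
      Math.max(bbox[2], x),
      Math.max(bbox[3], y),
    ];
  }
  if (Array.isArray(coords)) {
    return coords.reduce((acc: BBox, c) => computeBbox(c, acc), bbox);
  }
  return bbox;
}
```

The recursion handles any geometry nesting depth (Point through MultiPolygon) without per-type branching, which keeps the FR-009 implementation small.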
User Scenarios → Test Structure:
| Scenario | Test Implementation |
|---|---|
| Render Point with minimal style | Integration test with fixture point.geojson |
| Handle invalid GeoJSON | Unit test expecting InvalidGeoJSONError |
| Performance < 500ms | Performance test with timing assertions |
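A sketch of the code under test for the "Handle invalid GeoJSON" scenario; `InvalidGeoJSONError` and `parseGeoJSON` are hypothetical names, not from the spec:

```typescript
// Hypothetical target of the "Handle invalid GeoJSON" unit test.
class InvalidGeoJSONError extends Error {
  constructor(message: string) {
    super(message);
    this.name = "InvalidGeoJSONError";
  }
}

const GEOJSON_TYPES = new Set([
  "Point", "MultiPoint", "LineString", "MultiLineString",
  "Polygon", "MultiPolygon", "GeometryCollection", "Feature", "FeatureCollection",
]);

// Wrap JSON.parse and a shallow type check so callers see one error type.
function parseGeoJSON(raw: string): unknown {
  let doc: any;
  try {
    doc = JSON.parse(raw);
  } catch {
    throw new InvalidGeoJSONError("input is not valid JSON");
  }
  if (!doc || typeof doc !== "object" || !GEOJSON_TYPES.has(doc.type)) {
    throw new InvalidGeoJSONError(`unknown GeoJSON type: ${doc?.type}`);
  }
  return doc;
}
```

With this shape, the Vitest scenario reduces to `expect(() => parseGeoJSON(raw)).toThrow(InvalidGeoJSONError)`.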
Fill each field based on requirements:
Mark "NEEDS CLARIFICATION" if research is needed, then resolve in Phase 0.
If Technical Context has NEEDS CLARIFICATION items:
Identify unknowns:
Research each unknown:
Document in research.md:
**Decision**: Chose [technology/approach]
**Rationale**: [Why this choice]
**Alternatives**: [What else was considered]
Update Technical Context: Replace NEEDS CLARIFICATION with decisions
Data Models:
Test Scenarios:
When using this command, provide the feature directory path, e.g. `ai_tasks/another-big-project/ml-automation/feature-snapshotter/`.
Example usage:
/plan ai_tasks/another-big-project/ml-automation/feature-snapshotter/
With custom prompt:
/plan ai_tasks/data-pipeline/etl-process/ Focus on scalability and error recovery patterns
Generated plan.md should:
❌ Don't include:
✅ Do include:
Now generate the plan.md file following the embedded template structure below, translating functional requirements from spec.md into technical implementation details.
Use this template structure when generating plan.md files:
# Implementation Plan Template for Claude Code
**Created**: [DATE]
**Status**: Draft
**Input**: specification from `spec.md`
> Template for creating technical implementation details and plan for the functional requirements for use with Claude Code. Based on analysis of existing plans and Claude Code best practices.
## Project Overview
### Project Name
`[Brief, descriptive name of the project/feature]`
### Problem Statement
`[Clear, concise description of the problem to be solved or feature to be implemented]`
### Solution Summary
`[High-level, concise summary of the approach and key expected outcomes]`
---
## Template Requirements
### Execution Flow (main)
## Summary
[Extract from feature spec: primary requirement + technical approach from research]
## Technical Implementation
### Technical Context
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [single/web/mobile - determines source structure]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
### Documentation (this feature)
ai_tasks/[###project - one of pipeline, qa-tool, jira, icons-pipeline, routines, another-big-project]/[###-feature]/
├── plan.md        # This file (/plan command output)
├── research.md    # Phase 0 output (/plan command)
├── quickstart.md  # Phase 1 output (/plan command)
└── tasks.md       # Phase 2 output (/tasks command - NOT created by /plan)
### Source Code (repository root)
src/
├── services/
├── bin/
└── utils/
tests/
└── unit/
## Phase 0: Outline & Research
1. **Extract unknowns from Technical Context** above:
- For each NEEDS CLARIFICATION → research task
- For each dependency → best practices task
- For each integration → patterns task
2. **Generate and dispatch research agents**:
For each unknown in Technical Context:
  Task: "Research {unknown} for {feature context}"
For each technology choice:
  Task: "Find best practices for {tech} in {domain}"
3. **Consolidate findings** in `research.md` using format:
- Decision: [what was chosen]
- Rationale: [why chosen]
- Alternatives considered: [what else evaluated]
**Output**: research.md with all NEEDS CLARIFICATION resolved
## Phase 1: Design & Contracts
*Prerequisites: research.md complete*
1. **Extract entities from feature spec** → `data-model.md`:
- Entity name, fields, relationships
- Validation rules from requirements
- State transitions if applicable
- Separate input and output models, and outline intermediate models if needed
- Use JSON Schema for validation
2. **Extract test scenarios** from user stories → `tests.md`:
- Each story → integration test scenario
- Quickstart test = story validation steps
**Output**: data-model.md, tests.md
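To illustrate the input/intermediate/output model separation called for above, a minimal TypeScript sketch; all entity and field names here are hypothetical:

```typescript
// Hypothetical Phase 1 data-model sketch — names are illustrative.
// Input model: what the CLI accepts (untrusted, loosely typed).
interface RenderRequestInput {
  geojsonPath: string;
  width?: number;  // px, optional
  height?: number; // px, optional
}

// Intermediate model: validated, with defaults applied.
interface RenderJob {
  geojsonPath: string;
  width: number;
  height: number;
}

// Output model: what the tool reports back to the caller.
interface RenderResult {
  outputPath: string;
  durationMs: number;
}

// Validation step turning the input model into the intermediate model;
// the validation rules come from the requirements in spec.md.
function toRenderJob(input: RenderRequestInput): RenderJob {
  if (!input.geojsonPath.endsWith(".geojson")) {
    throw new Error("geojsonPath must point to a .geojson file");
  }
  return {
    geojsonPath: input.geojsonPath,
    width: input.width ?? 512,
    height: input.height ?? 512,
  };
}
```

Keeping the three models distinct makes the JSON Schema validation boundary explicit: the schema describes the input model only, while services consume the intermediate model.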
## Phase 2: Task Planning Approach
**Task Generation Strategy**:
- Load `.specify/templates/tasks-template.md` as base
- Generate tasks from Phase 1 design docs (contracts, data model, quickstart)
- Each contract → contract test task [P]
- Each entity → model creation task [P]
- Each user story → integration test task
- Implementation tasks to make tests pass
**Ordering Strategy**:
- TDD order: Tests before implementation
- Dependency order: Models before services before UI
- Mark [P] for parallel execution (independent files)
**Estimated Output**: 25-30 numbered, ordered tasks in tasks.md
**IMPORTANT**: This phase is executed by the /tasks command, NOT by /plan
## Phase 3+: Future Implementation
*These phases are beyond the scope of the /plan command*
**Phase 3**: Task execution (/tasks command creates tasks.md)
**Phase 4**: Implementation (execute tasks.md following constitutional principles)
**Phase 5**: Validation (run tests, execute quickstart.md, performance validation)
## Complexity Tracking
*Fill ONLY if Constitution Check has violations that must be justified*
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
## Tasks Breakdown
### Task N: [Task Name]
**Objective:** `[What this task accomplishes]`
**Requirements:**
- `[Specific requirement 1]`
- `[Specific requirement 2]`
- `[Integration points with existing systems]`
- `[Performance/quality criteria]`
**Key Components:**
```language
# Very laconic pseudo-code (or real code) examples or configuration snippets
# Briefly show the expected structure and patterns
# Use plain English descriptions that Claude can understand and act on
```

**Files to Create/Modify:**
- `path/to/file.ext` - Description of file purpose
- `path/to/other.ext` - Description of what needs implementation

**Project Structure:**
project-root/
├── config-files
├── source-directories/
│ ├── module1/
│ └── module2/
└── documentation/
**Dependencies:**
```
# Package configuration examples
[dependencies]
key-package = "version"
```

**Environment Setup:**
```shell
# Commands to set up the project
command-to-install
command-to-configure
```
**Integration Points:**
- `[How this integrates with existing systems]`
- `[APIs or services used]`
- `[Data flow and dependencies]`
- `[Configuration management approach]`

**Acceptance Criteria:**
- [ ] `[Primary feature working correctly]`
- [ ] `[Error handling implemented]`
- [ ] `[Performance meets requirements]`
- [ ] `[Tests passing]`
- [ ] `[Documentation complete]`
- [ ] `[Code review requirements met]`
- [ ] `[Configuration validated]`
- [ ] `[Monitoring and logging in place]`
- [ ] `[Rollback procedures documented]`

**Usage Examples:**
```shell
# Basic usage examples
command --option value

# Advanced usage scenarios
command --complex-option --with-parameters
```
**Expected Output:**
```
Example output showing what success looks like
```
**Quality Checklist:**
- [ ] `[Coding standards followed]`
- [ ] `[Security considerations addressed]`
- [ ] `[Performance optimizations implemented]`

**References:**
- `[Links to relevant documentation]`
- `[Related projects or examples]`
- `[Technical specifications]`

**For Claude Code Users:**
Best Practices: