Generates step-by-step implementation plans from design specs for multi-step tasks, verifying APIs and dependencies exist before coding, detailing files, code, tests, commits.
Install: `npx claudepluginhub gadaalabs/claude-code-on-steroids`

This skill uses the workspace's default tool permissions.
**BLUEPRINT** — *A blueprint is the precise, buildable plan that follows the architect's vision.*
When invoked, BLUEPRINT translates an approved design spec into step-by-step implementation tasks: which files to touch, what code to write, how to test it, when to commit. Zero ambiguity for the engineer executing the plan.
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, what code to write, which docs to check, and how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume the engineer is a skilled developer who knows almost nothing about our toolset or problem domain, and whose grasp of good test design is shaky.
Announce at start: "Running BLUEPRINT to create the implementation plan."
Context: This should be run in a dedicated worktree (created by architect skill).
Save plans to: docs/superpowers/plans/YYYY-MM-DD-<feature-name>.md
Before mapping files or writing a single task, verify that every API, library method, and framework feature referenced in the spec actually exists.
This is the most common source of plan failure: tasks that reference library.method() that doesn't exist, parameters that were deprecated, or internal utilities at wrong paths. A plan built on phantom APIs fails at execution time, not at review time.
For each external dependency in the spec:
- Confirm it is declared in the project manifest: `package.json` / `pyproject.toml` / `go.mod`

For each internal utility referenced:
- Confirm the file exists at the stated path and exposes the function or class with the expected signature

If a task would call `library.method()` without verifying it in docs → look it up. Options in order: official documentation, then the installed package's source or type definitions, then a quick REPL check.
Never plan around a phantom API and hope the implementer figures it out.
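One way to make this check concrete for Python dependencies; a sketch (the module and attribute names below are just examples):

```python
import importlib

def api_exists(module_name: str, attr: str) -> bool:
    """Return True only if module_name imports cleanly and exposes attr."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

# A real API passes; a phantom one fails before any task references it.
assert api_exists("json", "dumps")
assert not api_exists("json", "phantom_method")
assert not api_exists("no_such_package", "anything")
```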
If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.
Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
Each step is one action (2-5 minutes).
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:phantom (recommended) or superpowers:exodus to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
- [ ] **Step 1: Write the failing test**
  ```python
  def test_specific_behavior():
      result = function(input)
      assert result == expected
  ```
- [ ] **Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
- [ ] **Step 3: Write minimal implementation**
  ```python
  def function(input):
      return expected
  ```
- [ ] **Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
- [ ] **Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
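Because steps use checkbox syntax, progress through a saved plan can be tallied mechanically; a small illustrative sketch:

```python
def plan_progress(plan_text: str) -> tuple[int, int]:
    """Count (done, total) checkbox steps in a plan's markdown."""
    done = total = 0
    for line in plan_text.splitlines():
        stripped = line.lstrip()
        if stripped.startswith(("- [x]", "- [X]")):
            done += 1
            total += 1
        elif stripped.startswith("- [ ]"):
            total += 1
    return done, total

sample = "- [x] **Step 1: Write the failing test**\n- [ ] **Step 2: Run test to verify it fails**"
assert plan_progress(sample) == (1, 2)
```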
Every step must contain the actual content an engineer needs. These are plan failures — never write them: "write tests for the component" with no test code, "implement the function" with no implementation, or "see spec for details" in place of the details.
After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself — not a subagent dispatch.
1. Spec coverage: Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.
2. Placeholder scan: Search your plan for red flags — any of the patterns from the "No Placeholders" section above. Fix them.
3. Type consistency: Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called `clearLayers()` in Task 3 but `clearFullLayers()` in Task 7 is a bug.
If you find issues, fix them inline. No need to re-review — just fix and move on. If you find a spec requirement with no task, add the task.
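The placeholder scan lends itself to automation; a sketch, with an illustrative (not exhaustive) pattern list:

```python
import re

# Illustrative red-flag phrases; extend with patterns from your own reviews.
PLACEHOLDERS = re.compile(r"as appropriate|TODO|see spec for details|implement the rest")

def find_placeholders(plan_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like placeholder steps."""
    return [
        (n, line)
        for n, line in enumerate(plan_text.splitlines(), start=1)
        if PLACEHOLDERS.search(line)
    ]

plan = "- [ ] Step 1: Write the failing test\n- [ ] Step 2: Implement as appropriate"
assert find_placeholders(plan) == [(2, "- [ ] Step 2: Implement as appropriate")]
```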
For complex implementations, structure the plan using SPARC phases — especially useful when requirements need clarification before coding starts.
SPARC = Specification → Pseudocode → Architecture → Refinement → Coding
SPARC is REQUIRED (not optional) when ANY of:
- Requirements are ambiguous or need clarification before coding starts
- The change spans multiple files or subsystems
- The algorithm or data flow is novel for this codebase

SPARC is optional (skip to Phase 5) when:
- The plan is simple (<3 files) and the approach is already proven
SPARC Plan Header:
# [Feature] Implementation Plan — SPARC Structure
## Phase 1: Specification
- [ ] Document exact inputs, outputs, constraints
- [ ] Define success criteria (measurable)
- [ ] List edge cases and failure modes
- [ ] Confirm with user before proceeding
## Phase 2: Pseudocode
- [ ] Write algorithm in plain language
- [ ] Identify data structures needed
- [ ] Mark decision points and conditionals
- [ ] Review for logical correctness (no code yet)
## Phase 3: Architecture
- [ ] Define file structure and module boundaries
- [ ] Define interfaces/types/contracts
- [ ] Choose libraries/patterns
- [ ] Note what changes existing files
## Phase 4: Refinement
- [ ] Add error handling to pseudocode
- [ ] Add performance considerations
- [ ] Add security review
- [ ] Finalize before coding
## Phase 5: Coding (Tasks)
[Standard task structure from here — TDD steps, exact code, commits]
Gate rule: Each SPARC phase is a checkpoint. Do not advance without completing the current phase. For simple plans (<3 files), skip directly to Phase 5 (standard task structure).
After saving the plan, offer execution choice:
"Plan complete and saved to docs/superpowers/plans/<filename>.md. Two execution options:
1. Subagent-Driven (recommended) - I dispatch a fresh subagent per task, review between tasks, fast iteration
2. Inline Execution - Execute tasks in this session using exodus, batch execution with checkpoints
Which approach?"
If Subagent-Driven chosen:
If Inline Execution chosen: