From nbl.superpowers
Generates detailed implementation plans from specs for multi-step tasks before coding, with bite-sized TDD steps, file structure maps, architecture overviews, tech stacks, and task dependencies.
`npx claudepluginhub icefrag/nbl-superpowers --plugin nbl.superpowers`

This skill uses the workspace's default tool permissions.
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are skilled developers, but that they know almost nothing about our toolset or problem domain, and don't assume they are well versed in good test design.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run in a dedicated worktree (created by brainstorming skill).
Save plans to: docs/nbl/plans/YYYY-MM-DD-<feature-name>.md
If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.
Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
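For illustration, a file-structure map for a hypothetical feature might look like the following (the paths and responsibilities are invented, not part of this skill):

```
src/auth/token.py         - new: issue and verify session tokens
src/auth/middleware.py    - modified: attach the authenticated user to each request
tests/auth/test_token.py  - new: token round-trip and expiry tests
docs/auth.md              - new: setup and configuration notes
```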
Each step is one action (2-5 minutes):
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use nbl.subagent-driven-development (recommended) or nbl.executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
Each task MUST include dependency information for parallel execution planning:
**Dependencies:** None | Task 1, Task 2, ...
- None if the task has no dependencies

**Parallelizable:** Yes | No (reason)
- Yes: the task can run in parallel with other independent tasks
- No (reason): the task must wait for its dependencies; explain why

### Task N: [Component Name]
**Status**
- [ ] Task complete
**Dependencies:** None | Task 1, Task 2
**Parallelizable:** Yes | No (reason if No)
- [ ] **Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```
- [ ] **Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
- [ ] **Step 3: Write minimal implementation**
```python
def function(input):
    return expected
```
- [ ] **Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
- [ ] **Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
Every step must contain the actual content an engineer needs. These are plan failures — never write them:
After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself — not a subagent dispatch.
1. Spec coverage: Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.
2. Placeholder scan: Search your plan for red flags — any of the patterns from the "No Placeholders" section above. Fix them.
3. Type consistency: Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called `clearLayers()` in Task 3 but `clearFullLayers()` in Task 7 is a bug.
If you find issues, fix them inline. No need to re-review — just fix and move on. If you find a spec requirement with no task, add the task.
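The placeholder scan in step 2 can be partially automated. A minimal sketch, assuming a handful of common red-flag patterns (the real pattern list should come from the "No Placeholders" section; the function name and patterns here are illustrative, not part of the skill):

```python
import re

# Illustrative red-flag patterns; extend with the plan failures
# your "No Placeholders" section actually forbids.
RED_FLAGS = [
    r"\bTODO\b", r"\bTBD\b", r"\bFIXME\b",
    r"\bplaceholder\b", r"implement (?:this )?later",
]

def scan_for_placeholders(plan_text):
    """Return (line_number, line) pairs that match any red-flag pattern."""
    hits = []
    for lineno, line in enumerate(plan_text.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in RED_FLAGS):
            hits.append((lineno, line.strip()))
    return hits
```

Run it over the saved plan file and fix every hit inline before moving on.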
After writing and self-reviewing the plan, first assess task complexity to determine if inline mode is appropriate. If not, analyze task dependencies to determine the execution mode.
Use inline mode (main agent executes directly) ONLY when the task is unambiguously and definitely simple. Subagents handle complex work that benefits from context isolation. When in doubt, prefer subagents.
Inline criteria: ALL of the following must be true
Mechanical change only: All changes are mechanical, no complex business logic understanding required
No exploration needed: You already know exactly where and how to change when writing the plan, no need to read existing code to understand context
Small scope: Total change touches 1-2 files, <100 lines of code total
Short chain: If there are multiple tasks, the longest dependency chain has ≤3 tasks
Guidelines:
- Use inline when all four conditions above are true.

```python
# Pseudocode
def determine_execution_mode(plan):
    # Step 1: Only use inline if ALL conditions are met
    if (plan.is_mechanical_change
            and not plan.requires_code_exploration
            and plan.total_files_touched <= 2
            and plan.total_lines_changed < 100
            and max_dependency_chain_length(plan.tasks) <= 3):
        return "inline"  # Definitely simple, main agent handles directly

    # Step 2: All other cases → analyze dependencies for serial/parallel
    levels = analyze_dependency_levels(plan.tasks)
    if all(len(level) == 1 for level in levels):
        return "serial"  # Pure chain dependency, sequential subagent execution
    else:
        return "parallel"  # Has parallelizable tasks, parallel subagent execution
```
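The two helpers the pseudocode calls are left undefined. Under the assumption that a plan's tasks can be represented as a mapping from task name to its list of dependencies (a representation this skill does not prescribe), they might be sketched as:

```python
def analyze_dependency_levels(tasks):
    """Group tasks into levels so every task's dependencies sit in earlier levels.

    tasks: dict mapping task name -> list of dependency names,
    e.g. {"Task 1": [], "Task 2": ["Task 1"]}.
    """
    levels = []
    remaining = dict(tasks)
    done = set()
    while remaining:
        # Tasks whose dependencies are all satisfied can run at this level
        ready = sorted(t for t, deps in remaining.items()
                       if all(d in done for d in deps))
        if not ready:
            raise ValueError("dependency cycle detected")
        levels.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return levels

def max_dependency_chain_length(tasks):
    """Longest dependency chain, measured in tasks (equals the level count)."""
    return len(analyze_dependency_levels(tasks))
```

For the parallel example below, `{"Task 1": [], "Task 2": [], "Task 3": [], "Task 4": ["Task 1", "Task 2", "Task 3"]}` yields the levels `[["Task 1", "Task 2", "Task 3"], ["Task 4"]]`; since not every level has exactly one task, the mode is parallel.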
✅ Inline (all conditions met):
Task 1: Fix typo in error message
→ All conditions satisfied → inline
Task 1: Remove unused import in UserController
Task 2: Delete unused UserService.getOldMethod()
→ All conditions satisfied → inline
❌ Not inline (any condition fails):
Task 1: Refactor authentication flow
→ Not mechanical change → skip inline
Task 1: Fix bug in caching logic
→ Requires exploration → skip inline
Task 1: Add config → Task 2: Update doc → Task 3: Add test → Task 4: Update integration
→ Chain length 4 (>3) → skip inline
Serial (chain dependency, complex):
Task 1: Define User entity model
Task 2: Create UserRepository with JPA queries
Task 3: Implement UserService with business logic
→ Not all inline conditions → serial
Parallel (multiple independent tasks, complex):
Task 1: Implement User authentication module
Task 2: Implement Order management module
Task 3: Implement Payment processing module
Task 4: Integration testing
→ Multiple independent complex tasks → parallel
Add to plan document footer:
---
**Execution Mode:** inline | serial | parallel
After saving and self-reviewing the plan, assess complexity then analyze task dependencies to determine the recommended execution mode. Then present all three options to the user for selection.
| Mode | Condition | Skill |
|---|---|---|
| inline | Low complexity (small change, clear scope, no exploration needed) | nbl.executing-plans |
| serial | Tasks form a chain (each depends on previous), complex work | nbl.subagent-driven-development |
| parallel | Multiple independent tasks exist, complex work | nbl.parallel-subagent-driven-development |
After determining the recommended mode, present all three options using AskUserQuestion:
"Plan complete and saved to docs/nbl/plans/<filename>.md. Three execution options:"
- inline: execute directly in the current session, no subagents
- serial: execute sequentially via subagents; tasks depend on each other
- parallel: execute in parallel via subagents; tasks are independent
The recommended mode (determined by dependency analysis) should be marked as "Recommended".
Inline mode: nbl.executing-plans skill
Serial mode: nbl.subagent-driven-development skill
Parallel mode: nbl.parallel-subagent-driven-development skill