Create detailed implementation plans through interactive research and iteration
Generates detailed implementation plans through interactive research and iteration. Use when you need a comprehensive technical specification before coding, especially for complex features requiring careful architecture decisions.
Install:
- `/plugin marketplace add astrosteveo/claude-code-plugins`
- `/plugin install superharness@astrosteveo-plugins`

Argument: `<path to research doc or ticket>`

You are tasked with creating detailed implementation plans through an interactive, iterative process. Work collaboratively with the user to produce high-quality technical specifications.
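For example, assuming this command is exposed as /superharness:plan, a typical invocation might be `/superharness:plan .harness/003-authentication/research.md`; the command name and path here are illustrative.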
If a research document exists (from /superharness:research), READ IT FIRST.

Before finalizing the plan, present 3 different approaches and let the user choose before writing the detailed plan.
If parameters provided (file path or reference): read them immediately and proceed with context gathering.
If no parameters provided, respond with:
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task description or reference to a research/ticket file
2. Any relevant context, constraints, or specific requirements
3. Links to related research (from /superharness:research)
I'll analyze this information and work with you to create a comprehensive plan.
Tip: Run /superharness:research first if you need to understand the codebase.
Wait for user input.
Read all mentioned files FULLY, including any research documents from /superharness:research.

If no research exists, spawn initial research first.
Analyze and verify understanding:
Present understanding and focused questions:
Based on the research/ticket, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case]
Questions that need clarification:
- [Specific technical question requiring human judgment]
- [Business logic clarification]
- [Design preference affecting implementation]
Only ask questions you genuinely cannot answer through research.
After getting clarifications, proceed to design options. If the user corrects a misunderstanding, update your understanding and re-verify it against the codebase before proceeding.
Present 3 design options with trade-offs:
Based on my research, here are 3 approaches:
## Option A: Minimal (Smallest Diff)
- Changes: [List specific files]
- Pros: Low risk, quick to implement
- Cons: [Trade-offs]
- Estimated files changed: X
## Option B: Clean Architecture
- Changes: [List specific files]
- Pros: Better long-term maintainability
- Cons: More changes, higher initial effort
- Estimated files changed: Y
## Option C: Pragmatic Balance
- Changes: [List specific files]
- Pros: Good balance of quality and speed
- Cons: [Trade-offs]
- Estimated files changed: Z
**My Recommendation**: Option [A/B/C] because [reasoning]
Which approach aligns best with your vision?
Once user selects approach:
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
Get feedback on structure BEFORE writing details.
Filename format: `.harness/NNN-feature-slug/plan.md`
Example: `.harness/003-authentication/plan.md`

Plan Template:
# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Architecture Choice
[Which option was selected and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[Specification of the desired end state and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
---
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
### Tasks (TDD Required):
For each task, follow RED-GREEN-REFACTOR (see the illustrative sketch after the task list):
1. Write failing test
2. Run test (verify RED)
3. Write minimal code to pass
4. Run test (verify GREEN)
5. Refactor if needed
- [ ] Task 1: [Description]
- [ ] Task 2: [Description]
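As a concrete illustration of this cycle (not part of the template itself), a minimal Python/pytest sketch might look like the following; the `slugify` feature, module name, and assertions are hypothetical, and real plans should use the project's own language and test runner.

```python
# test_slugify.py -- written before the implementation, so the first run fails (RED)
from slugify_util import slugify  # hypothetical module under test


def test_lowercases_and_hyphenates():
    assert slugify("User Authentication") == "user-authentication"


def test_strips_surrounding_whitespace():
    assert slugify("  Plan Draft  ") == "plan-draft"
```

```python
# slugify_util.py -- minimal code to make the tests pass (GREEN); refactor afterwards
def slugify(text: str) -> str:
    # Lowercase, trim, and join whitespace-separated words with hyphens.
    return "-".join(text.strip().lower().split())
```

Running `pytest` between each step confirms the RED and GREEN states before moving on.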
### Success Criteria:
#### Automated Verification:
- [ ] Tests pass: `make test` or equivalent
- [ ] Type checking passes
- [ ] Linting passes
#### Manual Verification:
- [ ] Feature works as expected when tested
- [ ] No regressions in related features
**Human Gate**: After automated verification passes, pause for manual confirmation before proceeding to next phase.
---
## Phase 2: [Descriptive Name]
[Similar structure with TDD tasks and success criteria...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
## References
- Research: `.harness/NNN-feature-slug/research.md`
- Similar implementation: `[file:line]`
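For reference, these paths assume a per-feature directory under `.harness/`, e.g. (using the example slug from above; layout illustrative):

```
.harness/
└── 003-authentication/
    ├── research.md   # produced by /superharness:research
    └── plan.md       # produced by this command
```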
Present the draft plan:
I've created the implementation plan at:
`.harness/NNN-feature-slug/plan.md`
Please review it and confirm:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
Ready to iterate with /superharness:iterate or proceed to /superharness:implement.
Iterate based on feedback:
Continue refining until user approves the plan.
Be Skeptical:
Be Interactive:
Be Thorough:
TDD in Every Phase:
No Open Questions in Final Plan:
Always separate success criteria into two categories:
Automated Verification (can be run by agents):
`make test`, `npm run lint`, etc.

Manual Verification (requires human testing): feature behavior, absence of regressions in related features, and anything an agent cannot confirm on its own.
Related commands: /superharness:research, /superharness:iterate, /superharness:implement