Create detailed implementation plans through interactive research and iteration
Creates detailed implementation plans through interactive research and iteration.
/plugin marketplace add nstrayer/claude-planning/plugin install bootandshoe@bootandshoe-claude-planningopusYou are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
When this command is invoked:
Check if parameters were provided:
If no parameters provided, respond with:
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task description (or reference to a requirements file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations
I'll analyze this information and work with you to create a comprehensive plan.
Tip: You can also invoke this command with a task file directly: `/create_plan path/to/requirements.md`
For deeper analysis, try: `/create_plan think deeply about path/to/requirements.md`
Then wait for the user's input.
Read all mentioned files immediately and FULLY:
Spawn initial research tasks to gather context: Before asking the user any questions, use specialized agents to research in parallel:
These agents will:
Read all files identified by research tasks:
Analyze and verify understanding:
Present informed understanding and focused questions:
Based on the task and my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
Only ask questions that you genuinely cannot answer through code investigation.
After getting initial clarifications:
If the user corrects any misunderstanding:
Create a research todo list using TodoWrite to track exploration tasks
Spawn parallel sub-tasks for comprehensive research:
For deeper investigation:
For historical context:
Each agent knows how to:
Wait for ALL sub-tasks to complete before proceeding
During research and planning, proactively suggest web research when:
Unfamiliar Technologies: If the task involves technologies, libraries, or frameworks you're not confident about:
I notice this involves [technology]. Would you like me to search the web for
current best practices for [specific aspect]?
Best Practices Uncertainty: When you're unsure about the recommended approach:
There are multiple ways to implement [feature]. Would you like me to research
current industry best practices for [specific pattern]?
Version-Specific Information: When implementation might depend on specific versions:
The approach may vary based on [library] version. Would you like me to check
the latest documentation for [specific feature]?
Security Considerations: When the feature has security implications:
This involves [sensitive area]. Would you like me to research current security
best practices for [specific concern]?
When suggesting research:
Example:
I see we're implementing webhook signature verification. I have general knowledge
about this, but webhook security practices evolve. Would you like me to search
the web for current best practices for webhook signature verification in 2026?
This could help ensure we're using the most secure approach.
Based on my research, here's what I found:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
Once aligned on approach:
Create initial plan outline:
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
Get feedback on structure before writing details
After structure approval:
thoughts/shared/plans/YYYY-MM-DD-description.md
YYYY-MM-DD-description.md where:
2025-01-08-improve-error-handling.md---
type: implementation-plan
title: "[Feature/Task Name]"
created: YYYY-MM-DD
status: draft
---
# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A Specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
```[language]
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`
#### Manual Verification:
- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features
**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human that the manual testing was successful before proceeding to the next phase.
---
## Phase 2: [Descriptive Name]
[Similar structure with both automated and manual success criteria...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]
## Performance Considerations
[Any performance implications or optimizations needed]
## Migration Notes
[If applicable, how to handle existing data/systems]
## References
- Original requirements: `path/to/requirements.md`
- Related research: `thoughts/shared/research/[relevant].md`
- Similar implementation: `[file:line]`
Present the draft plan location:
I've created the initial implementation plan at:
`thoughts/shared/plans/YYYY-MM-DD-description.md`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
Iterate based on feedback - be ready to:
Continue refining until the user is satisfied
Be Skeptical:
Be Interactive:
Be Thorough:
make whenever possible - for example make -C bootandshoe-wui check instead of cd bootandshoe-wui && bun run fmtBe Practical:
Track Progress:
No Open Questions in Final Plan:
Always separate success criteria into two categories:
Automated Verification (can be run by execution agents):
make test, npm run lint, etc.Manual Verification (requires human testing):
Format example:
### Success Criteria:
#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
When spawning research sub-tasks:
Example of spawning multiple tasks:
# Spawn these tasks concurrently:
tasks = [
Task("Research database schema", db_research_prompt),
Task("Find API patterns", api_research_prompt),
Task("Investigate UI components", ui_research_prompt),
Task("Check test patterns", test_research_prompt)
]
User: /create_plan
Assistant: I'll help you create a detailed implementation plan...
User: We need to add parent-child tracking for Claude sub-tasks. See requirements.md
Assistant: Let me read that requirements file completely first...
[Reads file fully]
Based on the requirements, I understand we need to track parent-child relationships for sub-task events. Before I start planning, I have some questions...
[Interactive process continues...]