Create detailed implementation plans through interactive research and iteration
`/plugin marketplace add hoblin/claude-ruby-marketplace`
`/plugin install rpi@claude-ruby-marketplace`

Model: opus

You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
When this command is invoked:
Check if parameters were provided:
If no parameters provided, respond with:
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations
I'll analyze this information and work with you to create a comprehensive plan.
Tip: You can also invoke this command with a ticket file directly: `/rpi:create_plan thoughts/username/tickets/eng_1234.md`
For deeper analysis, try: `/rpi:create_plan think deeply about thoughts/username/tickets/eng_1234.md`
Then wait for the user's input.
Read all mentioned files (if any) immediately and FULLY:
- For example: `thoughts/username/tickets/eng_1234.md`

Spawn initial research tasks to gather context: before asking the user any questions, use specialized agents to research in parallel:
These agents will:
Read all files identified by research tasks:
Analyze and verify understanding:
Present informed understanding and focused questions:
Based on the ticket and my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
Only ask questions that you genuinely cannot answer through code investigation.
After getting initial clarifications:
If the user corrects any misunderstanding:
Create a research todo list using TodoWrite to track exploration tasks
Spawn parallel sub-tasks for comprehensive research:
For deeper investigation:
For historical context:
For related tickets:
Each agent knows how to:
Wait for ALL sub-tasks to complete before proceeding
Present findings and design options:
Based on my research, here's what I found:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
Once aligned on approach:
Phase = Commit
Each phase represents one atomic commit - the smallest meaningful change that leaves the codebase working.
Atomicity test: if your phase description needs "and", split it; if a change and its specs are split across phases, merge them into one.
For testable code, follow RGRC: Red → Green → Refactor → Commit. Your phases should mirror your test cases.
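As a hypothetical illustration (the phase breakdown, ticket number, and commit messages below are invented for this example, loosely based on the parent-child tracking ticket used later in this document), a sub-task tracking feature might split into three atomic phases, each ending in one commit:

```
Phase 1 → commit "ENG-1234: add parent_id to sub-task events"
  Red:      failing test asserting sub-task events carry a parent_id
  Green:    schema migration plus the minimal daemon change to populate it
  Refactor: tidy naming, then commit

Phase 2 → commit "ENG-1234: expose parent-child links in the events API"
  Red → Green → Refactor on the API layer, then commit

Phase 3 → commit "ENG-1234: display sub-task hierarchy in the UI"
  Red → Green → Refactor on the UI, then commit
```

Each phase passes the atomicity test on its own: no description needs "and", and every commit leaves the codebase working.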
Create initial plan outline:
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
Get feedback on structure before writing details
After structure approval:
Write the plan to `thoughts/shared/plans/YYYY-MM-DD-ENG-XXXX-description.md`, where:
- `YYYY-MM-DD` is today's date
- `ENG-XXXX` is the ticket number (omit it if there is no ticket)
- `description` is a brief kebab-case summary

Examples: `2025-01-08-ENG-1478-parent-child-tracking.md`, `2025-01-08-improve-error-handling.md`

# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Commit
`<ticket||type>: <imperative summary>`
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
```[language]
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`
#### Manual Verification:
- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features
**Implementation Note**: After completing this phase and confirming that all automated verification passes, pause here and ask the human to confirm that manual testing was successful before proceeding to the next phase.
---
## Phase 2: [Descriptive Name]
[Similar structure with both automated and manual success criteria...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]
## Performance Considerations
[Any performance implications or optimizations needed]
## Migration Notes
[If applicable, how to handle existing data/systems]
## References
- Original ticket: `thoughts/username/tickets/eng_XXXX.md`
- Related research: `thoughts/shared/research/[relevant].md`
- Similar implementation: `[file:line]`
Sync the thoughts directory:
Use `thoughts-sync` to sync the newly created plan.

Present the draft plan location:
I've created the initial implementation plan at:
`thoughts/shared/plans/YYYY-MM-DD-ENG-XXXX-description.md`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
Iterate based on feedback - be ready to:
- Run `thoughts-sync` again after any updates

Continue refining until the user is satisfied.
Be Skeptical:
Be Interactive:
Be Thorough:
- Use `make` whenever possible - for example `make -C humanlayer-wui check` instead of `cd humanlayer-wui && bun run fmt`

Be Practical:
Track Progress:
No Open Questions in Final Plan:
Always separate success criteria into two categories:
Automated Verification (can be run by execution agents):
- `make test`, `npm run lint`, etc.

Manual Verification (requires human testing):
Format example:
### Success Criteria:
#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
When spawning research sub-tasks:
- `humanlayer-wui/` directory
- `hld/` directory

Example of spawning multiple tasks:
```python
# Spawn these tasks concurrently:
tasks = [
    Task("Research database schema", db_research_prompt),
    Task("Find API patterns", api_research_prompt),
    Task("Investigate UI components", ui_research_prompt),
    Task("Check test patterns", test_research_prompt)
]
```
User: /implementation_plan
Assistant: I'll help you create a detailed implementation plan...
User: We need to add parent-child tracking for Claude sub-tasks. See thoughts/username/tickets/ENG-1234.md
Assistant: Let me read that ticket file completely first...
[Reads file fully]
Based on the ticket, I understand we need to track parent-child relationships for Claude sub-task events in the hld daemon. Before I start planning, I have some questions...
[Interactive process continues...]