You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
This command is for planning only. You must NOT implement anything or create files outside `working-notes/`. Your only outputs are questions to the user and the final plan document. Implementation happens separately via `/implement-plan`.
Arguments: $ARGUMENTS
When this command is invoked:
Check if arguments were provided (see $ARGUMENTS above):
If $ARGUMENTS is not empty and contains file paths, ticket references, or task descriptions, skip the default message below. Input may be:
- file paths (e.g., `working-notes/...`, `notes/...`)
- file references (e.g., `@working-notes/...`)
- ticket identifiers (e.g., `ABC-1234`, `PROJ-567`)

If $ARGUMENTS is empty, first check for existing documents:
a. Find recent documents:
`ls -t working-notes/*.md 2>/dev/null | head -2`

b. Present options to the user:
- The most recent documents found (e.g., `2025-01-15_research_auth-flow.md` or `2025-01-14_plan_feature-x.md`), with their full paths (e.g., `working-notes/2025-01-15_research_auth-flow.md`)
- "Other", for providing the task details directly

c. Handle the user's selection:
If a document was selected:
I'll create an implementation plan based on [filename].
Let me read through the document to understand what we're building...
If "Other" was selected (or no docs found):
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations
I'll analyze this information and work with you to create a comprehensive plan.
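The branching above (direct input vs. picking a recent document) can be sketched in shell. This is only an illustration: the real `$ARGUMENTS` substitution is performed by Claude Code, and the example value is hypothetical.

```shell
# Illustration only: how the argument check branches.
# ARGUMENTS stands in for the substituted $ARGUMENTS value.
ARGUMENTS="ABC-1234 add parent-child tracking"   # hypothetical direct input

if [ -n "$ARGUMENTS" ]; then
  mode="direct"
  echo "Planning directly from: $ARGUMENTS"
else
  mode="pick-document"
  # No arguments: offer the most recent working notes as starting points.
  ls -t working-notes/*.md 2>/dev/null | head -2
fi
```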
If a Jira ticket number is given, use the workflow-tools:jira agent to get information about the ticket.
Read all mentioned files immediately and FULLY:
Spawn initial research tasks to gather context: Before asking the user any questions, use specialized agents to research in parallel:
- `workflow-tools:codebase-locator` agent to find all files related to the ticket/task
- `workflow-tools:codebase-analyzer` agent to understand how the current implementation works
- `workflow-tools:notes-locator` agent to find any existing notes documents about this feature

These agents will:
Read all files identified by research tasks:
Analyze and verify understanding:
Present informed understanding and focused questions:
Based on the ticket and my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
Only ask questions that you cannot answer through code investigation. Use the AskUserQuestion tool to ask the user questions.
After getting initial clarifications:
If the user corrects any misunderstanding, do not simply accept the correction; verify the corrected understanding against the code before proceeding.
Create a research todo list using TodoWrite to track exploration tasks
Spawn parallel sub-tasks for comprehensive research:
For deeper investigation:
- `workflow-tools:codebase-locator` agent to find more specific files (e.g., "find all files that handle [specific component]")
- `workflow-tools:codebase-analyzer` agent to understand implementation details (e.g., "analyze how [system] works")
- `workflow-tools:codebase-pattern-finder` agent to find similar features we can model after

For historical context:

- `workflow-tools:notes-locator` agent to find any research, plans, or decisions about this area
- `workflow-tools:notes-analyzer` agent to extract key insights from the most relevant documents

Each agent knows how to:
Wait for ALL sub-tasks to complete before proceeding
Present findings and design options:
Based on my research, here's what I found:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
Once aligned on approach:
Create initial plan outline:
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
Share this plan outline with the user and get approval before writing details
After structure approval:
Write the plan to `working-notes/{YYYY-MM-DD}_plan_[descriptive-name].md`, using `date '+%Y-%m-%d'` for the timestamp in the filename. Generate the frontmatter with `${CLAUDE_PLUGIN_ROOT}/skills/frontmatter/workflow-tools-frontmatter.sh`:
---
date: [Current date and time with timezone in ISO format]
git_commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name]
topic: "[Feature/Task Name]"
tags: [plans, relevant-component-names]
status: complete
last_updated: [Current date in YYYY-MM-DD format]
---
# [Feature/Task Name] Implementation Plan
## Checklist [to be checked off as the phases are completed]
- [ ] Phase 1: [Descriptive Name]
- [ ] Phase 2: [Descriptive Name]
- [ ] Phase 3: [Descriptive Name]
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
```[language]
// Specific code to add/modify
```
### Success Criteria:

#### Automated Verification:
- [ ] Migration applies cleanly: `make migrate`
- [ ] Component tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`

#### Manual Verification:
[Manual checks for this phase]

## Phase 2: [Descriptive Name]

[Similar structure with both automated and manual success criteria...]
## Performance Considerations
[Any performance implications or optimizations needed]

## Migration Notes
[If applicable, how to handle existing data/systems]

## References
- Related research: `working-notes/[relevant].md`
- Similar implementation: `[file:line]`
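The frontmatter fields in the template above map to standard `git` and `date` commands. As a sketch (the `workflow-tools-frontmatter.sh` script is the source of truth; treat these mappings, and the example plan name, as assumptions):

```shell
# Sketch: plausible sources for the frontmatter values (assumed, not authoritative).
plan_date="$(date -Iseconds)"                                             # date
commit="$(git rev-parse HEAD 2>/dev/null || echo unknown)"                # git_commit
branch="$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo unknown)"   # branch
repo="$(basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")"  # repository

# The filename uses a day-level timestamp; note the leading '+' in the format string.
filename="working-notes/$(date '+%Y-%m-%d')_plan_my-feature.md"           # name is hypothetical
echo "$filename"
```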
### Step 5: Finalize Document Quality
This step must be completed before presenting the document to the user.
#### 5.1: Check for External Review Configuration
Check if external review is configured:
```bash
echo "${CLAUDE_EXTERNAL_REVIEW_COMMAND:-NOT_SET}"
```

If the variable shows `NOT_SET` or is empty, skip external review and proceed to presenting the document.
If external review IS configured:
The environment variable contains one or more review commands separated by `: ` (colon-space).
Examples:
- One reviewer: `opencode --model github-copilot/gpt-5 run`
- Two reviewers: `opencode --model github-copilot/gpt-5 run: opencode --model deepseek/deepseek-v3 run`

For each review command (process them sequentially):
Extract the command (split on the `: ` delimiter if multiple)
Run the external review: Execute the command with this review prompt:
```
${COMMAND} "Review the document at [DOCUMENT_PATH] and provide detailed feedback on:

1. Technical accuracy and completeness of the implementation approach
2. Alignment with project standards (check CLAUDE.md, package.json, configs, existing patterns)
3. Missing technical considerations (error handling, rollback, monitoring, security)
4. Missing behavioral considerations (user experience, edge cases, backward compatibility)
5. Missing strategic considerations (deployment strategy, maintenance burden, alternative timing)
6. Conflicts with established patterns in the codebase
7. Risk analysis completeness
8. Testing strategy thoroughness

Be specific about what's missing or incorrect. Cite file paths and line numbers where relevant. Focus on actionable improvements that reduce implementation risk."
```
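Splitting the variable into individual review commands can be sketched in bash; the example value below is hypothetical, and the split deliberately uses the two-character `: ` delimiter so colons inside a command (e.g., in model names or URLs without a trailing space) are left intact.

```shell
# Split the external review configuration on the ": " (colon-space) delimiter.
CLAUDE_EXTERNAL_REVIEW_COMMAND='opencode --model github-copilot/gpt-5 run: opencode --model deepseek/deepseek-v3 run'

commands=()
rest="$CLAUDE_EXTERNAL_REVIEW_COMMAND"
while [[ "$rest" == *': '* ]]; do
  commands+=("${rest%%: *}")   # everything before the first ": "
  rest="${rest#*: }"           # everything after it
done
commands+=("$rest")            # the final (or only) command

printf '%s\n' "${commands[@]}"
```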
Analyze feedback with extreme skepticism:
Silently address ONLY critical issues:
If multiple reviewers: Each subsequent reviewer sees the updated document from the previous review
Do NOT present reviews to the user - this is an internal quality check.
The plan document has been written and quality-checked. Ready to present to user.
I've created the initial implementation plan at:
`working-notes/[filename].md`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
Continue refining until the user is satisfied
When the user approves the plan:
Suggest the user run `/implement-plan [path-to-plan]` to execute it.

For success criteria, prefer project-level commands (`make`, `yarn`, `just`) whenever possible. Always separate success criteria into two categories:

1. **Automated Verification**: checks that can be run with commands (`make test`, `npm run lint`, etc.)
2. **Manual Verification**: checks that require human testing

Format example:
### Success Criteria:
#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
When spawning research sub-tasks:
Example of spawning multiple tasks:
```
# Spawn these tasks concurrently:
tasks = [
    Task("Research database schema", db_research_prompt),
    Task("Find API patterns", api_research_prompt),
    Task("Investigate UI components", ui_research_prompt),
    Task("Check test patterns", test_research_prompt)
]
```
User: /create-plan
Assistant: I'll help you create a detailed implementation plan...
User: We need to add parent-child tracking for Claude sub-tasks. See Jira ABC-1234
Assistant: Let me read that Jira work item completely using the Jira subagent first...
Based on the work item I understand we need to track parent-child relationships for Claude sub-task events in the old daemon. Before I start planning, I have some questions...
[Interactive process continues...]
Remember: plans live in `working-notes/`, and implementation happens separately by running `/implement-plan` with the plan path.