# /create_plan

Creates detailed implementation plans through an interactive, research-driven process with phased execution.
Install the plugin:

- `/plugin marketplace add coalesce-labs/catalyst`
- `/plugin install catalyst-dev@catalyst`

This command uses ticket references like PROJ-123. Replace PROJ with your Linear team's ticket prefix:

- Check `.claude/config.json` if available (a hypothetical lookup is sketched below)
- Generic placeholder: TICKET-XXX
- Examples: ENG-123, FEAT-456, BUG-789

You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
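As a minimal sketch of the prefix lookup mentioned above, assuming a hypothetical `ticketPrefix` key in `.claude/config.json` (the real schema may differ) and that `jq` is installed:

```bash
# Hypothetical sketch: resolve the team's ticket prefix, falling back to TICKET.
# The "ticketPrefix" key is an assumed field name, not a documented schema.
if [[ -f ".claude/config.json" ]] && command -v jq >/dev/null 2>&1; then
  TICKET_PREFIX=$(jq -r '.ticketPrefix // "TICKET"' .claude/config.json)
else
  TICKET_PREFIX="TICKET"
fi
echo "Using ticket references like ${TICKET_PREFIX}-123"
```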
Before executing, verify all required tools and systems:
# 1. Validate thoughts system (REQUIRED)
if [[ -f "scripts/validate-thoughts-setup.sh" ]]; then
./scripts/validate-thoughts-setup.sh || exit 1
else
# Inline validation if script not found
if [[ ! -d "thoughts/shared" ]]; then
echo "ā ERROR: Thoughts system not configured"
echo "Run: ./scripts/humanlayer/init-project.sh . {project-name}"
exit 1
fi
fi
# 2. Validate plugin scripts
if [[ -f "${CLAUDE_PLUGIN_ROOT}/scripts/check-prerequisites.sh" ]]; then
"${CLAUDE_PLUGIN_ROOT}/scripts/check-prerequisites.sh" || exit 1
fi
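If validation fails, the error message above already names the fix; a hypothetical invocation (the project name is a placeholder):

```bash
# One-time thoughts-system setup; "my-project" is a placeholder project name.
./scripts/humanlayer/init-project.sh . my-project
```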
STEP 1: Check for recent research (OPTIONAL)
IMMEDIATELY run this bash script to find recent research that might be relevant:
# Find recent research that might inform this plan
if [[ -f "${CLAUDE_PLUGIN_ROOT}/scripts/workflow-context.sh" ]]; then
RECENT_RESEARCH=$("${CLAUDE_PLUGIN_ROOT}/scripts/workflow-context.sh" recent research)
if [[ -n "$RECENT_RESEARCH" ]]; then
echo "š” Found recent research: $RECENT_RESEARCH"
echo ""
fi
fi
STEP 2: Gather initial input
After checking for research, follow this logic:
If the user provided parameters (a file path or ticket reference): skip the prompt below and proceed directly to reading the referenced files.

If no parameters were provided, respond with:
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations
If RECENT_RESEARCH exists, add:
💡 I found recent research: $RECENT_RESEARCH
Would you like me to use this as context for the plan?
Continue with:
I'll analyze this information and work with you to create a comprehensive plan.
Tip: You can also invoke this command with a ticket file directly: `/create_plan thoughts/allison/tickets/proj_123.md`
For deeper analysis, try: `/create_plan think deeply about thoughts/allison/tickets/proj_123.md`
Then wait for the user's input.
Read all mentioned files immediately and FULLY (e.g., `thoughts/allison/tickets/proj_123.md`).

Spawn initial research tasks to gather context. Before asking the user any questions, use specialized agents to research in parallel:
These agents will:
Read all files identified by research tasks:
Analyze and verify understanding:
Present informed understanding and focused questions:
Based on the ticket and my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
Only ask questions that you genuinely cannot answer through code investigation.
After getting initial clarifications:
If the user corrects any misunderstanding:
Create a research todo list using TodoWrite to track exploration tasks
Spawn parallel sub-tasks for comprehensive research:
For local codebase:
For external research:
For historical context:
For related tickets:
Each agent knows how to:
Wait for ALL sub-tasks to complete before proceeding
Present findings and design options:
Based on my research, here's what I found:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
Once aligned on approach:
Create initial plan outline:
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
Get feedback on structure before writing details
After structure approval:
Write the plan to:

thoughts/shared/plans/YYYY-MM-DD-PROJ-XXXX-description.md

The filename follows `YYYY-MM-DD-PROJ-XXXX-description.md`, where `YYYY-MM-DD` is today's date, `PROJ-XXXX` is the ticket reference (omitted when there is no ticket), and `description` is a short kebab-case summary.
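The path can be built mechanically; a minimal sketch, assuming the ticket ID and description are already known (the `PLAN_FILE` and `TICKET_ID` variable names are illustrative and match the tracking snippet later in this document):

```bash
# Hypothetical sketch: build the plan path from today's date, ticket, and summary.
TICKET_ID="PROJ-123"                # leave empty if there is no ticket
DESCRIPTION="parent-child-tracking" # short kebab-case summary
DATE=$(date +%Y-%m-%d)
if [[ -n "$TICKET_ID" ]]; then
  PLAN_FILE="thoughts/shared/plans/${DATE}-${TICKET_ID}-${DESCRIPTION}.md"
else
  PLAN_FILE="thoughts/shared/plans/${DATE}-${DESCRIPTION}.md"
fi
echo "$PLAN_FILE"  # e.g. thoughts/shared/plans/2025-01-08-PROJ-123-parent-child-tracking.md
```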
Examples:

- `2025-01-08-PROJ-123-parent-child-tracking.md`
- `2025-01-08-improve-error-handling.md` (no ticket)

Use this template:

# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
```[language]
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`
#### Manual Verification:
- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features
---
## Phase 2: [Descriptive Name]
[Similar structure with both automated and manual success criteria...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]
## Performance Considerations
[Any performance implications or optimizations needed]
## Migration Notes
[If applicable, how to handle existing data/systems]
## References
- Original ticket: `thoughts/allison/tickets/proj_XXXX.md`
- Related research: `thoughts/shared/research/[relevant].md`
- Similar implementation: `[file:line]`
Sync the thoughts directory: run `humanlayer thoughts sync` to sync the newly created plan.

Track in workflow context:
After saving the plan document, add it to workflow context:
if [[ -f "${CLAUDE_PLUGIN_ROOT}/scripts/workflow-context.sh" ]]; then
"${CLAUDE_PLUGIN_ROOT}/scripts/workflow-context.sh" add plans "$PLAN_FILE" "${TICKET_ID}"
fi
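A later session can retrieve this entry the same way STEP 1 retrieves research. A sketch, assuming the script supports a `recent plans` subcommand symmetric to the `recent research` call shown earlier (an assumption, not documented behavior):

```bash
# Hypothetical: look up the most recently recorded plan from workflow context.
if [[ -f "${CLAUDE_PLUGIN_ROOT}/scripts/workflow-context.sh" ]]; then
  RECENT_PLAN=$("${CLAUDE_PLUGIN_ROOT}/scripts/workflow-context.sh" recent plans)
  [[ -n "$RECENT_PLAN" ]] && echo "Most recent plan: $RECENT_PLAN"
fi
```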
Check context usage and present plan:
Monitor your context and present:
✅ Implementation plan created!
**Plan location**: `thoughts/shared/plans/YYYY-MM-DD-PROJ-XXXX-description.md`
## 📊 Context Status
Current usage: {X}% ({Y}K/{Z}K tokens)
{If >60%}:
⚠️ **Context Alert**: We're at {X}% context usage.
**Recommendation**: Clear context before implementation phase.
**Why?** The implementation phase will:
- Load the complete plan file
- Read multiple source files
- Track progress with TodoWrite
- Benefit from fresh context for optimal performance
**What to do**:
1. ✅ Review the plan (read the file above)
2. ✅ Close this session (clear context)
3. ✅ Start fresh session in worktree
4. ✅ Run `/implement-plan {plan-path}`
This is normal! Context is meant to be cleared between phases.
{If <60%}:
✅ Context healthy ({X}%).
---
Please review the plan and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
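(Aside: the {X}% figure is just used tokens over the context limit; the numbers below are invented for illustration.)

```bash
# Hypothetical illustration of the context-usage math; values are made up.
USED_TOKENS=120000
LIMIT_TOKENS=200000
echo "Current usage: $(( USED_TOKENS * 100 / LIMIT_TOKENS ))% (${USED_TOKENS}/${LIMIT_TOKENS} tokens)"
```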
Iterate based on feedback; be ready to:

- Run `humanlayer thoughts sync` again after changes
- Continue refining until the user is satisfied
Final context check after approval:
Be Skeptical:
Be Interactive:
Be Thorough:
- Use `make` targets whenever possible; for example, `make -C humanlayer-wui check` instead of `cd humanlayer-wui && bun run fmt`

Be Practical:
Track Progress:
No Open Questions in Final Plan:
Always separate success criteria into two categories:
Automated Verification (can be run by execution agents):

- Commands such as `make test`, `npm run lint`, etc.

Manual Verification (requires human testing):
Format example:
### Success Criteria:
#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
When spawning research sub-tasks, scope each one to a specific area, e.g., the `humanlayer-wui/` directory or the `hld/` directory.

Example of spawning multiple tasks:
# Spawn these tasks concurrently:
tasks = [
Task("Research database schema", db_research_prompt),
Task("Find API patterns", api_research_prompt),
Task("Investigate UI components", ui_research_prompt),
Task("Check test patterns", test_research_prompt)
]
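These `Task(...)` calls are illustrative pseudocode, not a real API signature. The intent is to issue all research tasks together so they run concurrently, then wait for every one to complete before synthesizing findings, as required above.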
User: /create_plan
Assistant: I'll help you create a detailed implementation plan...
User: We need to add parent-child tracking for Claude sub-tasks. See thoughts/allison/tickets/proj_456.md
Assistant: Let me read that ticket file completely first...
[Reads file fully]
Based on the ticket, I understand we need to track parent-child relationships for Claude sub-task events in the hld daemon. Before I start planning, I have some questions...
[Interactive process continues...]