Creates concise, executable implementation plans for a solo developer working with AI agents. Validates assumptions, avoids timelines, and focuses on actionable steps with clear human/AI delegation.
Creates executable implementation plans for AI-assisted solo development. Triggers when you request a plan for a technical task, starting with clarifying questions to validate assumptions before generating AI-ready action steps.
/plugin marketplace add NotMyself/Implementation-plans
/plugin install notmyself-implementation-plans@NotMyself/Implementation-plans
This skill inherits all available tools. When active, it can use any tool Claude has access to.
ai-delegation-patterns.md
examples/bad-stakeholder-heavy.md
examples/good-database-migration.md
examples/rag-pipeline-example.md
validation-checklist.md
This skill guides creation of implementation plans for solo implementers working with AI agents (not teams requiring approval).
Before writing any plan, ask clarifying questions to validate assumptions.
Structure:
## [Task Name]
### Verification Steps (if assumptions can't be validated upfront)
1. Verify [assumption] by [method]
### Implementation
1. Have AI [specific action]
- Context: [relevant details]
- Expected output: [what to verify]
2. Review [output] for [specific concerns]
3. Manually verify [critical check]
### Dependencies
- [What must complete first]
### 🚨 Unvalidated Assumptions
- [List assumptions that need verification during execution]
Keep total plan under 500 words. If you need more detail, split into main plan + technical appendix.
Building multi-step plans on unverified assumptions that collapse when reality doesn't match.
Fix: Front-load verification or flag assumptions explicitly.
Declaring root cause without seeing actual data or system state.
Fix: Include data inspection steps before solution steps (see the sketch below).
Including approval gates, week-based timelines, team alignment meetings.
Fix: Assume the work is approved and sequence steps by technical dependency only.
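To illustrate what a data inspection step can look like, here is a minimal sketch; the `jobs` table and `status` column are assumptions for illustration only, and a real plan should inspect whatever table is actually implicated.

```sql
-- Hypothetical inspection step: look at the actual data before declaring a root cause.
-- The "jobs" table and "status" column are assumed for illustration only.
SELECT status, COUNT(*) AS row_count
FROM jobs
GROUP BY status
ORDER BY row_count DESC;
```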
For detailed guidance, reference these files:
Validation practices: validation-checklist.md
AI delegation: ai-delegation-patterns.md
Examples: examples/
Load these only when additional context would help create a better plan.
Good step: "Have AI generate migration script adding preferences JSONB column with rollback"
Bad step: "Update the database" (too vague)
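A step phrased that specifically gives the AI enough context to produce something like the sketch below. This is only an illustration assuming PostgreSQL and a `users` table; neither is specified by the skill.

```sql
-- Hypothetical output of the "good step" above, assuming PostgreSQL and a "users" table.
-- Up migration:
ALTER TABLE users ADD COLUMN preferences JSONB DEFAULT '{}'::jsonb;

-- Rollback:
ALTER TABLE users DROP COLUMN preferences;
```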
Good verification: "Check current schema: SELECT column_name..."
Bad verification: "Ensure database is ready" (unclear how)
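A concrete verification query might look like the sketch below, assuming PostgreSQL's `information_schema` and the same hypothetical `users` table.

```sql
-- Hypothetical schema check, assuming PostgreSQL and a "users" table.
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'users';
```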
Good flag: "🚨 Verify: Azure AI Search tier supports semantic ranking"
Bad flag: Assuming it works without checking