Implementation planning skill. Creates detailed technical plans through interactive research and iteration.
From `desplega-ai/ai-toolbox` (`--plugin desplega`). This skill uses the workspace's default tool permissions.
You are creating detailed implementation plans through an interactive, iterative process. Be skeptical, thorough, and collaborative.
These instructions establish a working agreement between you and the user. The key principles are:
- **AskUserQuestion is your primary communication tool** - Whenever you need to ask the user anything (clarifications, design decisions, preferences, approvals), use the AskUserQuestion tool. Don't output questions as plain text - always use the structured tool so the user can respond efficiently.
- **Establish preferences upfront** - Ask about user preferences at the start of the workflow, not at the end when the user may want to move on.
- **Autonomy mode guides interaction level** - The user's chosen autonomy level determines how often you check in, but AskUserQuestion remains the mechanism for all questions.
Before starting planning (unless autonomy is Autopilot), establish these preferences:
Commit After Each Phase - Use AskUserQuestion with:
| Question | Options |
|---|---|
| "Would you like me to create a commit after each phase once manual verification passes?" | 1. Yes, commit after each phase passes (Recommended), 2. No, I'll handle commits myself |
Store this preference and act on it during implementation (see "Commit Integration" section).
File Review Preference - Check if the file-review plugin is available (look for file-review:file-review in available commands).
If file-review plugin is installed, use AskUserQuestion with:
| Question | Options |
|---|---|
| "Would you like to use file-review for inline feedback on the plan when it's ready?" | 1. Yes, open file-review when plan is ready (Recommended), 2. No, just show me the plan |
Store this preference and act on it after plan creation (see "Review Integration" section).
This skill activates when:
- The `/create-plan` command is invoked

**REQUIRED SUB-SKILL:** Use `desplega:planning`.

At the start of planning, adapt your interaction level based on the autonomy mode:
| Mode | Behavior |
|---|---|
| Autopilot | Research independently, create complete plan, present for final review only |
| Critical (Default) | Get buy-in at major decision points, present design options for approval |
| Verbose | Check in at each step, validate understanding, confirm before each phase |
The autonomy mode is passed by the invoking command. If not specified, default to Critical.
**OPTIONAL SUB-SKILL:** If `~/.agentic-learnings.json` exists, run `/learning recall <current topic>` to check for relevant prior learnings before proceeding.
Read all mentioned files immediately and FULLY:
Spawn initial research tasks:
Read all files identified by research tasks
Analyze and verify understanding:
Present understanding and questions (if not Autopilot):
First, present your findings as text:
Based on the research of the codebase, I understand we need to [summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
Then, if there are questions that research couldn't answer, use AskUserQuestion with:
| Question | Options |
|---|---|
| "[Specific technical question]" | Provide relevant options based on the choices available |
| "[Design preference question]" | Provide relevant options based on the choices available |
If the user corrects any misunderstanding:
Create a research todo list using TodoWrite
Spawn parallel sub-tasks:
Present findings and design options (if not Autopilot):
First, present findings as text:
**Current State:**
- [Key discovery about existing code]
Then, use AskUserQuestion to present design options:
| Question | Options |
|---|---|
| "Which approach aligns best with your vision?" | 1. [Option A] - [brief description], 2. [Option B] - [brief description] |
Include pros/cons in the option descriptions to help the user decide.
Create initial plan outline (if not Autopilot):
Present the outline as text:
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
Then, use AskUserQuestion to get approval:
| Question | Options |
|---|---|
| "Does this phasing make sense?" | 1. Yes, proceed with this structure, 2. No, let's discuss changes |
Get feedback on structure before writing details
Before proceeding, exit plan mode to write the plan file.
Write the plan to `thoughts/<username|shared>/plans/YYYY-MM-DD-description.md`.
Path selection: Use the user's name (e.g., `thoughts/taras/plans/`) if known from context. Fall back to `thoughts/shared/plans/` when unclear.
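The path rule above can be sketched in shell. `PLAN_USER` and the description slug are illustrative assumptions, not part of the skill; in practice the username is inferred from context rather than an environment variable:

```shell
# Sketch: derive the plan file path per the rule above.
# PLAN_USER and the description slug are hypothetical placeholders.
username="${PLAN_USER:-shared}"     # fall back to "shared" when the user is unclear
date="$(date +%F)"                  # YYYY-MM-DD
description="add-rate-limiting"     # hypothetical short slug for the plan
path="thoughts/${username}/plans/${date}-${description}.md"
echo "$path"
```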
CRITICAL: Every phase MUST include a ### Success Criteria: section with both #### Automated Verification: and #### Manual Verification: subsections. See "Success Criteria Requirements" section at the end of this document for the exact format.
QA Specs (optional): Phases that change user-facing behavior, add UI, modify API responses, or alter auth/permissions SHOULD include an optional ### QA Spec (optional): section after the Success Criteria. See the template for the exact format. Phases that are internal refactors, type changes, config updates, or dependency bumps should omit QA specs. QA specs can live either inline in plan phases (for per-phase validation) or as a separate standalone document (for cross-cutting or feature-level QA). The inline approach is the default for plans; the QA skill handles both sources transparently.
Template: Read and follow the template at `cc-plugin/base/skills/planning/template.md`
The template includes:
Present draft plan location:
I've created the implementation plan at:
`thoughts/<username|shared>/plans/YYYY-MM-DD-description.md`
Please review it.
Iterate based on feedback (if not Autopilot)
Offer structured review:
Ask "Would you like me to run `/review` on this plan for completeness and gap analysis?" If yes, use the `desplega:reviewing` skill on the plan document.
Finalize the plan - DO NOT START implementation.
Learning Capture:
OPTIONAL SUB-SKILL: If significant insights, patterns, gotchas, or decisions emerged during this workflow, consider using desplega:learning to capture them via /learning capture. Focus on learnings that would help someone else in a future session.
Workflow handoff: After the plan is finalized (and optionally reviewed), use AskUserQuestion with:
| Question | Options |
|---|---|
| "The plan is ready. What's the next step?" | 1. Implement this plan (→ /implement-plan), 2. Run a review first (→ /review), 3. Done for now (park the plan) |
Based on the answer:
- Implement this plan: invoke the `/implement-plan` command with the plan file path
- Run a review first: use the `desplega:reviewing` skill on the plan document
- Done for now: set the plan's status to ready or parked as appropriate

## Review Integration

If the file-review plugin is available and the user selected "Yes" during User Preferences setup:
Run `/file-review:file-review <path>` on the plan file.

## Commit Integration

If the user selected "Yes" to commits during User Preferences setup:
Commit after each phase once manual verification passes, using the message format `[phase N] <brief description of what the phase accomplished>`.

## Success Criteria Requirements

Every phase MUST have a Success Criteria section. Plans without proper success criteria are incomplete.
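A minimal sketch of building the per-phase commit message, assuming the format above. The phase number and summary are hypothetical placeholders:

```shell
# Sketch: construct the per-phase commit message.
# "2" and the summary text are hypothetical stand-ins.
phase=2
summary="add rate limiting middleware"
msg="[phase ${phase}] ${summary}"
echo "$msg"
# Once the phase's manual verification passes, commit with:
#   git commit -am "$msg"
```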
Each phase must end with this exact structure:
### Success Criteria:
#### Automated Verification:
- [ ] [Command that can be run]: `command here`
- [ ] [Another automated check]: `command here`
#### Manual Verification:
- [ ] [Human testing step]
- [ ] [Another manual check]
**Implementation Note**: [When to pause for confirmation]
Use these exact heading levels (this is critical for consistency):
- `### Success Criteria:` (h3) - section header
- `#### Automated Verification:` (h4) - subsection
- `#### Manual Verification:` (h4) - subsection

| Type | Examples |
|---|---|
| Automated | `make test`, `npm run lint`, `tsc --noEmit`, `ls path/to/file`, `grep pattern file` |
| Manual | UI interactions, visual checks, performance testing, edge case validation |
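A sketch of how the Automated Verification checklist maps to runnable checks: each entry pairs a label with a command, and the runner prints a checked or unchecked box per result. The labels and commands here are stand-ins; a real plan would list project-specific commands like `make test`:

```shell
# Sketch: run a phase's Automated Verification entries and report results.
# Labels and commands below are illustrative stand-ins.
pass=0; fail=0
while IFS='|' read -r label cmd; do
  if sh -c "$cmd" >/dev/null 2>&1; then
    printf -- "- [x] %s\n" "$label"; pass=$((pass+1))
  else
    printf -- "- [ ] %s\n" "$label"; fail=$((fail+1))
  fi
done <<'EOF'
temp directory exists|ls /tmp
shell check passes|true
EOF
echo "${pass} passed, ${fail} failed"
```

Manual verification has no runner by design: those boxes are checked by a human.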
Before finalizing any plan, verify:
- [ ] Every phase has a `### Success Criteria:` section
- [ ] Each phase includes `#### Automated Verification:` with runnable commands
- [ ] Each phase includes `#### Manual Verification:` with human testing steps
- [ ] All criteria use the `- [ ]` checkbox format
- [ ] `### QA Spec` sections are included (if applicable)