Collaborative design and specification workflow for turning ideas into detailed, implementation-ready designs. Use for feasibility checks ("is X possible?", "can we do Y?"), design exploration, and full specification. Triggers on "let's build", "I want to add", "how should we", "is it possible", "what would it take", "design", "architect", "spec out", or any non-trivial feature request. Skip ONLY for trivial single-location changes (typo, rename, color change).
Install via `/plugin marketplace add elertan/planspec`, then `/plugin install planspec@planspec`. This skill inherits all available tools. When active, it can use any tool Claude has access to.
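As typed in a Claude Code session, the installation steps above would look roughly like this (a sketch using the marketplace and plugin names given above):

```
/plugin marketplace add elertan/planspec
/plugin install planspec@planspec
```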
Transform ideas into validated, detailed design specifications through structured dialogue.
USE this skill when:
SKIP this skill when:
Later phases may reveal issues that invalidate earlier decisions. This is normal—don't force-fit.
When to backtrack:
How to backtrack:
Don't backtrack for:
Sometimes the right answer is "don't build this." This can emerge at any phase.
Signs to consider not building:
How to exit:
Don't confuse "don't build" with:
Goal: Quick assessment before committing to full design. Answer: "Is this viable? What would it take?"
When to use Phase 0:
When to skip Phase 0:
Understand the ask — What are they really asking? What would "yes, feasible" mean?
Quick discovery — Spend minimal time checking for obvious blockers:
Time-box this. 5-10 minutes of investigation, not exhaustive research.
Surface findings — Present one of:
Feasible:
"Yes, this looks feasible. [1-2 sentence reasoning]
Key considerations:
- [consideration 1]
- [consideration 2]
Want me to do a full design?"
Blocked:
"This has a blocking issue: [specific blocker]
[Why it's blocking, not just hard]
Alternatives to consider:
- [alternative 1]
- [alternative 2]"
Needs investigation:
"I can't determine feasibility yet. Need to investigate:
- [specific unknown 1]
- [specific unknown 2]
Want me to dig deeper, or do you have this context?"
Gate decision — Based on user response:
Goal: Know WHY before exploring HOW. Understand actual constraints, not assumed ones.
Read user's request to identify:
[Ask] Confirm scope before deep discovery:
Discover the problem space:
Depth guidance: Focus discovery on what's needed to assess viability. Don't exhaustively map everything—go deep only where uncertainty blocks decisions. You can always revisit later.
a) Internal context:
b) External context (when solution touches external systems):
c) Constraint mapping:
If gaps or blockers are found, surface them to the user immediately—they may require pivoting
Resolve uncertainties — For each UNCERTAIN item, ask user ONE at a time:
Do not proceed with unresolved uncertainties. Either resolve them or get explicit user acknowledgment to defer.
Ask ONE additional clarifying question if needed (prefer multiple choice)
Proceed to Phase 2 when:
Goal: Establish what "done" looks like—functional requirements, quality attributes, and boundaries.
Draft success criteria based on Phase 1 understanding—trace what the user actually needs vs. what they asked for
Identify non-functional requirements (as relevant):
Define anti-goals (what we're explicitly NOT building):
Surface constraints:
Present to user for validation:
"Based on our discussion, success means:
**Must have:**
- [criterion 1]
- [criterion 2]
**Quality attributes:**
- [e.g., Response time < 200ms for 95th percentile]
**Not building:**
- [anti-goal 1]
**Constraints:**
- [constraint 1]
Does this capture what you need?"
Refine until user confirms—question whether refinements actually improve or just add complexity
User has confirmed:
Goal: Deeply understand what each approach entails in your specific context, then present informed options.
Brainstorm 2-4 candidate approaches based on Phase 1 understanding
Compatibility Analysis — For each candidate, assess fit with your context.
Scale depth to risk: Low-risk approaches need quick sanity checks. High-risk approaches (new tech, external dependencies, architectural changes) need thorough analysis.
a) Characterize — What's different from current state? Pick relevant lenses:
| Lens | Key questions |
|---|---|
| Performance | Latency? Throughput? At scale? |
| Reliability | Failure modes? Recovery? |
| Dependencies | What must exist first? |
| Expertise | Can the team build and maintain this? |
| Operational | Deploy? Monitor? Debug? Secure? |
| Integration | Clean fit or adapters needed? |
b) Touchpoints — What interacts? (code, infra, people, processes)
c) Find friction — Where do approach characteristics conflict with touchpoint requirements? Only deep-dive on potential mismatches.
d) Classify findings:
Prune and refine:
Evaluate remaining approaches against:
Present approaches with traced-through specifics:
"I see two viable approaches:
**Option A: [name]** (recommended)
[What it is - 1-3 lines]
✓ [Specific advantage with concrete detail]
✗ [Specific tradeoff with concrete impact]
Requires: [dependencies, mitigations, or constraints from analysis]
Works for: [scope, if limited]
**Option B: [name]**
...
I recommend A because [reasoning referencing specific touchpoints and requirements from analysis]. Thoughts?"
Critical: Tradeoffs must be specific and traced, not generic.
/api/search"Discuss until approach is selected—if user raises new considerations, trace them through the analysis
Goal: Validate design incrementally, ensuring enough detail for implementation spec.
Present design ONE section at a time (~200-300 words each)
After each section, ask: "Does this make sense? Any changes?"
Cover these sections (skip if not applicable):
| Section | What to validate | Skip if |
|---|---|---|
| Architecture Overview | Components, interactions, integration points | Never skip |
| Data Model | New/modified structures, state management | No new data structures |
| Interfaces | APIs, contracts, events | No new APIs or contracts |
| Behavior | Happy path, edge cases, error handling, state transitions | Never skip |
| Testing Requirements | Critical paths, edge cases, integration points | Never skip |
For each section, ensure you've captured enough detail that an implementer could:
Revise any section based on feedback—trace how changes ripple to other sections
All applicable sections presented and validated with implementation-ready detail.
Goal: Capture validated design in durable format.
Write the design to `planspec/designs/[topic-slug].md`—verify all validated decisions are captured:
---
date: YYYY-MM-DD
status: approved
impl-spec: planspec:impl-spec
---
# [Design Title]
> **Next step:** `planspec:impl-spec planspec/designs/[topic].md`
## Problem
[Detailed problem statement - why this matters]
## Success Criteria
[From Phase 2 - what "done" looks like]
**Must have:**
- [criterion]
**Quality attributes:**
- [non-functional requirements]
**Not building:**
- [anti-goals]
## Approach
[Chosen approach and rationale]
### Alternatives Considered
| Alternative | Why Not |
|-------------|---------|
| [Option B] | [Specific reason - cost, complexity, mismatch with constraints] |
| [Option C] | [Specific reason] |
## Design
### Architecture Overview
[How components connect. Include diagram if complex.]
- **Components:** [List new/modified components and their responsibilities]
- **Interactions:** [How they communicate - sync/async, data flow direction]
- **Integration points:** [Where this connects to existing system]
### Data Model
[Skip if no new data structures]
- **New structures:** [Schemas, types, models being introduced]
- **Modified structures:** [Changes to existing data]
- **State management:** [Where state lives, how it's accessed]
### Interfaces
[Skip if no new APIs/contracts]
- **External APIs:** [Endpoints, request/response shapes]
- **Internal interfaces:** [Function signatures, contracts between components]
- **Events/hooks:** [If event-driven, what events are emitted/consumed]
### Behavior
- **Happy path:** [Primary flow from start to end]
- **Edge cases:** [Boundary conditions and how they're handled]
- **Error handling:** [What can fail, how each failure is handled, user-facing messages]
- **State transitions:** [If stateful, what states exist and what triggers transitions]
### Testing Requirements
[What needs to be tested - not how, but what]
- **Critical paths:** [Flows that must work]
- **Edge cases to cover:** [Specific scenarios]
- **Integration points:** [What external interactions need verification]
## Risks
| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| [What could go wrong] | Low/Med/High | [Consequence] | [How we address it] |
## Dependencies
- **Blocking:** [Must exist before we can build]
- **Non-blocking:** [Nice to have, can work around]
## Open Questions
[ONLY items user explicitly deferred. Should be empty in most specs.
If present, note why deferred and what risk was acknowledged.]
Use clear, concise writing (no fluff)—question every sentence for necessity
Open Questions should be rare — If this section has items, each must be:
Many open questions indicate incomplete Phase 1. Consider whether to revisit.
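For illustration, a minimal filled-in header for the spec file, assuming a hypothetical "rate limiting" design (the date and slug are placeholders, not from this document):

```markdown
---
date: 2025-06-01
status: approved
impl-spec: planspec:impl-spec
---

# Rate Limiting Design

> **Next step:** `planspec:impl-spec planspec/designs/rate-limiting.md`
```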
Commit with message: `docs: add [topic] design spec`
Ask user: "Ready to create the implementation plan?"
If yes, invoke `planspec:impl-spec [path-to-design-spec]` to generate the implementation plan
CRITICAL: You MUST invoke impl-spec here. Do NOT skip this step and jump directly to implementation, even if the design spec already contains implementation details like code snippets or file contents. The impl-spec creates a structured task breakdown that is a separate, required step in the workflow.
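As a concrete sketch of these closing steps, assuming the spec was written for the same hypothetical "rate limiting" design (the slug is illustrative):

```bash
# Stage and commit the approved design spec
git add planspec/designs/rate-limiting.md
git commit -m "docs: add rate-limiting design spec"
```

Then, once the user confirms, invoke `planspec:impl-spec planspec/designs/rate-limiting.md` in the session to generate the implementation plan.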
| Phase | Goal | Exit When |
|---|---|---|
| 0. Feasibility | Quick viability check | User has answer, or decides to proceed/not proceed |
| 1. Understand | Know the WHY + verify constraints | Approach viable, blockers surfaced, uncertainties resolved |
| 2. Success | Define done + boundaries | User confirms criteria, NFRs, anti-goals, constraints |
| 3. Explore | Assess approaches, present options | User selects approach, constraints acknowledged |
| 4. Design | Validate implementation-ready detail | All sections confirmed with enough detail to implement |
| 5. Spec | Document | File committed, ready for implementation spec |