Design exploration with SOW and Spec generation. Use when user mentions 計画して, 設計して, アプローチ検討, 方針決め, planning, design exploration. Do NOT use for codebase investigation without planning intent (use /research instead).
Deep design exploration. Compare approaches, validate assumptions, generate SOW and Spec.
| Excuse | Counter |
|---|---|
| "Why is obvious, I'll skip it" | Obvious to whom? Unexamined Why produces silent decisions downstream |
| "DA challenge is overkill for this" | Every design has hidden assumptions. DA agent catches what self-review misses |
| "The user's request maps directly to code" | Requests describe symptoms. Name the underlying problem before designing |
Task description from $1, research context, or AskUserQuestion if empty.
| Step | Action | Output |
|---|---|---|
| 0 | Why Discovery | Why Statement (outcome, stakeholders, evidence) |
| 1 | Q&A Clarification | Scope, constraints, risks (if needed) |
| 2 | Codebase exploration | Relevant code, patterns, constraints |
| 3 | Approach generation | ≥2 approaches with trade-offs |
| 4 | Design Challenge | DA agent verdict + actionable revisions |
| 5 | Design composition | Optimal design with traceability |
| 6 | User Review | Approved design (with trade-off rationale) |
| 6.5 | ADR Proposal | (if needed) |
| 7 | SOW Generation | sow.md |
| 8 | Spec Generation | spec.md |
| 9 | sow-spec-reviewer | Score ≥ 90 pass, < 90 fix + re-invoke (max 3) |
| 10 | Task Decomposition | Milestones + TaskCreate + First Move |
Before exploring code or generating approaches, establish the outcome this work must achieve. The Why does not emerge unless it is explicitly demanded.
| Question | Purpose |
|---|---|
| Who needs this? | Stakeholder/user identification |
| What pain exists? | Evidence of the problem (not assumption) |
| What outcome = success? | Measurable result (not deliverable) |
| Why now? | Priority justification |
| What if we don't? | Cost of inaction |
Output: Why Statement
## Why
- For: [who]
- Problem: [evidence-based pain point]
- Outcome: [what success looks like — a result, not a feature]
- Urgency: [why now, not later]
- Inaction cost: [what happens if we skip this]
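For illustration, a filled-in Why Statement for a hypothetical feature (all details invented) might look like:

```markdown
## Why
- For: Support engineers triaging production incidents
- Problem: Triage currently requires grepping three log sources by hand (~20 min per incident, per team feedback)
- Outcome: An engineer can locate the failing request across all sources in under 2 minutes
- Urgency: Incident volume doubled last quarter; triage time is the current on-call bottleneck
- Inaction cost: On-call load keeps growing and SLA breaches continue
```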
Gate: Do not proceed to Step 1 until the Why Statement is clear.
If any field is vague or assumed, do not fill in placeholders and move on. Instead, engage the user in back-and-forth dialogue:
This is a wall-bouncing session, not a one-shot question. The goal is to draw out what the user already knows but has not yet articulated.
Read relevant code. Check .claude/workspace/research/ for recent research
output — if a relevant file exists, read it to inherit prior investigation
context. Understand patterns, constraints, architecture, and prior art.
Generate ≥2 distinct approaches from different perspectives:
When the approaches contain independent technical decisions (e.g., framework, state management, API style), present each decision as a separate choice question — 1 question per message, with recommendation and impact on the project. Bundle only decisions that are tightly coupled.
<!-- canonical: rules/core/PRE_TASK_CHECK.md (decomposition thresholds) -->
If PRE_TASK_CHECK decomposition thresholds are exceeded (Files ≥ 5, Features ≥ 3, Layers ≥ 3), decompose into independent Units. Each Unit gets its own SOW/Spec and can be implemented separately via /code.
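As a sketch, the decomposition check is a simple OR over three counts. The threshold values come from PRE_TASK_CHECK; the function shape itself is invented for illustration:

```python
def exceeds_decomposition_thresholds(files: int, features: int, layers: int) -> bool:
    """Return True when a task should be split into independent Units.

    Thresholds mirror PRE_TASK_CHECK: Files >= 5, Features >= 3, Layers >= 3.
    """
    return files >= 5 or features >= 3 or layers >= 3
```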
Spawn devils-advocate-design agent (background) against the approaches from
Step 3. The agent collects its own context via Read/Grep/Glob.
Present DA results with verdict table + actionable items. Revise approaches based on findings before proceeding to Step 5.
Compose optimal design from surviving approaches. Work through two perspectives in order:
Technology-independent business logic modeling. Depth varies by context:
| Context | Depth | Focus |
|---|---|---|
| Business app (entities ≥ 3 or business rules ≥ 3) | Detailed | Entities, relationships, invariants, business rules, domain events |
| CLI tool / config / simple UI | Brief | Key data structures and validation rules only |
### Domain Perspective
- Entities: [key data types and their relationships]
- Business Rules: [domain-specific rules and constraints]
- Invariants: [conditions that must always hold]
Translate domain understanding into implementation design:
### Technical Perspective
- Component Architecture: [hierarchy, boundaries, responsibilities]
- State Strategy: [server state vs client state, management approach]
- NFR Application: [performance, security, accessibility patterns]
- Operational Concerns: [error boundaries, logging, loading states]
## Design
### Key Decisions
| Decision | Choice | Rationale |
| -------- | ------ | ----------------------- |
| ... | ... | traces to [perspective] |
### Implementation Sketch
- Files to modify: [list with file:line]
- Files to create: [list with purpose]
- Estimated scope: [lines, files]
### Trade-offs
| Accepted | Rejected | Why |
| ------------------ | ----------------- | ----------- |
| [what we're doing] | [what we gave up] | [rationale] |
Read template templates/sow/template.md. Fill from design context (Steps 0-6).
ID format: AC-N. Output:
.claude/workspace/planning/YYYY-MM-DD-[feature]/sow.md
Quality gates (apply before writing each section):
| Section | Gate |
|---|---|
| Why | All 5 fields filled. Outcome = measurable result, not deliverable |
| AC | Each traces to Why Outcome. No orphan ACs. No scope creep beyond Why Problem |
| Scope | YAGNI checklist items checked with rationale, not just excluded |
| Impl | Files < 5 per Phase. Steps describe concrete changes |
| Test | Every AC has ≥1 test. Verification states what is checked concretely |
| Risks | ≥1 risk identified with mitigation |
Read template templates/spec/template.md. Generate from SOW. ID formats: FR-001, T-001, NFR-001. Traceability: FR-001 Implements: AC-001 → T-001 Validates: FR-001. If UI-related, include Component API (Props, variants, states, usage). Output: .claude/workspace/planning/[same-dir]/spec.md
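A minimal (hypothetical) traceability chain following these ID formats:

```markdown
- AC-001: User can reset their password via an email link
  - FR-001 (Implements: AC-001): When the user requests a reset, the system SHALL send a single-use link valid for 15 minutes
    - T-001 (Validates: FR-001): Given a registered email, When a reset is requested, Then a link is sent and expires after 15 minutes
```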
Quality gates (apply before writing each section):
| Section | Gate |
|---|---|
| FR | EARS syntax required. One SHALL per sentence. No vague values ("appropriate", etc.) |
| FR | Document rationale for design decisions (variant reuse, YAGNI reasoning, etc.) |
| Domain | Concept-level only. No type/field names. Invariants trace to FRs |
| Test | Every FR has ≥1 scenario. Concrete values in all Given-When-Then columns |
| NFR | Measurement column specifies how to measure (code review, manual timing, etc.) |
| Trace | AC → FR → Test → NFR chain unbroken |
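To make the FR gate concrete, a hypothetical before/after (the 200 ms figure is invented for the example):

```markdown
- Vague (rejected): The system should respond appropriately to invalid input.
- EARS (accepted): When the user submits a form with a missing required field, the system SHALL display a field-level error within 200 ms.
```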
After Spec generation, invoke sow-spec-reviewer.
Score ≥ 90 → pass. Score < 90 → fix findings, re-invoke (max 3 loops). After 3 loops, present remaining findings to user and proceed.
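The review loop can be sketched as follows. The `reviewer` and `fixer` callables are placeholders for the sow-spec-reviewer invocation and the fix step, not real APIs:

```python
def review_loop(spec, reviewer, fixer, max_loops=3, passing_score=90):
    """Re-invoke the reviewer until the score passes or loops are exhausted.

    Returns (passed, remaining_findings).
    """
    findings = []
    for _ in range(max_loops):
        score, findings = reviewer(spec)
        if score >= passing_score:
            return True, []
        spec = fixer(spec, findings)
    # After max_loops, surface remaining findings to the user and proceed.
    return False, findings
```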
After user approves design, ask if an ADR is needed for technical decisions (framework/library selection, architecture patterns, deprecations, trade-off choices). Skip for simple features.
| Principle | Rule |
|---|---|
| Sub-deadlines required | Phase-level milestones with completion criteria |
| Parallel grouping | Never 1 task per phase if parallelizable |
| First move explicit | State which task to start and why |
| Scope cut = leaf only | Core dependency tasks are non-negotiable |
| No urgency panic | Analyze structurally, not reactively |
Cross-session: `export CLAUDE_CODE_TASK_LIST_ID="[feature]-tasks"` (max 10 tasks).
| Source | subject | description | addBlockedBy |
|---|---|---|---|
| Implementation Plan | Phase N: [description] | Steps + validates AC-XXX | [dependency IDs] |
| Test Plan (HIGH) | Test: [description] | (if complex) | [dependency IDs] |
Before creating tasks, count unique files per Phase in the Implementation Plan. Split any Phase with Files ≥ 5 into independent Units (each gets own SOW/Spec). Repeat until all Phases have Files < 5.
Phase 1 [Day X]: task list (completion criteria)
Phase 2 [Day Y]: ...
→ Task N: [rationale — why this unblocks the most downstream work]
Leaf tasks only. Core dependencies are non-negotiable.
Scope, Priority (MoSCoW), Constraints, Risks.
Purpose, Users, and Success criteria are covered by Step 0 (Why Discovery).
Session ID: ${CLAUDE_SESSION_ID}
Always use this exact path — Write tool creates parent directories if absent.
.claude/workspace/planning/YYYY-MM-DD-[feature]/sow.md and spec.md
All steps must complete: Why established, codebase explored, ≥2 approaches compared, DA challenge applied, design composed, user reviewed, sow.md and spec.md generated, spec review passed, tasks decomposed (milestones + first move + scope cut candidates).