From dev-workflow
Use when the user says 'write a plan', 'plan this', 'break this into tasks', or has requirements/specs for a multi-step task before touching code. Creates structured implementation plans with self-contained, verifiable tasks. Not for single-step changes. For phase-driven development, run-phase calls this internally.
Install: `npx claudepluginhub n0rvyn/indie-toolkit --plugin dev-workflow`

This skill uses the workspace's default tool permissions.
When invoked standalone (user runs write-plan directly): this skill automatically chains
to dev-workflow:verify-plan at Step 4 — plan writing and verification happen in one flow.
When invoked via run-phase orchestration: run-phase writes the plan in main context
using the Plan Writing Reference below, and manages verification as a separate explicit step.
This skill writes an implementation plan directly in the main context, benefiting from full conversation history and user intent.
Before gathering context, search for relevant ADRs, architecture decisions, and known pitfalls:
`search(query="<goal text>", source_type=["doc", "error", "lesson"], project_root="<cwd>")`

Collect the following before writing:
- Design doc (`docs/06-plans/*-design.md`). If found, read it.
- Design analysis (`docs/06-plans/*-design-analysis.md`). If found, read it: it contains validated token mappings, platform translations, and UX assertion validation from a visual prototype.
- Crystal file (`docs/11-crystals/*-crystal.md`). If found, read it: it contains settled architectural and UX decisions with machine-readable D-xxx assertions the plan must respect.

If any of these are unclear, ask the user before writing.
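A minimal shell sketch of this discovery step, assuming the repo layout above (`PROJECT_ROOT` and the `find_one` helper are illustrative, not part of the skill):

```shell
#!/bin/sh
# Sketch: locate the optional planning inputs before writing the plan.
PROJECT_ROOT="${PROJECT_ROOT:-.}"

find_one() {
  # Print the first existing file among the expanded glob args, or nothing.
  for f in "$@"; do
    [ -e "$f" ] && { printf '%s\n' "$f"; return 0; }
  done
  return 1
}

design=$(find_one "$PROJECT_ROOT"/docs/06-plans/*-design.md)
analysis=$(find_one "$PROJECT_ROOT"/docs/06-plans/*-design-analysis.md)
crystal=$(find_one "$PROJECT_ROOT"/docs/11-crystals/*-crystal.md)

echo "design doc:      ${design:-none}"
echo "design analysis: ${analysis:-none}"
echo "crystal file:    ${crystal:-none}"
```

Each input is optional, so an empty result simply means that step of context gathering is skipped.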
Read relevant source files (design docs, existing code the plan will touch, crystal files) to understand the current state. Then write the plan following the Plan Writing Reference below.
Save the plan to docs/06-plans/YYYY-MM-DD-<feature-name>-plan.md.
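The dated filename can be derived mechanically; a sketch (the feature name here is a placeholder):

```shell
# Sketch: build the plan path from today's date and a slugified feature name.
feature="offline sync"   # placeholder feature name
slug=$(printf '%s' "$feature" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
plan_path="docs/06-plans/$(date +%F)-${slug}-plan.md"
echo "$plan_path"
```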
After writing the plan:
- Read the ## Decisions section of the plan file.
- For each blocking decision: present it to the user via AskUserQuestion, with options taken from the decision point.
- For recommended decisions: present them as a group via a single AskUserQuestion. Critical: all DP content must be inside the question field — text printed before AskUserQuestion gets visually covered by the question widget. Read each recommended DP's full block (heading + Context + Options + Recommendation) from the source file and concatenate them verbatim in the question field, separated by \n---\n. End with: \n\n全部接受推荐,还是逐个审查? ("Accept all recommendations, or review them one by one?")
- Record each resolution: append a **Chosen:** {user's choice} line after the decision's **Recommendation:** or **Recommendation (unverified):** line.
- Run dev-workflow:verify-plan to validate the plan in docs/06-plans/ before execution.

## Plan Writing Reference

This section contains the plan document format, scope guards, and writing guidelines. Referenced by both this skill and run-phase Step 2.
Write plans assuming the implementing engineer has zero context. Document everything: files to touch, code snippets, commands, expected output.
Absence != deletion: If the current codebase has functionality X that is not mentioned in the scope items, X's status is "unchanged" (keep as-is). Only create deletion/removal tasks when the scope explicitly says "remove", "delete", "移除", or "删除" for that functionality. Design docs showing a target state without feature X does NOT authorize removing X — that's a UX change requiring explicit user instruction in the scope. Exception: if a crystal [D-xxx] decision explicitly calls for removal/replacement, that decision overrides this guard (D-xxx decisions represent user-confirmed intent).
Scope boundary compliance (when crystal file has ## Scope Boundaries):
- If an in-scope item cannot be implemented without touching an out-of-scope area, record it in a **Scope conflicts:** subsection after **Crystal file:** in the plan header: "IN: {item} requires modifying OUT: {item} — {why}". Do not create the conflicting task; let the verifier and user resolve it.
- No scope inference: decomposing a scope item into implementation steps is expected (e.g., "migrate color tokens" → one task per token category). But adding work that addresses a DIFFERENT concern not in the scope items is prohibited, even if it seems like a natural extension (e.g., scope says "migrate color tokens" → adding a font migration task is scope inference). If you believe additional work is necessary, note it in the plan header as "Recommended additions (not in scope)" — do not create tasks for it.
Quality fidelity — If the design doc specifies a concrete approach for a feature (e.g., "LLM analysis", "Bree cron scheduler", "WordPiece tokenizer"), the plan task MUST implement that exact approach. If the specified approach is not feasible in this phase (missing dependency, API not available, infrastructure not ready), the task must:
- carry a ⚠️ SIMPLIFIED marker in the task title
- add a **Simplification:** field explaining what was changed and why
- add a **Design approach:** field stating the original design's approach
- add a **Blocking dependency:** field if the simplification is due to a dependency not yet available (informational for human reviewers; not consumed by automated verification)
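A hypothetical annotated task header (the feature, approach, and dependency below are invented for illustration) might read:

```
### Task 3: Log summarization ⚠️ SIMPLIFIED
**Design approach:** LLM analysis of error logs
**Simplification:** keyword-frequency heuristic until the LLM client is available
**Blocking dependency:** shared LLM API client (planned for a later phase)
```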
Silently replacing a design-specified approach with a simpler alternative (heuristic, placeholder, stub) without these annotations is prohibited.

---
type: plan
status: active
tags: [tag1, tag2]
refs: []
---
# [Feature Name] Implementation Plan
**Goal:** [One sentence]
**Architecture:** [2-3 sentences on approach and key decisions]
**Tech Stack:** [Key technologies and frameworks]
**Design doc:** [path to design doc, if one exists — links to verify-plan DF strategy]
**Design analysis:** [path to design analysis, if one exists]
**Crystal file:** [path to crystal file, if one exists — links to verify-plan CF strategy]
**Threat model:** [included / not applicable]
---
Each task should be a coherent unit of work. Don't force artificial granularity — a task can be small or substantial as long as it's self-contained and verifiable.
<!-- section: task-N keywords: keyword1, keyword2 -->
### Task N: [Component/Feature Name]
**Files:**
- Create: `exact/path/to/file.ext`
- Modify: `exact/path/to/existing.ext:line-range`
- Test: `tests/exact/path/to/test.ext`
**Steps:**
1. [Clear instruction with code snippet if needed]
2. [Next step]
3. [Verification step with exact command and expected output]
**Verify:**
Run: `<exact command>`
Expected: <what success looks like>
<!-- /section -->
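Because each task sits between explicit section markers, a single task body can be pulled out with a plain text tool. A sketch (the `extract_task` helper is illustrative, not part of the skill):

```shell
# Sketch: print one task's block from a plan file using its section markers.
extract_task() {  # usage: extract_task <plan-file> <task-number>
  awk -v n="$2" '
    # Opening marker: "<!-- section: task-N keywords: ... -->"
    $0 ~ ("<!-- section: task-" n " ") { inside = 1 }
    inside { print }
    # Stop after the first closing marker inside the task.
    inside && /<!-- \/section -->/ { exit }
  ' "$1"
}
```

Requiring a space after the task number keeps `task-2` from matching `task-21`.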
When a task implements part of a design document, add these fields to help execution stay faithful:
**Design ref:** [design-doc.md § Section Name]
**Expected values:** [key concrete values from design that must appear in code]
**Replaces:** [what existing code/pattern this replaces, if any]
**Data flow:** [source → transform → destination]
**Quality markers:** [specific acceptance criteria beyond "it works"]
**Verify after:** [verification specific to design faithfulness]
**UX ref:** [UX-NNN from design doc's UX Assertions table — which assertion(s) this task fulfills]
**User interaction:** [what the user sees and does — derived from User Journeys, not invented]
These fields are optional per-task. Use them when the task has design-critical details that could drift during implementation.
Additional writing guidelines:

- Verify scope: each task's Verify should contain only that task's specific checks (grep for expected strings, type-check a single file, run a single test file). The full build/test suite belongs exclusively in the final verification task (below). Do not put npm run build, npm test, swift build, xcodebuild test, or equivalent full-suite commands in intermediate task Verify sections.
- Section markers: each ### Task N: block is wrapped in <!-- section: task-N keywords: {kw1}, {kw2} --> ... <!-- /section -->. Keywords are derived from the task's **Files:** paths and the technologies/APIs the task touches. Use the leaf file name (without extension) and key technology names, 2-4 keywords per task.
- UX assertions: if the design doc has a ## UX Assertions section, read the User Journeys and UX Assertions table before writing any UI task. Each task that implements user-visible behavior must include UX ref: pointing to the assertion ID(s) it fulfills, and a brief User interaction: line describing what the user sees and does (derived from the User Journeys, not invented). Tasks that touch UI but don't map to any UX assertion should be flagged with ⚠️ No UX ref: [reason].
- Frontmatter: type is always plan; status is always active when first written; tags — derive 2-5 keywords from the feature name and key technologies in the tasks (e.g., tasks touching SwiftData and sync → [swiftdata, sync, offline]); refs — list the design doc path and crystal file path from the plan header (if set to a real path, not "none").
- Untestable tasks: every task needs a **Verify:** section. If a task genuinely cannot be verified, write ⚠️ No test: {reason} in the task body; plan-verifier will audit the reason.
- Apple platform tasks: see apple-dev:testing-guide, apple-dev:xc-ui-test, and apple-dev:profiling.
- Final verification task (discover the commands instead of guessing):
  a. Scan the repo root for build system files (package.json, Package.swift, *.xcodeproj, Cargo.toml, go.mod, pyproject.toml, Makefile)
b. For each build system found: read the file and extract all build/test commands (e.g., scripts in package.json, targets in Makefile)
c. Detect sub-projects: scan for nested directories with their own build system files (e.g., web/package.json, packages/*/package.json). Include their test commands too.
d. Detect package manager: check for lockfiles (pnpm-lock.yaml → pnpm, yarn.lock → yarn, package-lock.json → npm, bun.lockb → bun)
e. Write the final task with ALL discovered commands in **Verify:**. Group by project/sub-project:
### Task N: Full verification
**Verify:**
Run: `pnpm typecheck`
Run: `pnpm test`
Run: `cd web && pnpm test`
Expected: All pass with zero failures
f. If the project has separate test categories (unit, e2e, integration), list each command. Do NOT collapse to a single generic command unless that single command truly runs everything.

When the plan goal, task descriptions, or scope items contain any of these security-signal keywords — sandbox, permission, auth, RBAC, deny, allow, isolation, encrypt, token, credential, secret, certificate, injection, escape, validate — the plan MUST include a ## Threat Model section after the plan header and before Task 1. Set the header field **Threat model:** included. When none of these keywords appear, set **Threat model:** not applicable and omit the section.
The Threat Model section contains four subsections:
Attack surface — For each controlled input that could be attacker-influenced, identify the input source and attack class. Example: user-supplied file paths → path traversal/injection; user-supplied regex → ReDoS; user-supplied template strings → template injection.
Failure modes — For each new security component introduced by the plan (permission check, sandbox profile, auth gate, validation layer), document what happens when it fails silently. Acceptable answers: deny-all (safe default), allow-all (unsafe — must justify), crash/abort (acceptable for dev tooling, not for user-facing). Unspecified failure mode = gap the verifier will flag.
Resource lifecycle — For each task that creates temp files, spawns child processes, opens file handles, or opens sockets: document who cleans up on success, on error (catch/finally), and on signal/crash (SIGTERM/SIGINT handler or equivalent). All three triggers must be addressed; missing any one is a gap.
Input validation requirements — For each task that embeds external input into a structured format (SQL, shell commands, SBPL profiles, regex, template strings, URL parameters), document what characters must be validated or escaped and where in the code the validation occurs.
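The keyword trigger itself is mechanical; a sketch of the check (the `needs_threat_model` helper is illustrative, using the keyword list above with whole-word matching — whether "auth" should also match "authorization" is a policy choice):

```shell
# Sketch: decide the **Threat model:** header field from security-signal keywords.
needs_threat_model() {  # usage: needs_threat_model <plan-file>
  grep -qiwE 'sandbox|permission|auth|RBAC|deny|allow|isolation|encrypt|token|credential|secret|certificate|injection|escape|validate' "$1"
}
```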
When a crystal file is provided: ensure every [D-xxx] decision is reflected in at least one plan task. Rejected alternatives in the crystal must not appear as plan tasks. Add Crystal ref: [D-xxx] to task headers that implement specific decisions.
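Coverage of crystal decisions can be spot-checked mechanically; a sketch (the `missing_decisions` helper is illustrative, not part of the skill):

```shell
# Sketch: list crystal [D-xxx] decision IDs that no plan task references.
missing_decisions() {  # usage: missing_decisions <crystal-file> <plan-file>
  grep -oE '\[D-[0-9]+\]' "$1" | sort -u | while read -r id; do
    grep -qF "$id" "$2" || printf 'uncovered: %s\n' "$id"
  done
}
```

An empty result means every decision ID appears somewhere in the plan; it does not prove the task actually implements the decision, which remains the verifier's job.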
When a design analysis is provided: read it and incorporate token mappings, platform translations, and UX assertion validations into the relevant plan tasks.
If any planning finding requires a user choice before execution can proceed, output a ## Decisions section in the plan document. If no decisions needed, output ## Decisions\nNone.
Format per decision:
### [DP-001] {title} ({blocking / recommended})
**Context:** {why this decision is needed, 1-2 sentences}
**Options:**
- A: {description} — {trade-off}
- B: {description} — {trade-off}
**Recommendation:** {option} — {reason, 1 sentence}
Recommendation quality rule:
- A recommendation should cite concrete evidence from the codebase (e.g., "Router.swift:42 shows routes are registered centrally, extending that pattern is lower-risk").
- If no such evidence exists, write **Recommendation (unverified):** instead of **Recommendation:**, and state why evidence is absent.

Priority levels:
- blocking — must be resolved before plan execution; the dispatcher will ask the user via AskUserQuestion
- recommended — has a sensible default but the user should confirm; the dispatcher presents these as a batch

Common decision triggers for plan writing:

- **Replaces:** patterns where old code removal is non-trivial → confirm removal (blocking)