Also use after brainstorming when the task involves 3+ files or multiple implementation steps. TRIGGER WHEN: (1) user says 'write a plan', 'create a plan', 'implementation plan', 'plan this', 'break this into tasks'; (2) the conversation has produced a design, spec, or set of decisions and is naturally transitioning toward implementation -- e.g., the user approved an approach, confirmed architecture choices, or said "let's do it" / "go ahead" / "proceed". A conversation that evolved through brainstorming into a confirmed design MUST invoke this skill before writing any code, even if the user never explicitly said "write a plan". DO NOT TRIGGER WHEN: user wants to brainstorm first (use brainstorming), wants to execute an existing plan (use executing-plans), or is doing a simple one-file change.
Install:

```bash
npx claudepluginhub acaprino/alfio-claude-plugins --plugin ai-tooling
```

This skill uses the workspace's default tool permissions.
Source: Ported from [obra/superpowers](https://github.com/obra/superpowers) -- `skills/writing-plans`
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run in a dedicated worktree (created by brainstorming skill).
Save plans to: docs/plans/YYYY-MM-DD-<feature-name>.md
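For example, a plan written on 2025-06-01 for a hypothetical CSV-export feature would be saved as:

```
docs/plans/2025-06-01-csv-export.md
```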
If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans -- one per subsystem. Each plan should produce working, testable software on its own.
If the plan involves UI or frontend work (new views, layouts, components, visual redesigns), generate a standalone HTML mockup before writing the detailed task list:
Create a standalone `.html` file with React and a UI library (shadcn/ui, Radix UI, daisyUI, or other appropriate library) loaded from CDN (esm.sh, unpkg, cdn.tailwindcss.com), showing the full proposed layout. Save it to `docs/plans/YYYY-MM-DD-<feature-name>-mockup.html`. This avoids investing in a detailed plan for a layout the user hasn't validated visually.
Skip this step if: the task is backend-only, CLI-only, or the user explicitly says they don't need a mockup.
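As a rough sketch of what such a mockup might contain -- assuming Tailwind's Play CDN and React from esm.sh; the feature, filename, and layout are all hypothetical:

```bash
# Write a minimal self-contained mockup file (quoted heredoc, no expansion).
cat > docs/plans/2025-06-01-csv-export-mockup.html <<'EOF'
<!doctype html>
<html>
<head>
  <script src="https://cdn.tailwindcss.com"></script>
</head>
<body>
  <div id="root"></div>
  <script type="module">
    import React from "https://esm.sh/react@18";
    import { createRoot } from "https://esm.sh/react-dom@18/client";
    const e = React.createElement;

    // Placeholder layout for the user to validate before detailed planning.
    createRoot(document.getElementById("root")).render(
      e("main", { className: "mx-auto max-w-3xl p-8" },
        e("h1", { className: "text-2xl font-bold" }, "CSV Export"),
        e("button",
          { className: "mt-4 rounded bg-blue-600 px-4 py-2 text-white" },
          "Download CSV"))
    );
  </script>
</body>
</html>
EOF
```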
Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
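For instance, a file map for the hypothetical CSV-export feature might look like this (paths and responsibilities invented for illustration):

```
src/export/csv.py         -- create: CSV serialization logic
src/api/routes.py:88-120  -- modify: add the /export endpoint
tests/export/test_csv.py  -- create: serializer unit tests
```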
Each step is one action (2-5 minutes): write the failing test, run it, implement, run it again, commit -- exactly the cycle shown in the task template below.
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For agentic workers:** Use subagent-driven execution (if subagents available) or ai-tooling:executing-plans to implement this plan. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
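A filled-in header for the hypothetical CSV-export feature might read (all details invented for illustration):

```markdown
# CSV Export Implementation Plan

> **For agentic workers:** Use subagent-driven execution (if subagents
> available) or ai-tooling:executing-plans to implement this plan.

**Goal:** Let users download their report data as a CSV file.

**Architecture:** A standalone serializer module behind a new /export
endpoint; the existing report query layer is reused unchanged.

**Tech Stack:** Python, Flask, pytest
```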
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
- [ ] **Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```
- [ ] **Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
- [ ] **Step 3: Write minimal implementation**
```python
def function(input):
    return expected
```
- [ ] **Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
- [ ] **Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
Every step must contain the actual content an engineer needs. Steps like "write tests for the parser" with no test code, or "implement the feature" with no implementation detail, are plan failures -- never write them.
After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself -- not a subagent dispatch.
1. Spec coverage: Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.
2. Placeholder scan: Search your plan for red flags -- any of the patterns from the "No Placeholders" section above (a grep sketch follows this list). Fix them.
3. Type consistency: Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called `clearLayers()` in Task 3 but `clearFullLayers()` in Task 7 is a bug.
If you find issues, fix them inline. No need to re-review -- just fix and move on. If you find a spec requirement with no task, add the task.
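One way to run the placeholder scan mechanically -- the patterns and the plan filename below are hypothetical examples, not the canonical red-flag list:

```bash
# grep exits 0 if any red flag is found, 1 if the plan is clean.
# Substitute the actual patterns from the "No Placeholders" section.
grep -nE 'TBD|TODO|as appropriate|implement later' \
  docs/plans/2025-06-01-csv-export.md
```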
After saving the plan:
"Plan complete and saved to docs/plans/<filename>.md. Ready to execute?"
Execution path depends on harness capabilities:
- If the harness has subagents (Claude Code, etc.): use subagent-driven execution to implement the plan.
- If the harness does NOT have subagents: use ai-tooling:executing-plans.