Use when design is complete and you need detailed implementation tasks for engineers with zero codebase context. Creates comprehensive implementation plans with exact file paths, complete code examples, and verification steps, assuming the engineer has minimal domain knowledge. Use after design is complete to break features down into bite-sized tasks.
/plugin marketplace add pproenca/dot-claude-old
/plugin install workflow@dot-claude
Before researching the codebase, offer to search external documentation for the latest API references using Context7 MCP.
First, check if Context7 MCP tools are available by looking for mcp__context7__ tools in your available tools list.
If Context7 is NOT available: skip this phase and proceed directly to Python Project Detection.
If Context7 IS available:
Use AskUserQuestion:
Question: "Would you like me to search external documentation before planning?"
Header: "Docs"
multiSelect: false
Options:
- Yes, let me specify: I'll enter which libraries/frameworks to search
- Auto-detect from task: Analyze the task and search relevant libraries automatically
- No, skip: Proceed with planning without external docs
If user selected "Yes, let me specify":
If user selected "Auto-detect from task":
If user selected "No, skip":
For each library the user confirms:
Resolve library ID:
mcp__context7__resolve-library-id(libraryName: "library-name")
Select the most relevant match based on description and documentation coverage.
Fetch relevant docs:
mcp__context7__get-library-docs(
context7CompatibleLibraryID: "/org/project",
topic: "[relevant topic from task]",
mode: "code"
)
Use mode: "info" for architectural/conceptual questions.
Use for context only: Keep documentation in working memory to inform plan tasks. Do NOT include raw docs in the plan document.
If tool call fails: Inform user that Context7 couldn't fetch docs for that library and continue with available information.
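A hypothetical end-to-end exchange, with the library name, resolved ID, and topic purely illustrative:

mcp__context7__resolve-library-id(libraryName: "fastapi")

Suppose this resolves to "/tiangolo/fastapi". Then:

mcp__context7__get-library-docs(
context7CompatibleLibraryID: "/tiangolo/fastapi",
topic: "dependency injection",
mode: "code"
)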
After documentation is loaded (or skipped), proceed to Python Project Detection with documentation context available.
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, docs they might need to check, and how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Before writing a plan, detect if this is a Python project and what framework it uses:
Detection signals:
- `pyproject.toml` or `setup.py` → Python project
- `fastapi` in dependencies → FastAPI project
- `django` in dependencies → Django project
- `asyncio` imports or `async def` → Async code
- `.python-version` or `uv.lock` → Uses uv package manager
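A minimal sketch of this detection logic, if expressed as code (the helper name and return shape are illustrative; async detection via a source grep is omitted):

```python
from pathlib import Path

def detect_project(root: str = ".") -> dict:
    """Illustrative sketch of the detection signals above."""
    base = Path(root)
    pyproject = base / "pyproject.toml"
    deps = pyproject.read_text() if pyproject.exists() else ""
    return {
        "python": pyproject.exists() or (base / "setup.py").exists(),
        "fastapi": "fastapi" in deps,
        "django": "django" in deps,
        "uv": (base / ".python-version").exists() or (base / "uv.lock").exists(),
    }
```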
When Python detected: load python:python-testing and python:python-project, and prefix all Python commands with `uv run`.

When async/performance code detected: load python:python-performance.

When FastAPI/Django detected: use the python:python-expert agent for framework-specific patterns.

For complex plans (5+ tasks), use TodoWrite to track progress.
Context: This should be run in a dedicated worktree (created by brainstorming skill).
Save plans to: docs/plans/YYYY-MM-DD-<feature-name>.md
Each step is one action (2-5 minutes).
Every task MUST include a complexity tag. This enables efficient execution.
| Complexity | Examples | TDD? | Code Review? |
|---|---|---|---|
| TRIVIAL | Delete file, rename, typo fix, config update | No | Parent verifies git diff |
| SIMPLE | Small refactor, single-file change, add comment | If code changes | Haiku (optional) |
| MODERATE | Feature implementation, bug fix with tests | Yes | Sonnet |
| COMPLEX | Multi-file feature, architectural change | Yes | Opus |
Classification heuristics: match each task against the examples in the table above.
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use workflow:executing-plans to implement this plan task-by-task.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
### Task N: [Component Name]
**Complexity:** [TRIVIAL | SIMPLE | MODERATE | COMPLEX]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
**Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run test to verify it fails**

Run: `uv run pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

**Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run test to verify it passes**

Run: `uv run pytest tests/path/test.py::test_name -v`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
## Remember
- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits
## Python-Specific Patterns
When Python detected, load patterns from the python plugin instead of duplicating them here.
**Step 1: Load relevant skill**
Use Skill tool: python:python-testing
**Step 2: Copy patterns into plan tasks**
- Fixtures from skill → conftest.py setup task
- Parameterized tests from skill → test task examples
- Mocking patterns from skill → integration test examples
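For example, the first bullet might become a conftest.py setup task like this (the fixture name and data are illustrative, not taken from the skill itself):

```python
# tests/conftest.py
import pytest

@pytest.fixture
def sample_user():
    """Example shared fixture that downstream test tasks can reference."""
    return {"id": 1, "name": "Ada", "email": "ada@example.com"}
```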
**For async/performance code detected:**
Use Skill tool: python:python-performance
**For FastAPI/Django detected:**
Task tool (python:python-expert):
prompt: "Provide [framework] test patterns for [feature]"
## Diagram Generation Phase
Diagrams help Claude understand complex plans during execution. They're not just for humans - they serve as a reference that helps Claude maintain context during multi-step implementations.
### Step 1: Ask About Diagrams
Use AskUserQuestion:
Question: "Should I generate diagrams to help with plan execution?" Header: "Diagrams" multiSelect: false Options:
**Default to "Auto-detect"** for complex plans. Only skip diagrams for trivial changes.
### Step 2: Generate Diagrams
If user selects "Auto-detect" or specific types, dispatch diagram-generator agent:
Task tool (doc:diagram-generator):
description: "Generate Mermaid diagrams for plan"
prompt: |
  See template at plugins/methodology/workflow/templates/diagram-prompt.md
  MODE: [auto-detect | specific]
  DIAGRAM_TYPES: [user's selection or "auto"]
  PLAN_CONTENT: [full plan text]
**For auto-detect mode**, the agent will:
1. Analyze plan complexity and structure
2. Decide IF any diagrams would help Claude execute
3. Select the most useful diagram type(s)
4. Skip if plan is simple enough that diagrams add no value
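As an illustration, auto-detect output for a small sequential plan might be a single task-flow diagram like this (task names hypothetical):

```mermaid
flowchart TD
    T1["Task 1: failing test"] --> T2["Task 2: minimal implementation"]
    T2 --> T3["Task 3: wire into API"]
    T3 --> T4["Task 4: integration test and commit"]
```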
### Step 3: Insert Diagrams
Add `## Diagrams` section to the plan document after the header block, before first task:
```markdown
# [Feature Name] Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL...
**Goal:** ...
**Architecture:** ...
**Tech Stack:** ...
---
## Diagrams
[Insert agent output here]
---
### Task 1: ...
```
Don't even ask about diagrams when the plan is trivial: a handful of TRIVIAL or SIMPLE tasks, a single-file change, or a pure config/docs update.
In these cases, just skip to execution handoff.
After saving the plan, use the AskUserQuestion tool (do NOT output as plain text):
Question: "Plan saved. How would you like to execute it?" Header: "Execute" Options:
If Subagent-Driven chosen: use workflow:subagent-dev.
If Parallel Session chosen: use workflow:executing-plans.
If Skip chosen: stop here; the plan is saved for whenever the user wants to execute it.
When writing Python plans, integrate these python plugin skills:
| Skill | When to Reference | What It Provides |
|---|---|---|
| python:python-testing | All Python test code | Fixtures, mocking, parameterized tests, markers |
| python:python-project | Python commands, dependencies, packaging | uv run, uv add, pyproject.toml, publishing |
| python:python-performance | Async or performance code | asyncio, profiling, caching, optimization patterns |
For complex Python plans, dispatch specialized agents:
Task tool (python:python-expert):
description: "Get FastAPI/Django patterns for [feature]"
prompt: "Analyze [feature requirements] and provide:
1. Framework-specific implementation pattern
2. Test fixtures and patterns
3. Common pitfalls to avoid"
For Python projects, enhance the header:
# [Feature Name] Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use workflow:executing-plans to implement this plan task-by-task.
> **Python Skills:** Reference python:python-testing for tests, python:python-project for uv commands.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** Python 3.12+, pytest, [framework if applicable]
**Commands:** All Python commands use `uv run` prefix
---