Use when you have a spec or requirements for a multi-step task, before touching code
From zenflow. Install: `npx claudepluginhub brewpirate/zen-flow --plugin zenflow`. This skill uses the workspace's default tool permissions.
Write comprehensive implementation plans as if handing them to a junior developer on their first day with this codebase. Document everything: which files to touch, complete code for every step, exact commands to run, and how to verify each change. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent checkpoints.
Assume they can write code but know nothing about barf's architecture, conventions, or testing patterns. Leave nothing implicit.
Announce at start: "I'm using the zenflow:plan skill to create the implementation plan."
Save plans to: resources/plans/NNN-descriptive-name.md
Run `ls -r resources/plans/` to find the highest existing NNN, increment by 1, and zero-pad to 3 digits. Example: if `225-three-layer-prompts.md` is the highest, the next plan is `226-your-feature-name.md`.

If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during zenflow:idea. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.
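The numbering step can be sketched as a small shell snippet. The directory path and filename pattern come from above; the `mkdir -p` guard and the empty-directory default of `001` are added assumptions:

```shell
# Sketch: compute the next plan number in resources/plans/
# Assumes filenames follow NNN-descriptive-name.md; defaults to 001 if none exist
mkdir -p resources/plans
last=$(ls resources/plans/ | grep -E '^[0-9]{3}-' | sort | tail -n1 | cut -d- -f1)
next=$(printf '%03d' $(( 10#${last:-000} + 1 )))
echo "resources/plans/${next}-your-feature-name.md"
```

The `10#` prefix forces base-10 arithmetic so numbers with leading zeros (e.g. `008`) aren't parsed as octal.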
Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.
This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
Each step is one action (2-5 minutes).
Every plan MUST start with this header:
---
status: planned
---
# [Feature Name] Implementation Plan
> **For agentic workers:** Use zenflow:dispatch (recommended) or zenflow:exec-plan to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
### Task N: [Component Name]
**Files:**
- Create: `packages/core/src/core/new-file.ts`
- Modify: `packages/core/src/core/existing-file.ts:123-145`
- Test: `tests/unit/new-file.test.ts`
- [ ] **Step 1: Write the failing test**
```typescript
// Use the project's test framework (Jest, Vitest, bun:test, pytest, etc.)
import { describe, it, expect } from '<test-framework>'
import { specificFunction } from '<project-import-path>'

describe('specificFunction', () => {
  it('should return expected result for valid input', () => {
    const result = specificFunction({ id: '001', config })
    expect(result).toBe(expected)
  })
})
```
- [ ] **Step 2: Run test to verify it fails**
Run: `<project-test-command> <test-file-path>`
Expected: FAIL with "module not found"
- [ ] **Step 3: Write minimal implementation**
```typescript
export function specificFunction(opts: {
  id: string
  config: Config
}): ExpectedType {
  const { id, config } = opts
  return expected
}
```
- [ ] **Step 4: Run test to verify it passes**
Run: `<project-test-command> <test-file-path>`
Expected: PASS
- [ ] **Step 5: Checkpoint — verify all tests pass before moving to next task**
Every step must contain the actual content an engineer needs; placeholder steps (e.g. "implement the function here", or code elided behind a comment) are plan failures — never write them.
Every plan MUST end with a ## Verification section listing how to confirm the work is correct. This feeds zenflow:check-work and gives the executing agent clear success criteria.
Include:
After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself — not a subagent dispatch.
1. Spec coverage: Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.
2. Placeholder scan: Search your plan for red flags — any of the patterns from the "No Placeholders" section above. Fix them.
3. Type consistency: Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called clearLayers() in Task 3 but clearFullLayers() in Task 7 is a bug.
If you find issues, fix them inline. No need to re-review — just fix and move on. If you find a spec requirement with no task, add the task.
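The placeholder scan can be approximated with a grep over the saved plan. The patterns below are only illustrative, not the canonical "No Placeholders" list, and the plan path is an example:

```shell
# Sketch: flag likely placeholders in a plan file (patterns are illustrative;
# substitute the red flags from the "No Placeholders" section)
plan="resources/plans/226-your-feature-name.md"
grep -nE 'TODO|TBD|FIXME|\[fill in\]|implement (this|the rest)|\.\.\.' "$plan" \
  || echo "no placeholder patterns found"
```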
After the self-review passes, add a Subagent Recommendation section to the end of the plan. This tells zenflow:exec-plan how to staff the work.
Include:
Example:
## Subagent Recommendation
- **3 subagents** for parallel execution
- Agent 1 (Senior Developer): Tasks 1-2 (schema + validation) — skills: zod-schema
- Agent 2 (Senior Developer): Tasks 3-4 (API routes + error handling) — sequential
- Agent 3 (Senior Developer): Task 5 (context bundle changes)
- Tasks 1-2 and 3-4 and 5 are independent — run all 3 agents in parallel
- Task 6 (tests) depends on all others — run after agents complete
- **Required skill for all agents:** testing-anti-patterns
After saving the plan, offer execution choice:
"Plan complete and saved to resources/plans/NNN-name.md. Two execution options:
1. Subagent-Driven (recommended) — I dispatch subagents per the recommendation above, parallel execution, fast iteration
2. Inline Execution — Execute tasks in this session using zenflow:exec-plan, batch execution with checkpoints
Which approach?"
If Subagent-Driven chosen:
If Inline Execution chosen: