From claudekit
Generates numbered task lists from specs before coding, detailing file paths, exact changes, tests, and success evidence for each task. Use for multi-file, multi-day, or team projects.
npx claudepluginhub duthaho/claudekit --plugin claudekit

This skill uses the workspace's default tool permissions.
A workflow for converting a spec into an executable plan: a numbered list of tasks,
each with a file path, an exact change, a test command, and an acceptance check.
The skill exists because the most common implementation failure is starting from a
plan that says "implement the cache" and discovering at code-review time that
"implement" hid four sub-decisions nobody made. A plan written through this skill
makes those decisions visible up front. Each task is small enough that a different
engineer could pick it up cold; the entire plan is small enough that the spec's
acceptance criteria are obviously reachable from it. Used after shape-spec,
before code or plan-review.
If you arrive here because plan-review said the plan is too vague to evaluate, tighten the plan and run plan-review against it, not rewriting it from scratch. If no spec exists yet, run shape-spec first.

Step 1

Goal: Avoid planning against a spec that itself is incomplete.
Inputs: The spec produced by shape-spec (or equivalent).
Actions:
Check the spec for complete acceptance criteria and unresolved decisions; if anything is missing, return it to shape-spec.

Output: Confirmation note "Spec sufficient" or a list of return-to-spec items.
Step 2

Goal: Generate a flat numbered task list. No sub-tasks; flatness forces honesty about size.
Inputs: The spec's Acceptance Criteria.
Actions:
Write each task as a single line in the form `<N>. <file_path> — <verb> <specific change>. Test: <command>.`

Output: A numbered, flat task list in a Markdown file at
`docs/claudekit/plans/<spec-basename>-plan.md`.
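To make the format concrete, here is a hypothetical two-task excerpt; the feature, file paths, and test commands are invented for illustration, not taken from any real plan:

```markdown
1. src/billing/idempotency_key.py — add derive_key(request) that hashes method, path, and body. Test: pytest tests/billing/test_idempotency_key.py.
2. src/billing/charge.py — call derive_key before creating a charge and skip duplicates. Test: pytest tests/billing/test_charge.py -k test_duplicate_skip.
```

Note that each line already answers "which file", "what change", and "which command proves it", with no sub-tasks hiding extra work.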
Step 3

Goal: Make the order of operations explicit.
Inputs: The task list from Step 2.
Actions:
Add a `Blocked by: <task numbers>` and a `Parallel with: <task numbers>` line
under each task that has either.

Output: Each task annotated with dependency and parallelism metadata.
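As a sketch of the annotations (task contents hypothetical, continuing the invented billing example):

```markdown
1. src/billing/idempotency_key.py — add derive_key. Test: pytest tests/billing/test_idempotency_key.py.
2. src/billing/charge.py — call derive_key before charging. Test: pytest tests/billing/test_charge.py.
   Blocked by: 1
3. docs/billing.md — document the idempotency behavior. Test: none (docs only).
   Parallel with: 2
```

A task with neither line carries no annotation; absence means "no known ordering constraint", which is itself information.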
Step 4

Goal: Define what "this task is done" means concretely, so the implementer knows when to move on and the reviewer knows what to look for.
Inputs: The task list from Step 3.
Actions:
Add an `Acceptance:` line with a concrete, observable check under each task.

Output: Each task has an `Acceptance:` line.
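A hypothetical annotated task might read (all names invented for illustration):

```markdown
2. src/billing/charge.py — call derive_key before charging. Test: pytest tests/billing/test_charge.py.
   Blocked by: 1
   Acceptance: test_duplicate_skip passes AND posting the same charge twice in dev returns the original charge ID both times.
```

The acceptance line pairs the test with an observable behavior, per the "Test X passes AND when I run Y, I see Z" shape described below.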
Step 5

Goal: Surface the parts of the plan that could go wrong.
Inputs: The fully annotated task list.
Actions:
Add a `## Risks` section at the bottom of the plan. List each high-risk task with a one-line rollback note. If a task cannot be undone, write
`Rollback: NOT POSSIBLE — see plan-review-architecture` and flag it for the
architecture reviewer.

Output: A Risks section. Plan ready for plan-review.
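A hypothetical Risks section, continuing the invented billing example, might look like:

```markdown
## Risks
- Task 4 (migration adding the idempotency_keys table). Rollback: drop the table; keys are regenerable, no data loss.
- Task 6 (backfill of historical charges). Rollback: NOT POSSIBLE — see plan-review-architecture.
```

Each entry names the task, the risk, and either a concrete rollback or an explicit flag for the architecture reviewer.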
| Excuse | Why it sounds reasonable | Why it's wrong | What to do instead |
|---|---|---|---|
| "I'll figure out the file paths when I get there." | The plan is for high-level sequencing, not low-level layout. | Plans without file paths are wishlists. They survive review by saying "implement the X" and rot at implementation time when the engineer realizes "X" actually splits across three files with conflicting conventions. The decisions you defer to "when I get there" are the decisions plan-review exists to catch. | Name the file path even if it doesn't exist yet. src/handlers/billing/charge.ts (new) is fine. Naming forces you to think about layout before you've written any code. |
| "Test commands will be obvious." | If the project has one test runner, every task uses it; explicit naming feels redundant. | "Obvious" assumes the implementer is you, today. A teammate (or you, in three weeks) won't remember whether this task wants pytest -k or the integration suite or the contract test. Naming the exact command in the plan saves the implementer one round trip and avoids the "I ran the wrong tests" PR. | Paste the exact command. pytest tests/billing/test_charge.py -k test_idempotency. If three tasks have the same command, that's fine; copy-pasting is cheap. |
| "Acceptance is just 'tests pass.'" | TDD culture tells us tests are the contract. | "Tests pass" is necessary, not sufficient. A task can pass tests and still not satisfy the acceptance criterion if the test was scoped wrong, the wrong cases were covered, or the criterion includes something tests don't catch (a UX flow, a perf budget, a doc update). The acceptance line names which observable thing proves the task done. | Write the acceptance line as: "Test X passes AND when I run Y in dev, I see Z." Most tasks need both halves. |
| "I don't need parallelism notes — we'll figure it out as we go." | Most plans are executed sequentially anyway. Annotating parallelism for a one-person project is overhead. | The annotation is cheap; the absence is expensive when a second person joins or the same engineer wants to pick the next task while CI runs the previous one. The cases where parallelism notes don't matter are also the cases where adding them takes 60 seconds. | If the plan has more than 5 tasks, add the parallelism notes. They're not for the first author; they're for whoever is on the project a week from now. |
| "Rollback is the deploy team's problem." | Some risks really do live in the deploy step, owned by SRE / ops. | "Their problem" assumes someone else has the context to write the rollback. They don't. The engineer who wrote the migration knows what to undo; SRE can run the rollback but can't author it. The plan owner is the right author of the rollback note. | Write the one-line rollback yourself. If you don't know what it would be, you don't know what risk you're taking. Flag it; don't punt it. |
| Checkpoint | Required artifact | What "no evidence" looks like |
|---|---|---|
| End of Step 1 | A Spec sufficient confirmation OR a list of items to return to the spec | "I read the spec, I think it's fine." |
| End of Step 2 | A flat numbered task list with file paths and exact test commands | "Step 1: implement the feature." |
| End of Step 3 | Each task annotated with Blocked by / Parallel with (where applicable) | "We'll figure out the order during implementation." |
| End of Step 4 | Each task has an Acceptance: line with concrete observable check | "Tests pass." |
| End of Step 5 | A ## Risks section with rollback notes for high-risk tasks | "We'll deal with rollback if it comes up." |
A task with no `Acceptance:` line is a red flag: you don't know how the implementer will know
they're done.