From claudekit
Generates engineering specs for non-trivial features, changes, or refactors before planning or coding. Covers goals, non-goals, constraints, acceptance criteria, and open questions.
```
npx claudepluginhub duthaho/claudekit --plugin claudekit
```

This skill uses the workspace's default tool permissions.
A short structured workflow that turns a vague request ("we need to add X") into a written spec that can be reviewed and planned against. The skill exists because the most expensive engineering bug is the wrong feature shipped well — and the second most expensive is the right feature with a missing constraint nobody wrote down. A spec is not a plan; it does not answer "how." It answers what, why, and what's out of bounds. The deliverable is a one-to-three page Markdown document a teammate can read in 5 minutes and sign off on, or push back against. Used before any plan is written.
Use this skill before /claudekit:write-plan, or when you start /claudekit:write-plan and realize you can't define acceptance criteria yet. To diagnose a defect rather than specify a feature, use /claudekit:investigate-root-cause.

**Step 1: One-sentence summary**

Goal: Force the spec into a single sentence before any further work.
Inputs: A feature request, ticket, conversation, or vague need.
Actions:
Write one sentence of the form: "This spec proposes <X> so that <Y>." X is concrete (a behavior, an artifact). Y is the engineering outcome, not the business outcome (leave that to product docs).

Output: The title and the one-sentence summary at the top of the spec file.
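A hypothetical example of the Step 1 output (the feature, names, and numbers here are invented for illustration, not part of the skill):

```markdown
# Spec: Per-tenant rate limit on the export API

This spec proposes a per-tenant rate limit on the export API so that a
single tenant's bulk exports cannot exhaust worker capacity for everyone else.
```

Note that X is a concrete behavior (the rate limit) and Y is an engineering outcome (protected worker capacity), not a business metric.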
**Step 2: Goals and non-goals**

Goal: Bound the work explicitly.
Inputs: The Step 1 summary.
Actions:
- Write a ## Goals section: 3-7 bullets. Each goal is a concrete, observable outcome, something you could write a test for, even if you won't.
- Write a ## Non-Goals section: 3-5 bullets. Each non-goal is a thing a reasonable reader might assume is in scope but is not.

Output: Two bulleted sections.
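For a hypothetical rate-limiting feature (all specifics invented for illustration), the two sections might read:

```markdown
## Goals
- Each tenant is limited to a configurable number of export requests per minute.
- Requests over the limit receive an explicit error response, not a silent drop.
- Operators can change a tenant's limit without a redeploy.

## Non-Goals
- No per-user limits within a tenant; the limit is tenant-wide.
- No changes to the export file format.
- No billing or quota-purchase flow for buying higher limits.
```

Each non-goal names something a reader could plausibly assume was included, which is exactly what makes it worth writing down.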
**Step 3: Constraints**

Goal: List every external requirement the implementation must respect.
Inputs: The goals from Step 2 plus your knowledge of the existing system.
Actions:
- Write a ## Constraints section grouped under its four sub-headings.
- If you can't verify a constraint, mark it OPEN and put it in Step 5's open questions instead of guessing.

Output: A constraints section with at least one entry under each sub-heading (or "None" explicitly stated).
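A sketch of what the section can look like. The sub-heading names and entries below are illustrative assumptions, not the skill's canonical list; the Compatibility and Security examples echo the ones in the excuses table further down:

```markdown
## Constraints
### Compatibility
- Must run on the org's standard Python 3.11 runtime.

### Security
- Must not log PII; rate-limit state stores tenant IDs only.

### Performance
- Limit checks add no more than 5 ms to request latency. OPEN: needs a benchmark.

### Operations
- None.
```

The OPEN marker on the performance entry shows the Step 3 rule in action: an unverified constraint is flagged and escalated to Step 5 rather than asserted.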
**Step 4: Acceptance criteria**

Goal: Define "done" in terms a tester could check.
Inputs: The goals and constraints.
Actions:
- Write ## Acceptance Criteria as a numbered list.
- Each criterion takes one of two forms: "Given <state>, when <action>, then <expected>" or "<observable behavior> is <verifiable measurement>".
- No "system should be robust"; instead: "system handles 10k concurrent connections while keeping p99 latency under 500 ms."

Output: A numbered list of falsifiable criteria.
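A hypothetical numbered list showing both forms (the endpoints, status codes, and numbers are invented for illustration):

```markdown
## Acceptance Criteria
1. Given a tenant at its limit, when it sends another export request,
   then the API returns HTTP 429 with a Retry-After header.
2. Given a tenant below its limit, when it sends an export request,
   then the request is processed normally.
3. The limit check adds at most 5 ms to p99 request latency at
   10k concurrent connections.
```

Each item is falsifiable: a tester can set up the state, perform the action, and observe a pass or fail with no negotiation.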
**Step 5: Open questions**

Goal: Surface what you don't know before someone discovers it mid-implementation.
Inputs: Honest reflection on the spec.
Actions:
- Write ## Open Questions. For each question, note who knows the answer and the impact if the current assumption is wrong.

Output: A list of questions, each with a "who knows" and an "impact if wrong" note.
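One hypothetical entry with both annotations (the teams and consequences are invented for illustration):

```markdown
## Open Questions
1. Should limits reset on a fixed window or a sliding window?
   - Who knows: the platform team, who own the existing throttling code.
   - Impact if wrong: burst traffic at window boundaries could briefly
     double the effective limit.
```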
| Excuse | Why it sounds reasonable | Why it's wrong | What to do instead |
|---|---|---|---|
| "We can figure this out as we go." | Some specs really are just-in-time. Over-specifying upfront is a known failure mode. | "Figure it out as we go" is the line said before the implementation reveals a constraint nobody wrote down, scope creeps to cover that constraint, and the work doubles. The cost of writing a 1-page spec is 30 minutes; the cost of discovering the missing constraint at code-review time is the round trip plus a partial rewrite. | If the change touches one file and one author, skip the spec. If it touches more than one of either, the 30 minutes is cheaper than the round trip. |
| "The non-goals are obvious — I don't need to write them down." | Stating the obvious feels condescending in writing. | The non-goals only feel obvious to the spec author. The reviewer who pushes back on "why didn't you also handle X" hasn't read your mind, only your spec. Unwritten non-goals get implemented anyway, doubling the work, or get cut at the end, leaving someone disappointed. | Write the 3-5 non-goals even if they feel obvious. They're not for you; they're for the reviewer who hasn't been in your head. |
| "We'll add tests later — acceptance criteria can be vague for now." | Acceptance criteria do mature during implementation. Premature specificity can lock in the wrong thing. | Vague acceptance criteria are how "done" becomes negotiable. Without falsifiable criteria, the engineer who finishes and the reviewer who signs off are negotiating on vibes. The work merges, then someone discovers the missing case in production. | Write at least one falsifiable criterion per goal. If you can't, the goal isn't concrete enough — fix the goal first. |
| "There are no open questions, this one's clear." | Sometimes a spec really is well-understood. Forcing questions for show is performative. | "No open questions" almost always means "I haven't looked hard enough." Every spec interacts with infra, data, or upstream/downstream systems, each of which has assumptions you haven't audited. The questions are real; you just haven't asked them. | List at least one open question. If you can't find a real one, write down the assumption you're least sure of as a question — "Are we sure component X behaves this way under load?" That assumption is your weakest link. |
| "We don't need a constraints section for an internal tool." | Internal tools are real, and the formality of constraint-writing fits external APIs better. | Internal tools have constraints too — runtime version, deploy environment, who can call them, data sensitivity. Skipping the section because "internal" is how the internal tool ends up using a deprecated runtime, a soon-to-be-removed library, or storing data the org isn't allowed to log. | Write the constraints section. For internal tools, "Compatibility: must run on the org's standard Python 3.11 runtime" and "Security: must not log PII" are short and they matter. |
| Checkpoint | Required artifact | What "no evidence" looks like |
|---|---|---|
| End of Step 1 | One-sentence summary in <X> so that <Y> form | "It's a feature for the dashboard." |
| End of Step 2 | Goals + Non-Goals sections, both populated | "Goals are obvious from context." |
| End of Step 3 | Constraints section with all four subheadings present | "I'll add constraints if the reviewer asks." |
| End of Step 4 | Acceptance criteria, one falsifiable item per goal | "It should work well." |
| End of Step 5 | At least one open question with who/impact annotations | "Nothing open at this time." |
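Assembled, the artifact the five checkpoints verify is a single Markdown file with this skeleton (angle brackets are placeholders to fill in; the constraint sub-heading name is illustrative):

```markdown
# Spec: <title>

This spec proposes <X> so that <Y>.

## Goals
- <3-7 concrete, observable outcomes>

## Non-Goals
- <3-5 things a reasonable reader might assume are in scope>

## Constraints
### <sub-heading>
- <entry, or "None"; mark unverified entries OPEN>

## Acceptance Criteria
1. Given <state>, when <action>, then <expected>.

## Open Questions
1. <question> (Who knows: <person/team>. Impact if wrong: <consequence>.)
```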