# spec-dd
This skill should be used when the user asks to "write specifications", "create test specifications", "specification-driven development", "spec-first", "behavioral specs", "derive test scenarios", "test implementation specification", "check specification alignment", "review specs", "verify implementation", or "spec-dd". Also triggers when the user mentions "SDD", "SDD-TDD", "spec-driven", "behavioral testing workflow", "test-first design", or asks about writing specifications before code, deriving tests from specs, or verifying implementation against specifications. Supports a full workflow walkthrough or focusing on individual phases.
Install with `npx claudepluginhub florianbuetow/claude-code --plugin spec-dd`. This skill uses the workspace's default tool permissions.
Orchestrate a spec-first development workflow: behavioral specification, test scenario derivation, test implementation planning, test implementation, feature implementation, and cross-artifact alignment review. The skill guides writing behavioral specifications, derives test scenarios, plans test implementation approaches, and verifies alignment across all artifacts and code. Quality gates between phases are advisory — the skill flags issues and recommends addressing them, but the user can override and proceed.
| Command | Phase | Reference |
|---|---|---|
/spec-dd | Auto-detect phase, assess state, recommend next step | All references |
/spec-dd:spec | Behavioral Specification | references/specification.md |
/spec-dd:test | Test Specification | references/test-specification.md |
/spec-dd:test-impl | Test Implementation Specification | references/test-implementation-specification.md |
/spec-dd:verify | Implementation Verification | references/verify.md |
/spec-dd:review | Alignment Review | references/review.md |
All commands accept an optional feature name argument (e.g., /spec-dd:spec user-auth).
/spec-dd:verify also accepts a spec file path as its first argument
(e.g., /spec-dd:verify specifications.md chat-ui). This allows verification against
any spec file, including informal ones not created through the spec-dd workflow.
If no feature name is provided and multiple features exist, list available features and
ask the user to choose.
All artifacts live in docs/specs/ with one set of files per feature:
| Artifact | Filename |
|---|---|
| Behavioral Specification | docs/specs/<feature>-specification.md |
| Test Specification | docs/specs/<feature>-test-specification.md |
| Test Implementation Specification | docs/specs/<feature>-test-implementation-specification.md |
| Implementation Verification | docs/specs/<feature>-verification.md |
| Implementation Review | docs/specs/<feature>-implementation-review.md |
When any command is invoked:
Read the relevant reference file from references/ BEFORE doing anything else.
- /spec-dd:spec -> read references/specification.md
- /spec-dd:test -> read references/test-specification.md
- /spec-dd:test-impl -> read references/test-implementation-specification.md
- /spec-dd:verify -> read references/verify.md
- /spec-dd:review -> read references/review.md
- /spec-dd -> read whichever reference applies to the recommended phase

Auto-detect project language by scanning for manifest files:
- package.json (JavaScript/TypeScript)
- requirements.txt, pyproject.toml (Python)
- go.mod (Go)
- Cargo.toml (Rust)
- pom.xml, build.gradle (Java/Kotlin)

For /spec-dd (router): Follow the auto-detect router logic below.
For /spec-dd:<phase>: Check whether prior-phase artifacts exist. If gaps
are found, advise the user which earlier phase to complete first, but do not
block — proceed if the user chooses to continue.
## Phase 1: Behavioral Specification (/spec-dd:spec)

Reference: Read references/specification.md before starting.
Purpose: Define what the system does — behavioral contracts, not implementation details.
Workflow:
- Mark any ambiguities with [NEEDS CLARIFICATION].
- Write the specification to docs/specs/<feature>-specification.md.

Advisory gate: No unresolved [NEEDS CLARIFICATION] markers before proceeding to Phase 2.
Artifact template:
# <Feature> - Behavioral Specification
## Objective
What this feature does and why it exists.
## User Stories & Acceptance Criteria
Numbered user stories, each with measurable acceptance criteria.
## Constraints
Technical, business, or regulatory constraints.
## Edge Cases
Boundary conditions, error states, unusual inputs.
## Non-Goals
What this feature explicitly does NOT do.
## Open Questions
Items marked [NEEDS CLARIFICATION] that must be resolved.
## Phase 2: Test Specification (/spec-dd:test)

Reference: Read references/test-specification.md before starting.
Purpose: Derive test scenarios from the behavioral spec only — no implementation knowledge.
Pre-check: Verify that <feature>-specification.md exists. If it does not,
advise the user to complete Phase 1 first. If the user chooses to proceed anyway,
note the risk and continue.
Workflow:
- Write the test specification to docs/specs/<feature>-test-specification.md.

Advisory gate: Full traceability between spec requirements and test scenarios before proceeding to Phase 3.
Artifact template:
# <Feature> - Test Specification
## Coverage Matrix
Table mapping each acceptance criterion to its test scenarios.
## Test Scenarios
Given/When/Then format. Grouped by user story or functional area.
## Edge Case Scenarios
Boundary conditions derived from the specification's edge cases section.
## Traceability
Summary confirming every acceptance criterion is covered.
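The traceability confirmation can be sketched as a simple gap scan over the coverage matrix. Everything here is illustrative: the AC/TS identifiers and the `find_uncovered` name are assumptions, and the matrix is taken to be already parsed into a mapping from criterion to scenarios.

```python
def find_uncovered(coverage: dict[str, list[str]]) -> list[str]:
    """Return acceptance-criterion IDs that map to no test scenarios."""
    return [criterion for criterion, scenarios in coverage.items() if not scenarios]

# Hypothetical coverage matrix: acceptance criterion -> test scenario IDs.
coverage = {
    "AC1": ["TS1", "TS2"],
    "AC2": ["TS3"],
    "AC3": [],  # gap: nothing covers AC3 yet
}
```

`find_uncovered(coverage)` returns `["AC3"]` — exactly the list of criteria to flag at the advisory gate.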
## Phase 3: Test Implementation Specification (/spec-dd:test-impl)

Reference: Read references/test-implementation-specification.md before starting.
Purpose: Map every test scenario to a technical approach for implementing it as actual test code.
Pre-check: Verify that <feature>-test-specification.md exists. If it does
not, advise the user to complete Phase 2 first. Proceed if the user overrides.
Workflow:
- Write the test implementation specification to docs/specs/<feature>-test-implementation-specification.md.

Advisory gate: Every test scenario mapped to a test implementation approach before proceeding to Phase 4.
Artifact template:
# <Feature> - Test Implementation Specification
## Test Framework & Conventions
Detected stack, test framework, test runner, conventions.
## Test Structure
How tests are organized: files, classes/modules, naming conventions.
## Test Scenario Mapping
Map each test scenario to a test function/method with setup and assertion strategy.
## Fixtures & Test Data
Shared fixtures, factories, test data approach, setup/teardown.
## Alignment Check
Confirmation that every test scenario has a test implementation approach.
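As an illustration of the scenario mapping, one Given/When/Then scenario might become a test like the following. The scenario, the system under test, and every name here are hypothetical; a stub implementation is included only so the sketch is self-contained.

```python
from dataclasses import dataclass

# Hypothetical minimal system under test, included so the example runs.
@dataclass
class Session:
    expired: bool

@dataclass
class Response:
    status_code: int

def request_dashboard(session: Session) -> Response:
    return Response(status_code=403 if session.expired else 200)

# Hypothetical scenario TS3: Given an expired session,
# When the user requests the dashboard, Then access is denied.
def test_expired_session_denies_dashboard_access():
    session = Session(expired=True)        # Given: setup
    response = request_dashboard(session)  # When: action
    assert response.status_code == 403     # Then: assertion
```

The mapping entry in the artifact would pair scenario TS3 with the test function name plus its setup and assertion strategy, as in the comments above.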
## Phase 4: Test Implementation

Purpose: Actual test code is written by a coding agent following the test implementation specification. Tests must initially FAIL because the feature code does not exist yet. This skill does NOT write test code.
Workflow:

- Hand off to a coding agent, pointing it at the test implementation specification (docs/specs/<feature>-test-implementation-specification.md).

## Phase 5: Feature Implementation

Purpose: Production code is written to make all tests pass. This skill does NOT write implementation code.
Workflow:

- Hand off to a coding agent; iterate until all tests pass.
## Phase 6: Implementation Verification (/spec-dd:verify)

Reference: Read references/verify.md before starting.
Purpose: Check whether the implementation satisfies the specification at the requirement level. Unlike the review phase (which checks artifact alignment), verification answers: "Did we build what the spec says?"
Key difference from review: Verification works with any spec file — including informal documents not created through spec-dd. It produces a requirement-level PASS/FAIL checklist, not a severity-classified alignment report.
Pre-check: Implementation code must exist. If no code has been written yet, advise the user to complete Phase 5 first.
Workflow:
1. Locate the spec file: use the path argument if provided (e.g., /spec-dd:verify specifications.md). Otherwise, look for docs/specs/<feature>-specification.md. If neither exists, ask the user to provide the spec file path.
2. Extract requirements as described in references/verify.md: numbered items, acceptance criteria, RFC-style keywords (MUST/SHALL), and behavioral descriptions. Assign each an ID (R01, R02, ...).
3. Check each requirement against the implementation and record PASS, FAIL, or PARTIAL with evidence.
4. Write the report to docs/specs/<feature>-verification.md (or docs/specs/verify-<spec-filename>-<date>.md if no feature name).

Advisory gate: All requirements PASS or PARTIAL (no FAIL) before proceeding to Phase 7 (Review).
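The keyword part of requirement extraction can be sketched as a line scan. This is a deliberately narrow sketch: real extraction per references/verify.md also covers numbered items, acceptance criteria, and behavioral descriptions, and the `extract_requirements` name and sample spec text are illustrative.

```python
import re

# RFC-style requirement keywords; longer alternatives listed first.
RFC_KEYWORDS = re.compile(r"\b(?:MUST NOT|MUST|SHALL NOT|SHALL)\b")

def extract_requirements(spec_text: str) -> dict[str, str]:
    """Assign IDs R01, R02, ... to spec lines containing RFC-style keywords."""
    requirements: dict[str, str] = {}
    for line in spec_text.splitlines():
        if RFC_KEYWORDS.search(line):
            requirements[f"R{len(requirements) + 1:02d}"] = line.strip()
    return requirements

spec = """The API MUST return JSON.
Errors are logged somewhere.
Clients SHALL authenticate before upload."""
```

Here `extract_requirements(spec)` assigns R01 to the MUST line and R02 to the SHALL line; the middle line carries no requirement keyword and is skipped.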
Artifact template:
# Verification: <spec-file>
| Field | Value |
|-------|-------|
| Spec file | `<path to spec>` |
| Feature | <feature name or "N/A"> |
| Date | <today's date> |
## Requirements
| # | Requirement | Status | Evidence |
|---|-------------|--------|----------|
| R01 | <requirement> | PASS | `file:line` — description |
| R02 | <requirement> | FAIL | Expected X, found nothing |
| R03 | <requirement> | PARTIAL | `file:line` — works but missing Y |
## Summary
X/Y PASS, Z FAIL, W PARTIAL
## Gaps Requiring Action
| # | Requirement | Issue | Suggested Fix |
|---|-------------|-------|---------------|
| R02 | <requirement> | Not implemented | <suggested approach> |
## Notes
Observations about non-deterministic tests, environment issues, or spec ambiguities.
## Phase 7: Alignment Review (/spec-dd:review)

Reference: Read references/review.md before starting.
Purpose: Verify alignment across all artifacts and actual code.
Pre-check: Check that all three specification documents exist for the feature:
<feature>-specification.md, <feature>-test-specification.md, and
<feature>-test-implementation-specification.md. Flag any that are missing but
proceed with what is available.
Workflow:
- Cross-check the spec documents against each other and against the code, run the tests, and write the review to docs/specs/<feature>-implementation-review.md.

Artifact template:
# <Feature> - Implementation Review
## Specification Alignment
Cross-check between the three spec documents.
## Code Alignment
Actual test code vs test specification. Actual implementation vs implementation specification.
## Test Execution
Test runner detected, command used, pass/fail results, failure details if any.
## Coverage Report
Gaps, misalignments, unresolved items.
## Status
Pass/fail summary per check.
## Recommendations
Next steps if issues are found.
## Auto-Detect Router (/spec-dd)

When /spec-dd is invoked (without a specific phase subcommand):
1. Scan docs/specs/ for *-specification.md files to discover features. If the directory does not exist or is empty, ask the user for a feature name and start at Phase 1.
2. For the chosen feature, assess each phase in order:
   - Does <feature>-specification.md exist? Any [NEEDS CLARIFICATION] markers?
   - Does <feature>-test-specification.md exist? Does it cover all acceptance criteria from the behavioral spec?
   - Does <feature>-test-implementation-specification.md exist? Does it address all test scenarios?
   - Does <feature>-verification.md exist? Are all requirements PASS or PARTIAL?
   - Does <feature>-implementation-review.md exist? Is it current?
3. Recommend the next step based on the first gap found.

All gates are advisory — the skill flags issues and recommends addressing them, but the user can override and proceed.
| Transition | Gate Check |
|---|---|
| Spec -> Test Spec | No unresolved [NEEDS CLARIFICATION] markers |
| Test Spec -> Test Impl Spec | Full traceability: every acceptance criterion mapped to test scenarios |
| Test Impl Spec -> Test Impl | Every test scenario mapped to a test implementation approach |
| Test Impl -> Feature Impl | Test files exist and all tests FAIL (feature not yet implemented) |
| Feature Impl -> Verify | Implementation code exists |
| Verify -> Review | All requirements PASS or PARTIAL (no FAIL) |
| Review -> Done | All checks pass in the review report, including test execution |
The workflow supports non-linear progression:
- Jump directly to any phase with its subcommand (/spec-dd:spec, /spec-dd:test, etc.).

These are guidelines, not laws. Apply judgment: