```
npx claudepluginhub swingerman/atdd --plugin atdd
```

This skill uses the workspace's default tool permissions.
Enforce the ATDD workflow for feature development. This methodology is adapted from Robert C. Martin's acceptance test approach.
"The two different streams of tests cause Claude to think much more deeply about the structure of the code." — Robert C. Martin
Two test streams constrain development:

- Acceptance tests, generated from the specs, which check external behavior.
- Unit tests, written during TDD, which check internal structure.

Both must pass. Neither alone is sufficient.
Follow these steps strictly, in order. Do not skip steps.
Before writing anything, understand what is being built.
Create spec files in the project's specs/ directory using this format:
```
;===============================================================
; Description of the behavior being specified.
;===============================================================
GIVEN [precondition in domain language].
GIVEN [another precondition if needed].
WHEN [action the user/system takes].
THEN [observable outcome].
THEN [another observable outcome if needed].
```
Format rules:

- `===` separator lines delimit test cases.

The spec-leakage rule — CRITICAL:

Specs must describe external observables only. Never reference:

- class, service, or repository names
- HTTP endpoints, verbs, or payloads
- database tables, rows, or other storage details
- BAD: `GIVEN the UserService has an empty userRepository.`
  GOOD: `GIVEN there are no registered users.`
- BAD: `WHEN a POST request is sent to /api/users.`
  GOOD: `WHEN a new user registers with email "bob@example.com".`
- BAD: `THEN the database contains 1 row in the users table.`
  GOOD: `THEN there is 1 registered user.`
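Putting the format and the leakage rule together, a full spec file for the registration example might look like this (a hypothetical `specs/user-registration.txt`):

```
;===============================================================
; A new user can register with an email address.
;===============================================================
GIVEN there are no registered users.
WHEN a new user registers with email "bob@example.com".
THEN there is 1 registered user.
```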
Present specs to the user for approval before proceeding. Specs are co-authored, but the human has final approval — ferociously defended.
Invoke the pipeline-builder agent to analyze the project and generate (or update) the three-stage pipeline:

1. Parser: reads the `.txt` spec files from `specs/`, produces structured IR.
2. Generator: reads the IR, produces executable test files.
3. Runner: executes the generated tests.

The pipeline must have deep knowledge of the system internals. This is NOT Cucumber. The generator produces complete, runnable tests that call into the system's internals — not stubs requiring manual fixtures.
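As a rough illustration of the first stage, here is a minimal parser sketch in Python. The real parser is generated per-project; the IR shape below (one dict per test case) is an assumption of this sketch.

```python
import re

def parse_spec(text: str) -> list[dict]:
    """Parse a Given/When/Then spec file into a list of test-case IR dicts.

    IR shape (an assumption of this sketch): one dict per test case with
    'description', 'given', 'when', and 'then' lists of plain-language steps.
    """
    cases: list[dict] = []
    current: dict | None = None

    def new_case() -> dict:
        case = {"description": [], "given": [], "when": [], "then": []}
        cases.append(case)
        return case

    for raw in text.splitlines():
        line = raw.strip()
        if not line or re.fullmatch(r";=+", line):
            continue  # blank lines and ;=== separators carry no content
        if line.startswith(";"):
            # A comment line after a completed case opens the next case.
            if current is None or current["then"]:
                current = new_case()
            current["description"].append(line.lstrip("; ").strip())
            continue
        for keyword in ("GIVEN", "WHEN", "THEN"):
            if line.startswith(keyword + " "):
                if current is None:
                    current = new_case()
                current[keyword.lower()].append(line[len(keyword) + 1:].rstrip("."))
                break
    return cases
```

A real parser would also record source file names and line numbers in the IR so test failures can point back at the spec.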
A runner script/command should be generated so the user can run:
```sh
# Full pipeline: parse specs → generate tests → run tests
./run-acceptance-tests.sh
```
Run the generated acceptance tests. They should fail — this confirms the specs describe behavior that doesn't exist yet.

If they pass, either:

- the behavior already exists and the spec adds nothing new, or
- the generated tests are not actually exercising the specified behavior, and the generator needs fixing.
Now implement the feature using standard TDD:

1. Write a failing unit test.
2. Write the minimum production code to make it pass.
3. Refactor, keeping all tests green.

Both streams must pass:

- the unit tests written during TDD, and
- the generated acceptance tests.
After implementation, invoke the spec-guardian agent to review all
spec files for implementation details that may have crept in during
development.
If leakage is found, clean the specs back to domain language.
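A mechanical first pass at this review can be sketched as a pattern scan. The pattern list below is illustrative only, not the spec-guardian's actual heuristics; a real guardian uses project knowledge, not just regexes.

```python
import re

# Illustrative red-flag patterns for implementation details in specs.
LEAKAGE_PATTERNS = [
    r"\b\w+(Service|Repository|Controller|Manager)\b",  # class names
    r"/api/\S+",                                        # HTTP endpoints
    r"\b(POST|GET|PUT|DELETE) request\b",               # HTTP verbs
    r"\b(database|table|row|column|SQL)\b",             # storage details
]

def find_leakage(spec_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs mentioning implementation details."""
    return [
        (lineno, line.strip())
        for lineno, line in enumerate(spec_text.splitlines(), start=1)
        if any(re.search(p, line) for p in LEAKAGE_PATTERNS)
    ]
```

Flagged lines still need human judgment — "table" in a restaurant-booking domain is legitimate domain language.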
Return to Step 1 for the next feature. Each iteration adds specs only for the current feature — never design the whole system upfront.
These rules govern how spec files and the pipeline are handled. They are non-negotiable.
- Never modify a spec `.txt` file without explicit user permission. Specs are the user's contract. Always ask before changing them.
- Never modify generated tests in `generated-acceptance-tests/`. Only delete and regenerate them by running the pipeline from the `.txt` source.
- Add `generated-acceptance-tests/` and `acceptance-pipeline/ir/` to `.gitignore`. Never commit generated artifacts.
- If a `.txt` spec file is newer than its IR or generated test, re-parse and regenerate before running.

**Can I write code first and backfill the specs?** No. Specs first, always. The spec-before-code hook will warn about this.
**What if the generator can't produce meaningful tests from a spec?** Then the specs need to be more specific. Break the feature into smaller observable behaviors. Each spec should describe one concrete scenario.
**Can I add implementation details to specs to make generation easier?** This is the perverse incentive. Fight it. The generator should be smart enough to map domain language to system internals. If it can't, improve the generator — don't pollute the specs.
**Can one test stream replace the other?** No. Two streams constrain development differently. Acceptance tests alone leave internal structure unchecked. Unit tests alone miss integration.
```
project-root/
├── specs/                       # Acceptance test specs (.txt files)
│   ├── authentication.txt       #   — committed to git
│   ├── shopping-cart.txt        #   — committed to git
│   └── ...
├── acceptance-pipeline/         # Pipeline code (parser + generator)
│   ├── parser.*                 #   — committed to git
│   ├── generator.*              #   — committed to git
│   └── ir/                      #   — GITIGNORED (generated)
├── generated-acceptance-tests/  # Executable test files
│   └── ...                      #   — GITIGNORED (generated)
└── run-acceptance-tests.sh      # Pipeline runner — committed to git
```
Commit these (source of truth):

- `specs/*.txt` — the acceptance test specs
- `acceptance-pipeline/parser.*` — the parser code
- `acceptance-pipeline/generator.*` — the generator code
- `run-acceptance-tests.sh` — the pipeline runner script

Gitignore these (regenerated from source):

- `acceptance-pipeline/ir/` — intermediate representations
- `generated-acceptance-tests/` — generated test files

Add to the project's `.gitignore`:
```
acceptance-pipeline/ir/
generated-acceptance-tests/
```
After setting up the pipeline, add an Acceptance Tests section to
the project's CLAUDE.md (or create one if it doesn't exist). This
ensures Claude Code understands the ATDD setup in every session:
## Acceptance Tests
Acceptance tests are `.txt` files in `specs/` in Given/When/Then format.
### Pipeline
.txt → Parser → IR → Generator → executable tests
1. **Parse:** [parse command] — reads `specs/*.txt`, produces IR in `acceptance-pipeline/ir/`
2. **Generate:** [generate command] — reads IR, produces test files in `generated-acceptance-tests/`
3. **Run:** [test command] — executes the generated tests
Full pipeline: `./run-acceptance-tests.sh`
### Rules
- Never modify a spec `.txt` file without explicit permission.
- Never modify generated tests — only delete and regenerate via the pipeline.
- Generated tests and IR files are gitignored — do not commit them.
- Before a push, run the full acceptance test pipeline.
- On failure, report the spec file name and line number.
Adapt the commands and paths to match the project's language and test framework. The pipeline-builder agent generates this CLAUDE.md section automatically when creating the pipeline.