Use when implementing any feature or bugfix, before writing implementation code: write the test first, watch it fail, then write minimal code to pass. Ensures tests actually verify behavior by requiring failure first.
Enforces Test-Driven Development by requiring failing tests before implementation and verifying both failure and success.
Write the test first. Watch it fail. Write minimal code to pass.
Core principle: If you didn't watch the test fail, you don't know if it tests the right thing.
Violating the letter of the rules is violating the spirit of the rules.
Always:
Exceptions (ask your human partner):
Thinking "skip TDD just this once"? Stop. That's rationalization.
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
Write code before the test? Delete it. Start over.
No exceptions:
Implement fresh from tests. Period.
```dot
digraph tdd_cycle {
    rankdir=LR;
    red [label="RED\nWrite failing test", shape=box, style=filled, fillcolor="#ffcccc"];
    verify_red [label="Verify fails\ncorrectly", shape=diamond];
    green [label="GREEN\nMinimal code", shape=box, style=filled, fillcolor="#ccffcc"];
    verify_green [label="Verify passes\nAll green", shape=diamond];
    refactor [label="REFACTOR\nClean up", shape=box, style=filled, fillcolor="#ccccff"];
    next [label="Next", shape=ellipse];

    red -> verify_red;
    verify_red -> green [label="yes"];
    verify_red -> red [label="wrong\nfailure"];
    green -> verify_green;
    verify_green -> refactor [label="yes"];
    verify_green -> green [label="no"];
    refactor -> verify_green [label="stay\ngreen"];
    verify_green -> next;
    next -> red;
}
```
Write one minimal test showing what should happen.
<Good>
```typescript
test('retries failed operations 3 times', async () => {
  let attempts = 0;
  const operation = () => {
    attempts++;
    if (attempts < 3) throw new Error('fail');
    return 'success';
  };

  const result = await retryOperation(operation);

  expect(result).toBe('success');
  expect(attempts).toBe(3);
});
```
Clear name, tests real behavior, one thing
</Good>
<Bad>
```typescript
test('retry works', async () => {
  const mock = jest.fn()
    .mockRejectedValueOnce(new Error())
    .mockRejectedValueOnce(new Error())
    .mockResolvedValueOnce('success');
  await retryOperation(mock);
  expect(mock).toHaveBeenCalledTimes(3);
});
```
Vague name, tests mock not code
</Bad>
Requirements:
BEFORE proceeding to GREEN phase:
1. Execute the test command:

   ```bash
   npm test -- path/to/test.test.ts
   # or
   pytest path/to/test.py::test_function_name
   # or
   cargo test test_function_name
   ```

2. Copy the full output showing the failure.
3. Verify the failure message matches the expected reason.

If the output doesn't match the expected failure: fix the test and re-run.
YOU MUST include test output in your message:
```
Running RED phase verification:

$ npm test -- retry.test.ts
FAIL tests/retry.test.ts
  ✕ retries failed operations 3 times (2 ms)

  ● retries failed operations 3 times

    ReferenceError: retryOperation is not defined
      at Object.<anonymous> (tests/retry.test.ts:15:5)

Test Suites: 1 failed, 1 total
Tests:       1 failed, 1 total
Time:        0.234s
Exit code: 1

This is the expected failure - function doesn't exist yet.
Failure reason matches expectation: "retryOperation is not defined"
Proceeding to GREEN phase.
```
Claims without evidence violate verifying-before-completion.
If you cannot provide this output, you have NOT completed the RED phase.
Write the simplest code to pass the test.
<Good>
```typescript
async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
  for (let i = 0; i < 3; i++) {
    try {
      return await fn();
    } catch (e) {
      if (i === 2) throw e;
    }
  }
  throw new Error('unreachable');
}
```
Just enough to pass
</Good>

<Bad>
```typescript
async function retryOperation<T>(
  fn: () => Promise<T>,
  options?: {
    maxRetries?: number;
    backoff?: 'linear' | 'exponential';
    onRetry?: (attempt: number) => void;
  }
): Promise<T> {
  // YAGNI
}
```
Over-engineered
</Bad>

Don't add features, refactor other code, or "improve" beyond the test.
AFTER implementing minimal code:
1. Execute the test command (same as RED):

   ```bash
   npm test -- path/to/test.test.ts
   ```

2. Copy the full output showing the pass.
3. Verify ALL of these:
YOU MUST include test output in your message:
```
Running GREEN phase verification:

$ npm test -- retry.test.ts
PASS tests/retry.test.ts
  ✓ retries failed operations 3 times (145 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Time:        0.189s
Exit code: 0

Test now passes. Proceeding to REFACTOR phase.
```
If any errors/warnings appear: Fix them before claiming GREEN phase complete.
Claims without evidence violate verifying-before-completion.
If you cannot provide this output, you have NOT completed the GREEN phase.
After green only:
Keep tests green. Don't add behavior.
Next failing test for next feature.
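For instance, a behavior-preserving cleanup of the GREEN-phase retryOperation might name the magic number. A sketch of one option, not the required refactor:

```typescript
// Behavior is unchanged: still three attempts, still rethrows the last error.
const MAX_ATTEMPTS = 3;

async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e; // remember the failure; rethrow only after the final attempt
    }
  }
  throw lastError;
}
```

Run the test again after the change; any new red means the refactor changed behavior.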
| Quality | Good | Bad |
|---|---|---|
| Minimal | One thing. "and" in name? Split it. | `test('validates email and domain and whitespace')` |
| Clear | Name describes behavior | `test('test1')` |
| Shows intent | Demonstrates desired API | Obscures what code should do |
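As a sketch of the "split it" advice, the "and" test from the table could become three focused tests. `validateEmail` and its return shape are hypothetical, used only for illustration:

```typescript
// Three focused tests instead of one test named with "and".
test('rejects email without an @', () => {
  expect(validateEmail('alice.example.com').valid).toBe(false);
});

test('rejects email with an unknown domain', () => {
  expect(validateEmail('alice@invalid').valid).toBe(false);
});

test('trims surrounding whitespace before validating', () => {
  expect(validateEmail('  alice@example.com  ').valid).toBe(true);
});
```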
"I'll write tests after to verify it works"
Tests written after code pass immediately. Passing immediately proves nothing.
Test-first forces you to see the test fail, proving it actually tests something.
"I already manually tested all the edge cases"
Manual testing is ad-hoc. You think you tested everything, but there is no record and no way to re-run it.
Automated tests are systematic. They run the same way every time.
"Deleting X hours of work is wasteful"
Sunk cost fallacy. The time is already gone. Your choice now is between keeping code you can't trust and rewriting it with tests that prove it works.
The "waste" is keeping code you can't trust. Working code without real tests is technical debt.
"TDD is dogmatic, being pragmatic means adapting"
TDD IS pragmatic: test-first is faster than debugging in production.
"Pragmatic" shortcuts = debugging in production = slower.
"Tests after achieve the same goals - it's spirit not ritual"
No. Tests-after answer "What does this do?" Tests-first answer "What should this do?"
Tests-after are biased by your implementation. You test what you built, not what's required. You verify remembered edge cases, not discovered ones.
Tests-first force edge case discovery before implementing. Tests-after verify you remembered everything (you didn't).
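A sketch of what that discovery looks like for the earlier retryOperation example; these are the kinds of cases you surface by asking "what should this do?" before implementing:

```typescript
// Edge cases found by thinking through the contract first.
test('rethrows the last error when every attempt fails', async () => {
  const alwaysFails = () => Promise.reject(new Error('still down'));
  await expect(retryOperation(alwaysFails)).rejects.toThrow('still down');
});

test('does not retry after a success', async () => {
  let calls = 0;
  const succeedsImmediately = async () => { calls++; return 'ok'; };
  await retryOperation(succeedsImmediately);
  expect(calls).toBe(1);
});
```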
30 minutes of tests after ≠ TDD. You get coverage, lose proof tests work.
| Excuse | Reality |
|---|---|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
| "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
| "Need to explore first" | Fine. Throw away exploration, start with TDD. |
| "Test hard = design unclear" | Listen to test. Hard to test = hard to use. |
| "TDD will slow me down" | TDD faster than debugging. Pragmatic = test-first. |
| "Manual test faster" | Manual doesn't prove edge cases. You'll re-test every change. |
| "Existing code has no tests" | You're improving it. Add tests for existing code. |
Whatever the rationalization, the counters are the same. Your imagination is not evidence: run the actual command and paste the output. Non-obvious bugs exist: provide output or you didn't verify. Anything less is a claim without evidence and violates verifying-before-completion. Show output.
All of these mean: delete the code. Start over with TDD.
Bug: Empty email accepted
**RED**

```typescript
test('rejects empty email', async () => {
  const result = await submitForm({ email: '' });
  expect(result.error).toBe('Email required');
});
```

**Verify RED**

```
$ npm test
FAIL: expected 'Email required', got undefined
```

**GREEN**

```typescript
function submitForm(data: { email?: string }) {
  if (!data.email?.trim()) {
    return { error: 'Email required' };
  }
  // ...
}
```

**Verify GREEN**

```
$ npm test
PASS
```

**REFACTOR** Extract validation for multiple fields if needed, as sketched below.
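One possible shape for that extraction; `requireField` is illustrative, not prescribed:

```typescript
// Hypothetical helper extracted during REFACTOR. Behavior is unchanged,
// so the existing test must stay green. A second required field would
// start with its own failing test, not with this refactor.
function requireField(value: string | undefined, label: string) {
  return value?.trim() ? null : { error: `${label} required` };
}

function submitForm(data: { email?: string }) {
  const error = requireField(data.email, 'Email');
  if (error) return error;
  // ...
}
```

Re-run npm test after extracting to confirm the suite stays green.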
BEFORE claiming work complete, certify TDD compliance:
For each new function/method implemented:
Requirements:
Why this matters:
Cross-reference: See verifying-before-completion skill for complete requirements.
Frontend testing has special cases. The big one: the first run of a snapshot or visual test generates the baseline instead of failing.
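With Jest snapshot tests, for instance, the first run writes the baseline and passes, so there is no conventional RED; review the generated snapshot by hand instead. `renderErrorBanner` is a hypothetical renderer used for illustration:

```typescript
// First run writes the baseline snapshot file and passes.
// Review that file manually in place of watching the test fail.
test('renders the error banner', () => {
  const html = renderErrorBanner('Email required');
  expect(html).toMatchSnapshot();
});
```

Every later run compares against the stored baseline and fails on any change, which restores the verification the RED phase normally provides.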
Copy the checklist in assets/workflow-checklist.md to track your progress.
For detailed information, see:
references/detailed-guide.md - Complete workflow details, examples, and troubleshooting