Use when implementing any feature or bugfix, before writing implementation code - write the test first, watch it fail, write minimal code to pass; ensures tests actually verify behavior by requiring failure first
When implementing features or bugfixes, write the test first and run it to verify it fails correctly. Then write minimal code to make it pass.
```
/plugin marketplace add samjhecht/wrangler
/plugin install wrangler@samjhecht-plugins
```
This skill inherits all available tools. When active, it can use any tool Claude has access to.
MANDATORY: When using this skill, announce it at the start with:
🚧 Using Skill: test-driven-development | [brief purpose based on context]
Example:
🚧 Using Skill: test-driven-development | [Provide context-specific example of what you're doing]
This creates an audit trail showing which skills were applied during the session.
Write the test first. Watch it fail. Write minimal code to pass.
Core principle: If you didn't watch the test fail, you don't know if it tests the right thing.
Violating the letter of the rules is violating the spirit of the rules.
Always:
Exceptions (ask your human partner):
Thinking "skip TDD just this once"? Stop. That's rationalization.
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
Write code before the test? Delete it. Start over.
No exceptions:
Implement fresh from tests. Period.
```dot
digraph tdd_cycle {
  rankdir=LR;
  red [label="RED\nWrite failing test", shape=box, style=filled, fillcolor="#ffcccc"];
  verify_red [label="Verify fails\ncorrectly", shape=diamond];
  green [label="GREEN\nMinimal code", shape=box, style=filled, fillcolor="#ccffcc"];
  verify_green [label="Verify passes\nAll green", shape=diamond];
  refactor [label="REFACTOR\nClean up", shape=box, style=filled, fillcolor="#ccccff"];
  next [label="Next", shape=ellipse];
  red -> verify_red;
  verify_red -> green [label="yes"];
  verify_red -> red [label="wrong\nfailure"];
  green -> verify_green;
  verify_green -> refactor [label="yes"];
  verify_green -> green [label="no"];
  refactor -> verify_green [label="stay\ngreen"];
  verify_green -> next;
  next -> red;
}
```
Write one minimal test showing what should happen.
<Good>
```typescript
test('retries failed operations 3 times', async () => {
  let attempts = 0;
  const operation = () => {
    attempts++;
    if (attempts < 3) throw new Error('fail');
    return 'success';
  };

  const result = await retryOperation(operation);

  expect(result).toBe('success');
  expect(attempts).toBe(3);
});
```
Clear name, tests real behavior, one thing
</Good>
<Bad>
```typescript
test('retry works', async () => {
const mock = jest.fn()
.mockRejectedValueOnce(new Error())
.mockRejectedValueOnce(new Error())
.mockResolvedValueOnce('success');
await retryOperation(mock);
expect(mock).toHaveBeenCalledTimes(3);
});
```
Vague name, tests mock not code
</Bad>
Requirements:
BEFORE proceeding to GREEN phase:
Execute test command:
```bash
npm test -- path/to/test.test.ts
# or
pytest path/to/test.py::test_function_name
# or
cargo test test_function_name
```
Copy full output showing failure
Verify failure message matches expected reason:
If output doesn't match expected failure: Fix test and re-run
YOU MUST include test output in your message:
Running RED phase verification:
```
$ npm test -- retry.test.ts

FAIL tests/retry.test.ts
  ✕ retries failed operations 3 times (2 ms)

  ● retries failed operations 3 times

    ReferenceError: retryOperation is not defined
      at Object.<anonymous> (tests/retry.test.ts:15:5)

Test Suites: 1 failed, 1 total
Tests: 1 failed, 1 total
Time: 0.234s

Exit code: 1
```
This is the expected failure - function doesn't exist yet.
Failure reason matches expectation: "retryOperation is not defined"
Proceeding to GREEN phase.
Claims without evidence violate verification-before-completion.
If you cannot provide this output, you have NOT completed the RED phase.
Write simplest code to pass the test.
<Good>
```typescript
async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
  for (let i = 0; i < 3; i++) {
    try {
      return await fn();
    } catch (e) {
      if (i === 2) throw e;
    }
  }
  throw new Error('unreachable');
}
```
Just enough to pass
</Good>

<Bad>
```typescript
async function retryOperation<T>(
  fn: () => Promise<T>,
  options?: {
    maxRetries?: number;
    backoff?: 'linear' | 'exponential';
    onRetry?: (attempt: number) => void;
  }
): Promise<T> {
  // YAGNI
}
```
Over-engineered
</Bad>

Don't add features, refactor other code, or "improve" beyond the test.
AFTER implementing minimal code:
Execute test command (same as RED):
```bash
npm test -- path/to/test.test.ts
```
Copy full output showing pass
Verify ALL of these: the new test passes, the full suite stays green, and the output shows no errors or warnings.
YOU MUST include test output in your message:
Running GREEN phase verification:
```
$ npm test -- retry.test.ts

PASS tests/retry.test.ts
  ✓ retries failed operations 3 times (145 ms)

Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Time: 0.189s

Exit code: 0
```
Test now passes. Proceeding to REFACTOR phase.
If any errors/warnings appear: Fix them before claiming GREEN phase complete.
Claims without evidence violate verification-before-completion.
If you cannot provide this output, you have NOT completed the GREEN phase.
After green only:
Keep tests green. Don't add behavior.
Next failing test for next feature.
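As an illustration, a minimal refactor sketch of the retryOperation from the GREEN phase: the retry count becomes a named constant and the rethrow is simplified, with behavior unchanged so the test stays green. `MAX_ATTEMPTS` is an illustrative name, not part of the original example.

```typescript
// Refactor sketch: same behavior as the GREEN version, so tests stay green.
const MAX_ATTEMPTS = 3;

async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e; // remember the failure; rethrow only after the final attempt
    }
  }
  throw lastError;
}
```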
| Quality | Good | Bad |
|---|---|---|
| Minimal | One thing. "and" in name? Split it. | test('validates email and domain and whitespace') |
| Clear | Name describes behavior | test('test1') |
| Shows intent | Demonstrates desired API | Obscures what code should do |
"I'll write tests after to verify it works"
Tests written after code pass immediately, and passing immediately proves nothing.
Test-first forces you to see the test fail, proving it actually tests something.
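To make this concrete, here is a sketch of a test written after the retryOperation above already existed. It passes on its first run, but it would pass just as readily against an implementation that never retries, so its immediate green proves nothing about the retry logic:

```typescript
// Written after the code: passes immediately, but would also pass
// if retryOperation never retried at all.
test('retryOperation returns the result', async () => {
  const result = await retryOperation(async () => 'success');
  expect(result).toBe('success');
});
```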
"I already manually tested all the edge cases"
Manual testing is ad-hoc. You think you tested everything, but there's no record and nothing to re-run.
Automated tests are systematic. They run the same way every time.
"Deleting X hours of work is wasteful"
Sunk cost fallacy. The time is already gone. Your choice now: delete and rebuild test-first, or keep code you can't trust.
The "waste" is keeping code you can't trust. Working code without real tests is technical debt.
"TDD is dogmatic, being pragmatic means adapting"
TDD IS pragmatic:
"Pragmatic" shortcuts = debugging in production = slower.
"Tests after achieve the same goals - it's spirit not ritual"
No. Tests-after answer "What does this do?" Tests-first answer "What should this do?"
Tests-after are biased by your implementation. You test what you built, not what's required. You verify remembered edge cases, not discovered ones.
Tests-first force edge case discovery before implementing. Tests-after verify you remembered everything (you didn't).
30 minutes of tests after ≠ TDD. You get coverage, lose proof tests work.
| Excuse | Reality |
|---|---|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
| "Already manually tested" | Ad-hoc ā systematic. No record, can't re-run. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
| "Need to explore first" | Fine. Throw away exploration, start with TDD. |
| "Test hard = design unclear" | Listen to test. Hard to test = hard to use. |
| "TDD will slow me down" | TDD faster than debugging. Pragmatic = test-first. |
| "Manual test faster" | Manual doesn't prove edge cases. You'll re-test every change. |
| "Existing code has no tests" | You're improving it. Add tests for existing code. |
Counter: Your imagination is not evidence. Run the actual command and paste the output.
Counter: Non-obvious bugs exist. Provide output or you didn't verify.
Counter: That's a claim without evidence. Violation of verification-before-completion. Show output.
All of these mean: Delete code. Start over with TDD.
Bug: Empty email accepted
RED
```typescript
test('rejects empty email', async () => {
  const result = await submitForm({ email: '' });
  expect(result.error).toBe('Email required');
});
```
Verify RED
```
$ npm test
FAIL: expected 'Email required', got undefined
```
GREEN
```typescript
function submitForm(data: FormData) {
  if (!data.email?.trim()) {
    return { error: 'Email required' };
  }
  // ...
}
```
Verify GREEN
```
$ npm test
PASS
```
REFACTOR
Extract validation for multiple fields if needed.
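One possible shape for that refactor, sketched under the assumption of a simple rule list (the validators array and FormData shape are illustrative, not prescribed):

```typescript
// Extracted validation: each rule returns an error message or null.
type FormData = { email?: string; name?: string };

const validators: Array<(data: FormData) => string | null> = [
  (data) => (!data.email?.trim() ? 'Email required' : null),
  (data) => (!data.name?.trim() ? 'Name required' : null),
];

function submitForm(data: FormData) {
  for (const validate of validators) {
    const error = validate(data);
    if (error) return { error };
  }
  // ...
}
```

Per the cycle above, each additional validator would still begin life as its own failing test.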
BEFORE claiming work complete, certify TDD compliance:
For each new function/method implemented:
Requirements:
Why this matters:
Cross-reference: See verification-before-completion skill for complete requirements.
Frontend testing has special cases:
First run generates baseline (special case):
How this follows TDD:
See frontend-visual-regression-testing skill for details.
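Assuming a Playwright setup, a minimal sketch of the baseline flow (route and screenshot name are illustrative): the first run fails because no baseline exists - the RED analogue - and records one; subsequent runs compare against it.

```typescript
import { test, expect } from '@playwright/test';

test('dashboard renders correctly', async ({ page }) => {
  await page.goto('/dashboard'); // illustrative route
  // First run: no baseline exists, so this fails and writes one (RED analogue).
  // Later runs: the rendered page is compared against the stored baseline.
  await expect(page).toHaveScreenshot('dashboard.png');
});
```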
Build incrementally (recommended):
Alternative: Skeleton approach
See frontend-e2e-user-journeys skill for detailed approaches.
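For the incremental approach, a hedged Playwright sketch (selectors, routes, and copy are assumptions): the journey grows one assertion at a time, each added as its own red-green cycle.

```typescript
import { test, expect } from '@playwright/test';

test('user signs up and reaches the dashboard', async ({ page }) => {
  await page.goto('/signup');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByRole('button', { name: 'Sign up' }).click();
  // Added as its own red-green cycle after the signup step was green:
  await expect(page).toHaveURL('/dashboard');
});
```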
Standard TDD applies:
No special cases for component tests - follow standard RED-GREEN-REFACTOR.
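For example, a standard RED-first component test sketch, assuming React Testing Library with jest-dom matchers (the Counter component and its copy are hypothetical - the test is written before the component exists):

```tsx
import { render, screen, fireEvent } from '@testing-library/react';
import { Counter } from './Counter'; // hypothetical component, not yet written

test('increments count when button is clicked', () => {
  render(<Counter />);
  fireEvent.click(screen.getByRole('button', { name: 'Increment' }));
  expect(screen.getByText('Count: 1')).toBeInTheDocument();
});
```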
Before marking work complete:
Can't check all boxes? You skipped TDD. Start over.
| Problem | Solution |
|---|---|
| Don't know how to test | Write wished-for API. Write assertion first. Ask your human partner. |
| Test too complicated | Design too complicated. Simplify interface. |
| Must mock everything | Code too coupled. Use dependency injection (see the sketch after this table). |
| Test setup huge | Extract helpers. Still complex? Simplify design. |
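To illustrate the dependency-injection row above, a minimal sketch (the rate limiter and its names are illustrative): passing the collaborator as a parameter lets tests supply a trivial fake instead of mocking module internals.

```typescript
// A clock dependency injected as a parameter instead of imported globally.
type Clock = () => number;

function makeRateLimiter(limit: number, windowMs: number, now: Clock = Date.now) {
  const timestamps: number[] = [];
  return function allow(): boolean {
    const cutoff = now() - windowMs;
    while (timestamps.length && timestamps[0] < cutoff) timestamps.shift();
    if (timestamps.length >= limit) return false;
    timestamps.push(now());
    return true;
  };
}

// In tests, inject a fake clock - no mocking framework required:
let t = 0;
const allow = makeRateLimiter(2, 1000, () => t);
// allow() -> true, allow() -> true, allow() -> false; advancing t past the window resets it.
```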
Bug found? Write failing test reproducing it. Follow TDD cycle. Test proves fix and prevents regression.
Never fix bugs without a test.
Production code → test exists and failed first
Otherwise → not TDD
No exceptions without your human partner's permission.
The evidence requirements in RED and GREEN phases integrate with verification-before-completion:
See verification-before-completion skill for complete certification requirements.