From rpikit
Enforces rigorous Test-Driven Development (TDD) via RED-GREEN-REFACTOR cycle. Requires failing tests before any production code when implementing features or fixing bugs.
npx claudepluginhub bostonaholic/rpikit --plugin rpikit

This skill uses the workspace's default tool permissions.
Write tests first, then implementation. No production code without a failing test.
TDD ensures code correctness through disciplined test-first development. Tests written after implementation prove nothing: they pass immediately, providing no evidence the code works correctly. This skill enforces the RED-GREEN-REFACTOR cycle as a non-negotiable practice.
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST.
If you write code before the test, you must delete it and start over. The test drives the implementation, not the other way around.
Write ONE minimal test that demonstrates the required behavior:
Mandatory verification:
Run the test. Confirm it fails for the RIGHT reason:
- Missing function/method (expected)
- Wrong return value (expected)
- NOT: Syntax error
- NOT: Import error
- NOT: Test framework misconfiguration
If the test passes immediately, you've written it wrong or the feature already exists. Investigate before proceeding.
Write the SIMPLEST code that makes the test pass:
Mandatory verification:
Run the test. Confirm:
- The new test passes
- All other tests still pass
- No new warnings or errors
Improve code quality while keeping tests green:
After each change:
Run all tests. They must still pass.
If any test fails, revert the refactor.
Requirement: Function that validates email addresses
RED:
Write test: expect(isValidEmail("user@example.com")).toBe(true)
Run test: FAIL - isValidEmail is not defined
Correct failure reason: function doesn't exist yet
GREEN:
Write: function isValidEmail(email) { return true; }
Run test: PASS
All tests pass
RED:
Write test: expect(isValidEmail("invalid")).toBe(false)
Run test: FAIL - Expected false, got true
Correct failure reason: no validation logic yet
GREEN:
Write: function isValidEmail(email) { return email.includes("@"); }
Run test: PASS
All tests pass
REFACTOR:
Extract the "@" check into a named helper
Run tests: PASS
Continue improving...
| Rationalization | Reality |
|---|---|
| "I'll write tests after" | Tests written after pass immediately, proving nothing |
| "This is too simple to test" | Simple things become complex. Test it. |
| "I know this works" | Prove it with a test |
| "Testing slows me down" | Debugging untested code takes longer |
| "I'll just try it manually" | Manual testing isn't repeatable or systematic |
| "The code is obvious" | Make it obviously correct with a test |
| "I already wrote the code" | Delete it. Start with the test. |
When executing plan steps that involve code:
If a plan step doesn't mention tests, add them anyway. TDD is not optional.
Test these explicitly:
Wrong: Write feature, then write tests to cover it
Right: Write test, watch it fail, then write feature

Wrong: "I'll add tests in a follow-up PR"
Right: Tests are part of the implementation, not separate

Wrong: Assume test will fail, write code immediately
Right: Run test, observe failure, understand why

Wrong: Mock every dependency to "isolate" the unit
Right: Never mock what you can use for real

Wrong: Test that internal method X calls internal method Y
Right: Test that public interface produces correct results
Before marking a step complete: