Use when implementing any feature or bugfix, before writing implementation code
Enforces test-driven development by writing failing tests before any implementation code.
Install: `npx claudepluginhub harmaalbers/claude-requirements-framework`
This skill inherits all available tools. When active, it can use any tool Claude has access to.
References: references/testing-anti-patterns.md
Write the test first. Watch it fail. Write minimal code to pass.
Core principle: If you didn't watch the test fail, you don't know if it tests the right thing.
Violating the letter of the rules is violating the spirit of the rules.
Always:
Exceptions (ask your human partner):
Thinking "skip TDD just this once"? Stop. That's rationalization.
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
Write code before the test? Delete it. Start over.
No exceptions:
Implement fresh from tests. Period.
```dot
digraph tdd_cycle {
    rankdir=LR;
    red [label="RED\nWrite failing test", shape=box, style=filled, fillcolor="#ffcccc"];
    verify_red [label="Verify fails\ncorrectly", shape=diamond];
    green [label="GREEN\nMinimal code", shape=box, style=filled, fillcolor="#ccffcc"];
    verify_green [label="Verify passes\nAll green", shape=diamond];
    refactor [label="REFACTOR\nClean up", shape=box, style=filled, fillcolor="#ccccff"];
    next [label="Next", shape=ellipse];

    red -> verify_red;
    verify_red -> green [label="yes"];
    verify_red -> red [label="wrong\nfailure"];
    green -> verify_green;
    verify_green -> refactor [label="yes"];
    verify_green -> green [label="no"];
    refactor -> verify_green [label="stay\ngreen"];
    verify_green -> next;
    next -> red;
}
```
Write one minimal test showing what should happen.
<Good>
```python
def test_retries_failed_operations_3_times():
    attempts = 0

    def operation():
        nonlocal attempts
        attempts += 1
        if attempts < 3:
            raise RuntimeError("fail")
        return "success"

    result = retry_operation(operation)
    assert result == "success"
    assert attempts == 3
```
Clear name, tests real behavior, one thing
</Good>
<Bad>
```python
def test_retry_works(mocker):
    mock_op = mocker.Mock(side_effect=[RuntimeError(), RuntimeError(), "success"])
    retry_operation(mock_op)
    assert mock_op.call_count == 3
```
Vague name, tests mock not code
</Bad>
Requirements:
MANDATORY. Never skip.
pytest tests/path/to/test.py::test_name -v
Confirm:
Test passes? You're testing existing behavior. Fix test.
Test errors? Fix error, re-run until it fails correctly.
Write simplest code to pass the test.
<Good>
```python
def retry_operation(fn, max_retries=3):
    for i in range(max_retries):
        try:
            return fn()
        except Exception:
            if i == max_retries - 1:
                raise
```
Just enough to pass
</Good>
<Bad>
```python
def retry_operation(
    fn,
    max_retries=3,
    backoff="linear",
    on_retry=None,
    timeout=None,
):
    # YAGNI: over-engineered
    ...
```
Over-engineered
</Bad>
Don't add features, refactor other code, or "improve" beyond the test.
MANDATORY.
pytest tests/path/to/test.py -v
Confirm:
Test fails? Fix code, not test.
Other tests fail? Fix now.
After green only:
Keep tests green. Don't add behavior.
Next failing test for next feature.
| Quality | Good | Bad |
|---|---|---|
| Minimal | One thing. "and" in name? Split it (sketch after this table). | test_validates_email_and_domain_and_whitespace |
| Clear | Name describes behavior | test_test1 |
| Shows intent | Demonstrates desired API | Obscures what code should do |
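Splitting the bad name above might look like this: a minimal sketch, assuming the `validate_email` shown later in this document, which returns an error string for bad input and `None` for valid input.
```python
# One behavior per test, instead of test_validates_email_and_domain_and_whitespace.

def test_rejects_malformed_email():
    assert validate_email("invalid") == "Invalid email format"

def test_accepts_valid_email():
    assert validate_email("valid@example.com") is None

def test_strips_surrounding_whitespace():
    # Hypothetical behavior, included only to illustrate the split.
    assert validate_email("  valid@example.com  ") is None
```
Each test now fails for exactly one reason, and its name tells you which.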
"I'll write tests after to verify it works"
Tests written after code pass immediately. Passing immediately proves nothing: the test may mirror the implementation, assert too little, or pass even when the code is broken.
Test-first forces you to see the test fail, proving it actually tests something.
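A hypothetical sketch of the failure mode (both names are invented for illustration): a test written after the fact, shaped around code that already "works", can pass against a bug.
```python
def discount(price):
    # Bug: supposed to apply 10% off, returns the price unchanged.
    return price

def test_discount():
    # Written after the code, never seen failing: it passes against
    # the buggy implementation above, so it verifies nothing.
    assert discount(100) >= 0
```
Written first, this test would have been forced to state the expected value (`assert discount(100) == 90`) and fail until the code earned the pass.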
"I already manually tested all the edge cases"
Manual testing is ad hoc. You think you tested everything, but there is no record and nothing you can re-run.
Automated tests are systematic. They run the same way every time.
"Deleting X hours of work is wasteful"
Sunk cost fallacy. The time is already gone. Your choice now is between keeping unverified code and rebuilding it on tests you can trust.
The "waste" is keeping code you can't trust. Working code without real tests is technical debt.
"TDD is dogmatic, being pragmatic means adapting"
TDD IS pragmatic: test-first is faster than debugging after the fact.
"Pragmatic" shortcuts = debugging in production = slower.
| Excuse | Reality |
|---|---|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
| "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
| "Need to explore first" | Fine. Throw away exploration, start with TDD. |
| "Test hard = design unclear" | Listen to test. Hard to test = hard to use. |
| "TDD will slow me down" | TDD faster than debugging. Pragmatic = test-first. |
| "Manual test faster" | Manual doesn't prove edge cases. You'll re-test every change. |
| "Existing code has no tests" | You're improving it. Add tests for existing code. |
All of these mean: Delete code. Start over with TDD.
Bug: Empty email accepted
RED
```python
def test_rejects_empty_email():
    result = submit_form({"email": ""})
    assert result["error"] == "Email required"
```
Verify RED
```
$ pytest tests/test_form.py::test_rejects_empty_email -v
FAIL: AssertionError: assert None == 'Email required'
```
GREEN
```python
def submit_form(data: dict) -> dict:
    if not data.get("email", "").strip():
        return {"error": "Email required"}
    # ...
```
Verify GREEN
```
$ pytest tests/test_form.py -v
PASS
```
REFACTOR
Extract validation for multiple fields if needed.
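A minimal sketch of what that extraction might look like; `require_field` is a hypothetical helper, and the tests must stay green throughout:
```python
def require_field(data: dict, field: str, message: str):
    # Returns an error dict when the field is missing or blank, else None.
    if not data.get(field, "").strip():
        return {"error": message}
    return None

def submit_form(data: dict) -> dict:
    error = require_field(data, "email", "Email required")
    if error:
        return error
    # ...
```
No new behavior was added; the existing test still proves the same thing.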
pytest fixtures for shared test setup:
```python
@pytest.fixture
def sample_config():
    return {"key": "value", "nested": {"a": 1}}
```
pytest.mark.parametrize for testing multiple inputs:
```python
@pytest.mark.parametrize("email,expected", [
    ("", "Email required"),
    ("invalid", "Invalid email format"),
    ("valid@example.com", None),
])
def test_email_validation(email, expected):
    result = validate_email(email)
    assert result == expected
```
pytest.raises for testing exceptions:
```python
def test_invalid_config_raises():
    with pytest.raises(ValueError, match="missing required field"):
        load_config({})
```
Before marking work complete:
Can't check all boxes? You skipped TDD. Start over.
| Problem | Solution |
|---|---|
| Don't know how to test | Write wished-for API. Write assertion first. Ask your human partner. |
| Test too complicated | Design too complicated. Simplify interface. |
| Must mock everything | Code too coupled. Use dependency injection (see sketch after this table). |
| Test setup huge | Extract helpers. Still complex? Simplify design. |
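A minimal sketch of dependency injection, with hypothetical names: pass the collaborator in as a parameter so tests substitute a plain value instead of patching a module.
```python
import time

def make_timestamped_record(data: dict, now=time.time) -> dict:
    # The clock is injected; production code uses the default,
    # tests pass a fake function.
    return {**data, "created_at": now()}

def test_record_gets_injected_timestamp():
    record = make_timestamped_record({"id": 1}, now=lambda: 1234.0)
    assert record["created_at"] == 1234.0
```
No mocking library needed: the seam is part of the function's signature.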
Bug found? Write failing test reproducing it. Follow TDD cycle. Test proves fix and prevents regression.
Never fix bugs without a test.
When adding mocks or test utilities, read @testing-anti-patterns.md in this skill's references to avoid common pitfalls.
Production code → test exists and failed first
Otherwise → not TDD
No exceptions without your human partner's permission.
When this skill completes, it auto-satisfies the tdd_planned requirement. This means tools that require TDD planning will no longer be blocked.