From tdd-master
This skill should be used when the user asks to "write tests", "add tests", "create test", "implement feature", "fix bug", "TDD", "test-driven", "reproduce bug", "write failing test", "Red-Green-Refactor", or when implementing any new functionality that requires testing. Provides TDD methodology based on Kent Beck and Uncle Bob principles.
Install with `npx claudepluginhub spumer/i-m-senior-developer --plugin tdd-master`. This skill uses the workspace's default tool permissions.
Test-Driven Development methodology for writing reliable, maintainable code.
Other components in the i-m-senior-developer plugin:

- Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
- Executes implementation plans in the current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance, then code quality.
- Dispatches parallel agents to independently tackle two or more tasks, such as separate test failures or subsystems, without shared state or dependencies.
This skill activates automatically when the user's request matches the trigger phrases listed above.
Before coding, create a list of test scenarios:

```markdown
## Test List for [Feature]
- [ ] Happy path: main scenario
- [ ] Edge case: empty input
- [ ] Edge case: boundary values
- [ ] Error case: invalid input
- [ ] Error case: external service failure
```
Write a test that defines the expected behavior:

```python
from decimal import Decimal

def test__calculate_discount__order_over_1000__returns_10_percent():
    # Arrange
    order = Order(items=[Item(price=1500)])
    # Act
    discount = calculate_discount(order)
    # Assert
    assert discount == Decimal('150')
```
CRITICAL: Predict HOW the test will fail before running.
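Predicting the failure mode can itself be checked: before `calculate_discount` is written, referencing it should raise `NameError` (or `ImportError` when importing a missing module), not an assertion error. A minimal sketch of that check:

```python
# Red step: calculate_discount does not exist yet, so the predicted
# failure is a NameError rather than a failed assertion.
try:
    calculate_discount  # referenced before any implementation exists
    outcome = 'unexpected: implementation already present'
except NameError:
    outcome = 'prediction confirmed: NameError'
print(outcome)
```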
Green: write the MINIMUM code to make the test pass, and no more.
Refactor: after the test passes, improve the design while keeping all tests green.
Repeat: take the next test from the list until the list is complete.
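For the discount example above, the Green step might look like this minimal sketch (`Order`, `Item`, and the 10%-over-1000 rule are assumptions carried over from the Red test):

```python
from dataclasses import dataclass, field
from decimal import Decimal

@dataclass
class Item:
    price: int

@dataclass
class Order:
    items: list = field(default_factory=list)

def calculate_discount(order):
    # Minimum code to pass the one failing test: a flat 10% discount.
    # The 'over 1000' threshold is NOT implemented yet; the next test
    # on the list will force it.
    total = sum(item.price for item in order.items)
    return Decimal(total) / 10

# The Red test now goes green:
assert calculate_discount(Order(items=[Item(price=1500)])) == Decimal('150')
```

The unconditional 10% is deliberately the minimum that passes the single existing test; later tests on the list drive out the threshold logic.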
Uncle Bob's Three Laws of TDD:

| # | Law |
|---|---|
| 1 | Cannot write production code without a failing test |
| 2 | Cannot write more of a test than is sufficient to fail |
| 3 | Cannot write more production code than is sufficient to pass |
FIRST principles:

| Principle | Description |
|---|---|
| Fast | Tests run quickly |
| Independent | No test dependencies |
| Repeatable | Same result everywhere |
| Self-validating | Clear pass/fail |
| Timely | Written BEFORE code |
| Role | Defaults | Example |
|---|---|---|
| FK dependency | Minimal for validity | `Campaign(name='Test', status=ACTIVE)` |
| Entry point | Maximal for usefulness | `create_applicant(with_tilda=True, ...)` |
| Edge case | Explicitly marked | `create_applicant(with_tilda=False)` |
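A sketch of what a context-adaptive factory following these defaults might look like (`create_applicant` and `with_tilda` are illustrative names taken from the table, not a real API):

```python
def create_applicant(with_tilda=True, name='Test', **overrides):
    # Entry-point factory: maximal, useful defaults keep happy-path
    # tests short; edge cases opt out explicitly (with_tilda=False),
    # so the deviation is visible at the call site.
    return {'name': name, 'tilda_account': with_tilda, **overrides}

# Happy path leans on the defaults...
assert create_applicant()['tilda_account'] is True
# ...while the edge case marks its deviation explicitly.
assert create_applicant(with_tilda=False)['tilda_account'] is False
```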
```python
# Pattern: test__{what}__{scenario}__{outcome}
def test__calculate_discount__order_over_1000__returns_10_percent(): ...
def test__calculate_discount__empty_order__returns_zero(): ...
def test__process_payment__timeout__raises_error(): ...
```
```python
def test_get_customer__exists(apibank):
    # Setup state
    apibank.reg_customer({'code': '123', 'fullName': 'Test'})
    # Execute
    result = service.get_customer('123')
    # Verify
    assert result.full_name == 'Test'
```
```python
def test_threshold__fail(fns_admin, create_income_request):
    # Enqueue error response
    request = create_income_request(protocol.SmzPlatformError(...))
    # Execute
    tasks.register_income()
    # Verify
    request.refresh_from_db()
    assert request.state == IncomeRequestState.ERROR
```
One test = One business flow from input to output.
```python
# GOOD: Full flow
def test__welcome_email__full_flow():
    applicant = create_applicant(...)  # Input
    outbox = create_email_outbox(...)  # Step 1
    process_email_outbox()             # Step 2
    assert len(mail.outbox) == 1       # Output

# BAD: Separate tests for each step
def test__create_outbox(): ...
def test__process_outbox(): ...
```
```python
from testing import AnyDict, ReStr, UnorderedList

# Partial dict matching
assert response == AnyDict({
    'status': 'success',
    'id': ReStr(r'[a-f0-9-]{36}'),
})

# Order-independent list
assert events == UnorderedList([event1, event2])
```
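`AnyDict`, `ReStr`, and `UnorderedList` come from a project-local `testing` module, not from pytest itself. A minimal sketch of how such matchers can be implemented via `__eq__` (names assumed from the example above):

```python
import re

class ReStr:
    """Equal to any string that fully matches the given regex."""
    def __init__(self, pattern):
        self.pattern = re.compile(pattern)
    def __eq__(self, other):
        return isinstance(other, str) and bool(self.pattern.fullmatch(other))

class AnyDict(dict):
    """Equal to any dict containing at least these key/value pairs."""
    def __eq__(self, other):
        return all(k in other and other[k] == v for k, v in self.items())

class UnorderedList(list):
    """Equal to any list with the same elements in any order."""
    def __eq__(self, other):
        if len(other) != len(self):
            return False
        remaining = list(other)
        for item in self:
            if item not in remaining:
                return False
            remaining.remove(item)
        return True

# Because the subclasses define __eq__, Python tries their comparison
# first even when the plain dict/list is on the left-hand side.
assert {'status': 'ok', 'id': 'abc'} == AnyDict({'status': 'ok'})
assert 'deadbeef' == ReStr(r'[a-f0-9]+')
assert [2, 1] == UnorderedList([1, 2])
```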
```python
# BAD
def test_liar():
    assert True  # Useless

# BAD: Exception can be swallowed
if not event.wait(timeout=2):
    raise TimeoutError('...')

# GOOD: pytest.fail() cannot be caught
if not event.wait(timeout=2):
    pytest.fail('Test timeout')
```
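Why `pytest.fail()` cannot be caught: it raises pytest's `Failed` outcome, which derives from `BaseException`, so a broad `except Exception` in the code under test cannot swallow it. A pure-Python sketch of the mechanism, using a stand-in class rather than the real pytest internals:

```python
class Failed(BaseException):
    """Stand-in for pytest's fail outcome, which derives from BaseException."""

def run_with_broad_handler(exc_type):
    # Simulates code under test that swallows every ordinary exception.
    try:
        raise exc_type('test timeout')
    except Exception:
        return 'swallowed'
    # BaseException-derived outcomes propagate past `except Exception`.

# TimeoutError derives from Exception, so the broad handler eats it:
assert run_with_broad_handler(TimeoutError) == 'swallowed'

# The BaseException-derived outcome escapes and reaches the test runner:
try:
    run_with_broad_handler(Failed)
except Failed:
    print('fail outcome propagated')
```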
"The most common error is skipping the third step. Refactoring is critical." - Martin Fowler
CRITICAL: Before writing tests, detect which frameworks the project uses and load the corresponding reference documentation.
Detection signals:

- Dependency files: `pyproject.toml`, `requirements.txt`, `setup.cfg`, `Pipfile`
- Imports in code: `import pytest`, `import django`, `import unittest`
- Config files: `pytest.ini`, `conftest.py`, `manage.py`, `settings.py`

| Detected | Action | Reference File |
|---|---|---|
| `pytest` in dependencies | Read pytest patterns | references/frameworks/pytest.md |
| `django` in dependencies | Read Django patterns | references/frameworks/django.md |
| Both pytest + Django | Read both files | Both files above |
| Neither detected | Use TDD_GUIDE only | references/TDD_GUIDE.md |
Pytest project:

- `conftest.py` exists
- `pytest.ini` or `[tool.pytest]` in `pyproject.toml`
- `pytest` in dependencies
- `def test_...` without a class

Django project:

- `manage.py` exists
- `settings.py` or `DJANGO_SETTINGS_MODULE` env
- `django` in dependencies
- `pytest-django` in dependencies
- `@pytest.mark.django_db` in test files

Reading order:

1. ALWAYS read: references/TDD_GUIDE.md (core methodology)
2. ALWAYS read: references/P0_DEFAULT_CONTEXT.md (context-adaptive defaults)
3. IF pytest detected: references/frameworks/pytest.md
4. IF django detected: references/frameworks/django.md
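The detection heuristics above might be sketched as follows (the function and its exact logic are an illustration, not the skill's actual implementation):

```python
from pathlib import Path

def detect_frameworks(project_root='.'):
    """Return the set of detected frameworks: subset of {'pytest', 'django'}."""
    root = Path(project_root)
    deps = ''
    # Concatenate whichever dependency files exist.
    for name in ('pyproject.toml', 'requirements.txt', 'setup.cfg', 'Pipfile'):
        path = root / name
        if path.exists():
            deps += path.read_text()
    detected = set()
    if 'pytest' in deps or (root / 'pytest.ini').exists() or (root / 'conftest.py').exists():
        detected.add('pytest')
    if 'django' in deps or (root / 'manage.py').exists():
        detected.add('django')
    return detected
```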
- references/TDD_GUIDE.md - Core TDD methodology (Kent Beck, Uncle Bob, FIRST principles, anti-patterns)
- references/P0_DEFAULT_CONTEXT.md - Context-adaptive default values (FK=minimal, entry=maximal, edge=explicit)
- references/frameworks/pytest.md - Pytest patterns: fixtures, markers, assertion helpers, mocking, conftest organization
- references/frameworks/django.md - Django patterns: django_db, factory_boy, background tasks, signals, ESB events, timezone testing

Other agents can call tdd-master when tests are needed.
Key rule: use `pytest.fail()`, not `raise Exception`.