Drive behavioral changes with tests that are written before the production code they justify. Use this skill whenever implementing a feature, fixing a bug, adding behavior, or changing logic that automated tests can verify. Also use when another cape skill (fix-bug, execute-plan) says to follow TDD. Do NOT use for: verification testing (manual run-the-app checks), documentation changes, configuration changes, or refactoring that has no behavioral change.
<skill_overview> Let the next test define the next code change. Write a test that exposes the missing behavior, make it pass with the simplest change, then decide whether small cleanup would improve clarity. </skill_overview>
<rigidity_level> MEDIUM FREEDOM — Test-first and behavior-focused are rigid; test shape, scope, and whether cleanup is worthwhile adapt to context. </rigidity_level>
<when_to_use>
Use when implementing a feature, fixing a bug, adding behavior, or changing logic that automated tests can verify. Also use when another cape skill (fix-bug, execute-plan) says to follow TDD.

Don't use for:
- Verification testing (manual run-the-app checks)
- Documentation changes
- Configuration changes
- Refactoring with no behavioral change (use cape:refactor)
</when_to_use>
<critical_rules>
- Production code changes only after a test has failed for the missing behavior.
- Watch each new test fail for the right reason before making it pass.
- Tests assert behavior, not implementation details.
- If test infrastructure cannot run, stop and tell the user; never bootstrap a framework yourself.
</critical_rules>
<the_process>
1. Verify test infrastructure. Before changing code, confirm the project has working test infrastructure by running cape check or the relevant project test command. If tests cannot run, stop and tell the user; do not bootstrap a framework yourself.
2. Learn the conventions. Identify the test framework and conventions from existing test files and match them exactly: file naming, assertion style, test structure, and helper patterns.
3. Write one failing test. Pick the next missing behavior and write or update the smallest test that demonstrates it (sketched below). For bug fixes, the test should reproduce the bug before the fix.
4. Watch it fail for the right reason. Run the relevant test. The failure should show that the behavior is missing or incorrect, not that the test file is broken. If the failure comes from syntax, import, setup, or other pre-assertion problems, fix the test first. When helpful, dispatch cape:test-runner to run the focused test or capture failure output.
5. Make it pass. Write the simplest production change that satisfies the test. Re-run the relevant test, then the broader affected suite as needed to confirm nothing else broke.
6. Consider cleanup. Once the behavior is covered, look briefly for obvious cleanup: duplication, confusing names, or awkward structure introduced by the minimal change. If a small refactor would help, do it and re-run tests. If not, move on to the next behavior.
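A minimal sketch of steps 3 and 4, assuming pytest; UserService, DuplicateEmailError, and the app.users module are invented stand-ins for the project's real code, not part of this skill:

```python
# Sketch only: pytest is assumed; UserService and DuplicateEmailError
# are hypothetical names standing in for the project's actual code.
import pytest

from app.users import DuplicateEmailError, UserService  # hypothetical module


def test_create_rejects_duplicate_email():
    service = UserService()
    service.create("ada@example.com")

    # Run this before touching production code. It should fail here,
    # because the second create() currently succeeds: the failure
    # proves the missing behavior, not a broken test file.
    with pytest.raises(DuplicateEmailError):
        service.create("ada@example.com")
```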
</the_process>
<agent_references>
cape:test-runner protocol: Use when helpful for focused test runs, broader suite confirmation, or capturing detailed failure output without polluting context.
Pass: the test command and working directory. Expect back: pass/fail status with counts and complete failure output for any failing tests.
</agent_references>
Example: Adding duplicate-email validation to user creation

Wrong:
1. Add duplicate-email handling in the service
2. Write several tests for duplicates, invalid formats, and future edge cases
3. Run the suite once at the end
The code changed before any test proved the gap, and the tests were batched around multiple behaviors.
Right:
1. Add or update one test that shows duplicate emails are currently accepted
2. Run that test and confirm it fails for the missing duplicate check
3. Add the smallest guard in user creation to reject duplicates
4. Re-run the focused test, then the broader suite
5. If the new code is awkward, do a small cleanup; otherwise move to the next behavior
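A minimal sketch of step 3, assuming the same hypothetical UserService from the process sketch above; the in-memory set is an invented stand-in for whatever storage the real project uses:

```python
class DuplicateEmailError(Exception):
    """Raised when a new user's email already exists."""


class UserService:
    def __init__(self) -> None:
        self._emails: set[str] = set()

    def create(self, email: str) -> None:
        # The smallest guard that satisfies the failing test: no format
        # checks or other cases the current tests do not demand.
        if email in self._emails:
            raise DuplicateEmailError(email)
        self._emails.add(email)
```

With this in place, the focused test from the process sketch passes; re-run the broader suite before deciding whether any cleanup is worthwhile.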
<key_principles>
- Let the next test define the next code change: one behavior per cycle.
- A test must fail for the missing behavior before production code changes.
- The simplest change that passes is enough; cleanup is a separate, optional step.
- Match the project's existing test conventions exactly.
</key_principles>
<anti_batching>
The strongest pull is to write every foreseeable test up front, especially in a new file. Resist that. Start with the next behavior only, then let the result of that change inform what to test next.
Self-check before saving test code: Am I adding tests for the current behavior, or am I front-loading future cases because I can already imagine them?
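For contrast, two drafts of the same new test file (all names invented for illustration):

```python
# Draft to resist: front-loaded with imagined future cases.
#
#   def test_rejects_duplicate_email(): ...
#   def test_rejects_invalid_format(): ...       # no driving failure yet
#   def test_rejects_disposable_domains(): ...   # no driving failure yet

# Draft to write: the next behavior only.
def test_rejects_duplicate_email():
    ...  # assert the one behavior being added now; expand only after it passes
```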
</anti_batching>