From cape
Scan a directory or module for untested behavior and create br tasks per gap found.

Use this skill whenever the user mentions test gaps, missing tests, untested code, test coverage for a specific scope, or wants to know what's not tested before a refactor or after shipping a feature. Triggers on any of these patterns: "find test gaps", "what's untested", "check test coverage", "improve tests for", "we need tests for", "what's not covered", "test completeness", "missing test cases", pointing at a directory and asking about its tests, mentioning they shipped something and wanting to verify test completeness, or preparing for a refactor and wanting safety nets.

This skill is specifically about FINDING gaps (static analysis, source-to-test mapping, bug risk assessment) — not about writing tests (use test-driven-development), auditing existing test quality (use analyze-tests), debugging test failures (use debug-issue), or running a test suite. Even if the request seems simple, use this skill — it provides structured br output with per-module tasks that plain analysis does not.
npx claudepluginhub sqve/cape --plugin cape

This skill uses the workspace's default tool permissions.
<skill_overview> Scan a user-specified scope for source code that lacks meaningful test coverage. Map source files to test files, read public APIs, and identify behaviors that ship untested. Output a br epic with one task per module that needs tests.
Core contract: every gap found must explain what bug it would catch. "This function has no test" is not a gap — "this function silently returns nil on malformed input and no test verifies the error path" is. Tests exist to catch bugs, not to hit coverage numbers. </skill_overview>
<rigidity_level> MEDIUM FREEDOM — The scope resolution and br output format are rigid. How deep the analysis goes within each module adapts to the module's complexity and risk. The value filter (skip trivial code) is non-negotiable. </rigidity_level>
<when_to_use>
Use when the user wants to find test gaps, untested code, or missing coverage in a specific scope.

Don't use for:
- Auditing existing test quality (cape:analyze-tests)
- Writing tests (cape:test-driven-development)
- Debugging test failures (cape:debug-issue)
</when_to_use>
<critical_rules>
</critical_rules>
<the_process>
Run cape check to establish a baseline. If exit code is non-zero, stop — do not proceed. Read
checkResults from JSON output and report entries where passed: false. Find gaps only when the
existing suite is green.
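The baseline gate can be sketched as a small parser over the JSON output. The `checkResults` array and `passed` field come from the description above; the `name` field is an assumption about the real schema — adjust to the actual output shape:

```typescript
// Sketch: pull failing entries out of `cape check` JSON output.
// `checkResults` / `passed` are described above; `name` is an assumed
// field of the real schema.
interface CheckResult {
  name: string;
  passed: boolean;
}

function failingChecks(json: string): string[] {
  const { checkResults } = JSON.parse(json) as { checkResults: CheckResult[] };
  return checkResults.filter((c) => !c.passed).map((c) => c.name);
}
```

If `failingChecks` returns anything, stop and report those entries instead of scanning for gaps.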
The user specifies what to analyze. If their message doesn't include a clear scope, ask:
What should I analyze? Examples:
- A directory: src/auth/
- A module: the payment processing module
- A feature area: everything related to session management
Once scope is clear, use code-review-graph to build structural context before reading files. See
resources/graph-tools-instructions.md for the full tool catalog and fallback behavior.
Graph queries to run (in order):
1. get_impact_radius_tool on the scope's production files to prioritize which gaps matter most — high-impact code (many callers) deserves more scrutiny
2. query_graph_tool with tests_for to find existing test-to-source mappings
3. semantic_search_nodes_tool to find related test utilities and fixtures

Present the mapping:
Scope: src/auth/
Test convention: co-located, .test.ts suffix
Framework: vitest
Source files:
src/auth/login.ts → src/auth/login.test.ts (exists)
src/auth/session.ts → src/auth/session.test.ts (exists)
src/auth/permissions.ts → (no test file)
src/auth/types.ts → (skipped: type definitions)
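For a co-located `.test.ts` convention like the one above, the mapping step reduces to a path rewrite — a sketch, assuming the convention detected during the scan (other projects may use `__tests__/` directories or `.spec.ts` suffixes):

```typescript
// Map a source path to its expected co-located test path.
// Returns null for files that are not mappable sources.
function testPathFor(sourcePath: string): string | null {
  if (sourcePath.endsWith(".test.ts")) return null; // already a test file
  if (!sourcePath.endsWith(".ts")) return null;     // not a TypeScript source
  return sourcePath.replace(/\.ts$/, ".test.ts");
}
```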
For each source file in scope, determine its test status. Work through files in parallel where possible using subagents.
Use isTrivialFile from @cape/cli to skip files that don't warrant tests (for example, type definitions and re-export barrels).
Explicitly note skipped files and why. The user should see what was excluded so they can override if they disagree.
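As a rough illustration of what the filter excludes — the real heuristics live in @cape/cli's isTrivialFile; this stand-in is hypothetical and covers only the two cases called out in this skill:

```typescript
// Hypothetical stand-in for @cape/cli's isTrivialFile: skips
// type-definition files and re-export barrels.
function looksTrivial(path: string, source: string): boolean {
  const base = path.split("/").pop() ?? "";
  if (base.endsWith(".d.ts") || base === "types.ts") return true; // type definitions
  const lines = source
    .split("\n")
    .map((l) => l.trim())
    .filter((l) => l !== "" && !l.startsWith("//"));
  // Re-export barrel: every meaningful line is `export ... from "..."`.
  return lines.length > 0 && lines.every((l) => l.startsWith("export") && /\bfrom ["']/.test(l));
}
```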
For each non-trivial source file:
Read the source code. Understand what it does — its public API, branching logic, error handling, side effects, and state changes.
Read the test file (if it exists). Understand what's actually tested — which functions, which inputs, which paths.
Identify untested behavior. Compare what the source does against what the tests verify. Focus on error paths, branching logic, edge-case inputs, side effects, and state changes.
Assess each gap's risk. Ask: "If this code broke, what would happen?" A bug in payment calculation is P1. A bug in a log formatter is P3. Skip gaps where the realistic bug risk is negligible.
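The risk buckets above can be made concrete as a small priority mapping — a sketch; the bucket names are illustrative, not a cape API:

```typescript
// Map assessed blast radius to a br task priority, mirroring the
// examples in the text: payment calculation → P1, log formatter → P3.
type BlastRadius = "security-or-money" | "user-facing" | "internal-or-cosmetic";

function priorityFor(blast: BlastRadius): 1 | 2 | 3 {
  switch (blast) {
    case "security-or-money": return 1;    // e.g. a bug in payment calculation
    case "user-facing": return 2;
    case "internal-or-cosmetic": return 3; // e.g. a bug in a log formatter
  }
}
```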
Every gap must answer: "What bug would this test catch?"
Report:
permissions.ts:hasPermission() — no test verifies behavior when the permission list is empty.
Bug risk: user with no permissions could be granted access if the empty-array check is wrong.
Don't report:
permissions.ts:hasPermission() — function has no test.
The first is actionable. The second is coverage porn.
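One way to enforce this contract is to make the bug risk a required field of the gap record, so a gap without one can't even be constructed — a sketch:

```typescript
// A gap is only reportable with a concrete bug risk attached.
interface Gap {
  file: string;
  symbol: string;
  untestedBehavior: string;
  bugRisk: string; // required: "what breaks if this code is wrong?"
}

function formatGap(g: Gap): string {
  return `${g.file}:${g.symbol}() — ${g.untestedBehavior}. Bug risk: ${g.bugRisk}.`;
}
```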
Group gaps by module. For each module, show:
### src/auth/permissions.ts — no test file
Public API: hasPermission(), getRoles(), validateScope()
Gaps:
1. hasPermission() — no test for empty permission list. Bug risk: unauthorized access.
2. hasPermission() — no test for unknown permission strings. Bug risk: silent pass-through.
3. validateScope() — no test for malformed scope strings. Bug risk: unhandled exception in
middleware.
Skipped: getRoles() — thin wrapper around database query, tested transitively through
integration tests.
### src/auth/login.ts — test file exists, partial coverage
Tested: login() happy path, invalid password
Gaps:
1. login() — no test for account lockout after failed attempts. Bug risk: brute force possible.
2. login() — no test for concurrent login from multiple sessions. Bug risk: session corruption.
Self-review before presenting: Walk through each gap and verify it has a realistic, non-trivial bug scenario. Drop gaps where the "bug risk" is speculative or the code is too simple to fail meaningfully. Promote any gaps you initially skipped if re-reading the source reveals real risk.
After presenting all modules, explicitly ask the user before proceeding:
Found [N] gaps across [M] modules. Create a br epic with tasks for these? I can also
drop any gaps you think aren't worth tracking.
STOP here. You MUST wait for explicit user approval before creating br items. Do not call
br create until the user responds. The user may want to drop gaps, reprioritize, or skip br
entirely.
After user approval, create a br epic and one task per module.
Create a br epic following this template:
!cat "${CLAUDE_SKILL_DIR}/../write-plan/resources/epic-template.md"
Populate Requirements from the identified gaps and bug risks, Anti-patterns from common test
anti-patterns observed in scope, and Success criteria from the gap closure targets. Use
--type epic --priority 2. Run cape br validate <epic-id> after creation.
br create "Add missing tests for [module name]" \
--type task \
--parent <epic-id> \
--priority <assessed-priority> \
--labels "find-test-gaps" \
--description "$(cat <<'EOF'
## Goal
Close [N] test gaps in [file path].
## Gaps
1. [function] — [untested behavior]. Bug risk: [what breaks].
2. [function] — [untested behavior]. Bug risk: [what breaks].
## Implementation
- Test file: [path to test file, existing or new]
- Framework: [framework]
- Follow [specific patterns from existing tests in the project]
## Success criteria
- [ ] [function]: test verifies [specific behavior]
- [ ] [function]: test verifies [specific behavior]
- [ ] All tests fail when the described behavior breaks (not tautological)
EOF
)"
cape br validate <task-id>
Present the created epic and tasks, then suggest cape:execute-plan to start implementing.
</the_process>
User asks to find test gaps in a specific directory

User: "Find test gaps in src/auth/"
Wrong: Run a coverage tool, report "src/auth/ has 67% line coverage", suggest adding tests until it hits 80%. This is coverage porn — it doesn't tell you what bugs the missing tests would catch.
Right: Map each source file in src/auth/ to its test file, read the untested public APIs, and report specific untested behaviors with the bug each missing test would catch.
User: "We just shipped the new webhook system in src/webhooks/. What's untested?"
Wrong: Report that 3 of 5 files have no test file. Suggest creating test files for all of them, including the types file and the re-export barrel. Create 5 tasks.
Right: Skip the types file and the re-export barrel, analyze the remaining files' behavior against their tests, and create tasks only for gaps with a concrete bug risk.
User: "Check if src/billing/invoice.ts has good test coverage"
Wrong: "invoice.test.ts exists and has 12 tests. Looks covered." Having a test file doesn't mean the important behavior is tested.
Right: Read invoice.ts and invoice.test.ts side by side, compare the source's branches and error paths against what the 12 tests actually verify, and report any untested behavior with its bug risk.
<key_principles>
</key_principles>