Measure and analyze test coverage gaps. Use when identifying untested code paths and assessing coverage sufficiency.
From test-strategy. Install with `npx claudepluginhub sethdford/claude-skills --plugin qa-test-strategy`. This skill uses the workspace's default tool permissions.
Systematically measure code and requirement coverage to identify gaps and validate test effectiveness.
You are a senior QA engineer analyzing test coverage for $ARGUMENTS. Coverage analysis reveals which code paths and requirements are tested and which remain untested.
Select Coverage Dimensions: Choose applicable coverage metrics based on testing level: statement coverage for unit tests (baseline), branch coverage for integration tests, requirement coverage for system tests. Avoid pursuing path coverage in large systems (combinatorial explosion).
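To see why branch coverage is a stronger baseline than statement coverage, consider a minimal sketch (the `apply_discount` function is a hypothetical example, not from any real codebase): a single test can execute every statement yet leave one branch outcome untested.

```python
def apply_discount(price, is_member):
    """Apply a 10% member discount."""
    total = price
    if is_member:            # branch with two outcomes: taken / not taken
        total = price * 0.9
    return total

# This one call executes every statement (100% statement coverage),
# but the is_member=False outcome of the branch is never exercised.
assert apply_discount(100, True) == 90.0

# Branch coverage demands the untaken outcome as well:
assert apply_discount(100, False) == 100
```

This is the general pattern: statement coverage only requires that a line run once, while branch coverage requires each decision to evaluate both ways.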
Establish Coverage Baselines: Measure current coverage before adding new tests. Identify baseline gaps: untested error paths, exception handlers, rarely-used features. Document coverage by module/component.
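Documenting the baseline by module can be a simple roll-up of per-file numbers. A sketch, assuming hypothetical file paths and counts such as a coverage tool might export in a JSON report:

```python
from collections import defaultdict

# Hypothetical per-file line coverage as (covered, total) pairs.
file_coverage = {
    "billing/invoice.py": (180, 200),
    "billing/tax.py": (40, 100),
    "auth/login.py": (95, 100),
    "auth/errors.py": (10, 50),   # error handlers: a typical baseline gap
}

def baseline_by_module(files):
    """Roll file-level line counts up to module-level percentages."""
    totals = defaultdict(lambda: [0, 0])
    for path, (covered, total) in files.items():
        module = path.split("/")[0]
        totals[module][0] += covered
        totals[module][1] += total
    return {m: round(c / t * 100, 1) for m, (c, t) in totals.items()}

baseline = baseline_by_module(file_coverage)
# billing: 220/300 lines, auth: 105/150 lines
```

Recording this snapshot before adding tests gives each module a concrete number to improve against.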
Identify Coverage Gaps: Analyze gaps in coverage: code paths not exercised, business scenarios not tested, error conditions not validated. Prioritize gaps by risk: high-risk code (security, critical paths) receives attention before low-risk code.
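Risk-first prioritization can be expressed as a simple sort key. A sketch with hypothetical gap records (paths and risk tiers are illustrative):

```python
# Each gap: uncovered line count plus a risk tier (3 = security/critical).
gaps = [
    {"path": "report/export.py", "uncovered": 40, "risk": 1},
    {"path": "auth/token.py",    "uncovered": 12, "risk": 3},
    {"path": "payment/charge.py", "uncovered": 8, "risk": 3},
]

def prioritize(gaps):
    # Higher risk dominates; within a tier, larger gaps come first.
    return sorted(gaps, key=lambda g: (-g["risk"], -g["uncovered"]))

ordered = [g["path"] for g in prioritize(gaps)]
# Security-critical files surface ahead of the larger low-risk gap.
```

The point of the two-part key is that a small gap in authentication code outranks a large gap in a low-risk reporting module.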
Design Gap-Closing Tests: For each significant gap, design specific tests that exercise the missing paths or scenarios. Ensure tests are meaningful, not mere coverage-chasing stubs that execute code without validating its behavior.
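The difference between a coverage-chasing stub and a meaningful gap-closing test is the assertion on the error path. A sketch, with a hypothetical `parse_port` validator:

```python
def parse_port(value):
    """Parse a TCP port string; raise ValueError for invalid input."""
    port = int(value)               # raises ValueError for non-numeric input
    if not 0 < port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_rejects_out_of_range_port():
    # A stub would just call parse_port("99999") inside try/except and
    # assert nothing; this pins the actual contract of the error path.
    try:
        parse_port("99999")
    except ValueError as e:
        assert "out of range" in str(e)
    else:
        raise AssertionError("expected ValueError")

test_rejects_out_of_range_port()
```

Both versions produce identical coverage numbers; only the second one would catch a regression that silently clamps the port instead of rejecting it.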
Monitor Coverage Trends: Track coverage metrics over time. Expect coverage to increase incrementally; sudden drops indicate missing tests for new code. Use coverage as a leading indicator of testing adequacy, not a final verdict.
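One way to turn the trend into an automated signal is a "ratchet" check that fails when coverage falls meaningfully below its recent best. A minimal sketch, assuming a hypothetical history of (commit, percent) pairs:

```python
# Hypothetical coverage history, oldest first.
history = [("a1f", 71.2), ("b4c", 72.0), ("c9d", 72.4), ("d0e", 69.8)]

def ratchet_ok(history, tolerance=0.5):
    """Flag a sudden drop: latest coverage must stay within `tolerance`
    of the best value seen before it."""
    best = max(pct for _, pct in history[:-1])
    latest = history[-1][1]
    return latest >= best - tolerance

# The drop from 72.4 to 69.8 exceeds the tolerance, so the gate fails,
# flagging new code that likely shipped without tests.
```

Keeping the tolerance small but nonzero avoids failing builds on rounding noise while still catching genuine drops.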
Coverage-driven development — Targeting high coverage without testing meaningful behavior leads to brittle tests that pass despite defects. Guard: Measure coverage, but judge test effectiveness by defect detection, not coverage percentage alone.
False coverage — High coverage that ignores negative testing or error paths provides false confidence. Guard: Use mutation testing to verify that tests actually validate behavior; ensure error paths have specific assertions.
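The mutation-testing idea can be shown in miniature (real tools such as mutmut automate this across a codebase): mutate an operator and check whether the tests notice. The functions below are illustrative, not from any real suite.

```python
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b          # '+' mutated to '-'

def weak_test(fn):
    fn(0, 0)              # executes the line: full coverage, no assertion
    return True           # "passes" no matter what fn computes

def strong_test(fn):
    return fn(2, 3) == 5  # asserts the actual behavior

# The weak test survives the mutant (false confidence); the strong
# test kills it while still passing on the original.
```

A surviving mutant is direct evidence that a covered line is not actually validated, which is exactly the false-coverage failure mode described above.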
Coverage without context — Comparing coverage metrics across projects ignores risk profiles and quality goals. Guard: Set coverage targets based on risk assessment and project context, not arbitrary percentages (e.g., "80% is good").