Analyze test coverage and identify gaps with actionable recommendations
Analyzes test coverage by running real tools and identifies gaps with specific test recommendations.
/plugin marketplace add sequenzia/agent-alchemy
/plugin install agent-alchemy-tdd-tools@agent-alchemy

This skill is limited to using the following tools:
- references/coverage-patterns.md

Analyze test coverage for any project by running real coverage tools, parsing results into structured reports, and identifying gaps with actionable recommendations. Optionally maps coverage against spec acceptance criteria to find untested requirements.
CRITICAL: Complete ALL 6 phases. The workflow is not complete until Phase 6: Suggest Next Steps is finished. After completing each phase, immediately proceed to the next phase without waiting for user prompts.
IMPORTANT: You MUST use the AskUserQuestion tool for ALL questions to the user. Never ask questions through regular text output.
Text output should only be used for presenting information, summaries, reports, and progress updates.
NEVER do this (asking via text output):
Which package should I measure coverage for?
1. src/
2. lib/
ALWAYS do this (using AskUserQuestion tool):
AskUserQuestion:
  questions:
    - header: "Coverage Target"
      question: "Which package or directory should coverage be measured for?"
      options:
        - label: "src/"
          description: "Main source directory"
        - label: "lib/"
          description: "Library directory"
      multiSelect: false
Goal: Auto-detect the project's test framework, coverage tool, and source package.
Analyze $ARGUMENTS to extract optional parameters:
- `--spec <path>`: Spec file for acceptance criteria mapping
- `--threshold <percentage>`: Coverage threshold override (default: 80%)

Determine the project type by checking for language-specific files:
Python detection (check in order):
- pyproject.toml exists -> Python project
- setup.py or setup.cfg exists -> Python project
- requirements.txt exists -> Python project
- *.py files in source directories -> Python project

TypeScript/JavaScript detection (check in order):
- package.json exists -> TypeScript/JavaScript project
- tsconfig.json exists -> TypeScript project
- *.ts or *.tsx files in source directories -> TypeScript project

If both Python and TypeScript indicators are found, prefer the one with more signals. If ambiguous, prompt the user:
AskUserQuestion:
  questions:
    - header: "Project Type"
      question: "Both Python and TypeScript project files were detected. Which should be analyzed for coverage?"
      options:
        - label: "Python"
          description: "Analyze Python coverage with pytest-cov"
        - label: "TypeScript"
          description: "Analyze TypeScript coverage with istanbul/c8"
      multiSelect: false
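The detection heuristic above can be sketched in Python. This is an illustrative sketch, not part of the skill itself: `detect_project_type` is a hypothetical helper, and the marker files and ordering follow the lists above.

```python
from pathlib import Path

# Marker files checked in the order listed above.
PYTHON_MARKERS = ["pyproject.toml", "setup.py", "setup.cfg", "requirements.txt"]
TS_MARKERS = ["package.json", "tsconfig.json"]

def detect_project_type(root: str) -> str:
    """Return 'python', 'typescript', or 'ambiguous' based on marker-file signals."""
    base = Path(root)
    py_signals = sum(1 for m in PYTHON_MARKERS if (base / m).exists())
    py_signals += 1 if any(base.glob("**/*.py")) else 0
    ts_signals = sum(1 for m in TS_MARKERS if (base / m).exists())
    ts_signals += 1 if any(base.glob("**/*.ts")) else 0
    if py_signals and ts_signals and py_signals == ts_signals:
        return "ambiguous"  # equal signals for both -> prompt the user
    if py_signals >= ts_signals and py_signals:
        return "python"
    return "typescript" if ts_signals else "ambiguous"
```

An "ambiguous" result corresponds to the AskUserQuestion prompt above.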
Read the coverage patterns reference for framework-specific detection:
Read: ${CLAUDE_PLUGIN_ROOT}/skills/analyze-coverage/references/coverage-patterns.md
Python: pytest-cov
Check for pytest-cov in the following locations (stop at first match):
- pyproject.toml -> look for "pytest-cov" in dependencies
- requirements.txt / requirements-dev.txt -> look for a "pytest-cov" line
- setup.py / setup.cfg -> look for "pytest-cov" in install_requires or extras_require
- pip list 2>/dev/null | grep -i pytest-cov

TypeScript: istanbul / c8
Check package.json for coverage tool packages:
- devDependencies -> look for c8, @vitest/coverage-v8, @vitest/coverage-istanbul
- devDependencies -> look for jest (Jest has built-in istanbul coverage)
- npx c8 --version 2>/dev/null

Python:
- pyproject.toml with [tool.pytest.ini_options] -> pytest
- pytest.ini or conftest.py exists -> pytest

TypeScript:
- vitest.config.* exists -> Vitest
- jest.config.* exists -> Jest
- package.json with vitest in devDependencies -> Vitest
- package.json with jest in devDependencies -> Jest

Identify the main source package/directory to measure coverage for:
Python:
- pyproject.toml [tool.coverage.run] source setting
- .coveragerc [run] source setting
- Directories containing __init__.py (excluding tests/, test/, docs/, .venv/, venv/)

TypeScript:
- vitest.config.* or jest.config.* for collectCoverageFrom or coverage.include
- src/ if it exists
- If src/ does not exist, look for the main directory from the package.json main or module field

Locate test files and directories:
Python:
# Find test directories
find . -type d \( -name "tests" -o -name "test" \) | head -10
# Find test files
find . -name "test_*.py" -o -name "*_test.py" | head -20
TypeScript:
# Find test files
find . -name "*.test.ts" -o -name "*.spec.ts" -o -name "*.test.tsx" -o -name "*.spec.tsx" | grep -v node_modules | head -20
If the coverage tool is not detected:
Python -- pytest-cov not found:
Coverage tool not detected.
pytest-cov is not installed. Install it with:
pip install pytest-cov
Or add "pytest-cov" to your dev dependencies in pyproject.toml:
[project.optional-dependencies]
dev = ["pytest-cov"]
TypeScript -- no coverage tool found:
For Vitest projects:
Coverage tool not detected.
Install the Vitest coverage provider:
npm install -D @vitest/coverage-v8
For Jest projects:
Jest includes istanbul coverage by default. No additional installation required.
Run with: npx jest --coverage
After presenting the install command, stop the workflow. Do not attempt to run coverage without the tool installed.
If no test files are found in the project:
No test files detected. Coverage: 0%
Use /generate-tests to create an initial test suite based on your source code.

If no source files are found in the project path:
ERROR: No source files found in {project-path}.
The directory exists but contains no files matching supported extensions
(.py, .ts, .tsx, .js, .jsx).
Check that the project path is correct, or specify it explicitly:
/analyze-coverage /path/to/project
Stop the workflow after this error.
Goal: Execute the coverage tool and capture output.
Based on the detected environment, construct the appropriate coverage command.
Python (pytest-cov):
pytest --cov={package} --cov-report=term-missing --cov-report=json --cov-branch
If specific test directories were detected:
pytest {test_dirs} --cov={package} --cov-report=term-missing --cov-report=json --cov-branch
TypeScript (Vitest with coverage):
npx vitest run --coverage --coverage.reporter=json --coverage.reporter=text
TypeScript (Jest):
npx jest --coverage --coverageReporters=json --coverageReporters=text
TypeScript (standalone c8):
npx c8 --reporter=json --reporter=text npx vitest run
Run the coverage command via Bash:
cd {project_path} && {coverage_command} 2>&1
Capture both stdout and stderr for analysis.
If the coverage command exits with a non-zero status:
Check the error output for common issues:
- "No data was collected" -> verify --cov={package} matches the importable package name
- "Module X was never imported" -> check that the package is importable
- Missing dependencies -> run pip install or npm install

Report the error with diagnostic info:
Coverage tool failed with exit code {code}.
Error output:
{stderr}
Possible causes:
- {diagnosis based on error pattern}
Suggested fix:
- {actionable fix suggestion}
Goal: Parse the JSON coverage report into structured data.
Python (pytest-cov):
- coverage.json in the project root
- The path given by --cov-report=json:{path}, if specified

TypeScript (istanbul/c8):
- coverage/coverage-final.json
- coverage.reportsDirectory in vitest config or coverageDirectory in jest config

Read the JSON coverage report file.
Extract from coverage.json:
Per-file data from the files object:
- summary.num_statements -- total executable statements
- summary.covered_lines -- statements executed during tests
- summary.missing_lines -- statements not executed
- summary.percent_covered -- coverage percentage (0-100)
- summary.num_branches -- total branches (if branch coverage enabled)
- summary.covered_branches -- branches taken during tests
- missing_lines -- array of line numbers not covered

Project totals from the totals object:
- percent_covered -- overall coverage percentage
- covered_lines / num_statements -- line coverage ratio
- covered_branches / num_branches -- branch coverage ratio

Extract from coverage-final.json:
Per-file data (keys are absolute file paths):
- Statements (s): covered = Object.values(s).filter(n => n > 0).length, total = Object.keys(s).length
- Branches (b): covered = sum of b[id].filter(n => n > 0).length, total = sum of b[id].length
- Functions (f): covered = Object.values(f).filter(n => n > 0).length, total = Object.keys(f).length
- Uncovered statements: entries in statementMap where s[id] === 0
- Uncovered functions: entries in fnMap where f[id] === 0

Project totals: Sum across all files
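The extraction described above can be sketched in Python. The function names are illustrative; the key paths follow the two report formats described above (pytest-cov's coverage.json and istanbul/c8's coverage-final.json).

```python
def parse_pytest_cov(report: dict) -> dict:
    """Extract per-file and total coverage from a pytest-cov coverage.json dict."""
    files = {
        path: {
            "percent": data["summary"]["percent_covered"],
            "missing_lines": data["missing_lines"],
        }
        for path, data in report["files"].items()
    }
    return {"files": files, "total_percent": report["totals"]["percent_covered"]}

def parse_istanbul(report: dict) -> dict:
    """Extract per-file statement coverage from an istanbul/c8 coverage-final.json dict."""
    files = {}
    for path, data in report.items():
        s = data["s"]  # statement hit counts keyed by statement id
        covered = sum(1 for n in s.values() if n > 0)
        total = len(s)
        files[path] = {
            "percent": 100.0 * covered / total if total else 100.0,
            "uncovered_statements": [sid for sid, n in s.items() if n == 0],
        }
    return files
```

Branch and function coverage for istanbul reports would follow the same pattern over the b and f maps.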
For readable reporting, group consecutive uncovered line numbers into ranges:
- [15, 16, 17, 22, 23] -> lines 15-17, 22-23
- [5] -> line 5
- [10, 11, 12, 13, 14, 15] -> lines 10-15

Compare coverage against the configured threshold:
Threshold source (in precedence order):
1. --threshold argument
2. .claude/agent-alchemy.local.md tdd.coverage-threshold
3. Default: 80%

Per-file check: Flag files below the threshold, sorted by coverage (lowest first)
Project-wide check: Compare overall percentage against threshold
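The grouping of uncovered line numbers into ranges described above can be sketched as a small helper (the function name is illustrative):

```python
def group_ranges(lines: list[int]) -> str:
    """Collapse a sorted list of line numbers into 'a-b' range strings."""
    if not lines:
        return ""
    ranges = []
    start = prev = lines[0]
    for n in lines[1:]:
        if n == prev + 1:
            prev = n  # extend the current consecutive run
            continue
        ranges.append(f"{start}-{prev}" if start != prev else f"{start}")
        start = prev = n
    ranges.append(f"{start}-{prev}" if start != prev else f"{start}")
    return ", ".join(ranges)
```

For example, `group_ranges([15, 16, 17, 22, 23])` yields `"15-17, 22-23"`.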
Goal: Identify coverage gaps and, if a spec is provided, map coverage against acceptance criteria.
From the parsed coverage data, identify:
For the top uncovered areas, read the source files to understand what the uncovered code does. This enables specific test suggestions rather than generic ones.
For each uncovered function or line range:
If --spec <path> was provided:
Validate spec path: Check the file exists. If not:
WARNING: Spec file not found at {path}. Skipping spec-to-coverage mapping.
Proceeding with coverage-only analysis.
Skip the remainder of this section and proceed to gap suggestions.
Parse the spec: Read the spec file and extract acceptance criteria by category:
- _Functional:_ criteria
- _Edge Cases:_ criteria
- _Error Handling:_ criteria
- _Performance:_ criteria

Map criteria to code: For each acceptance criterion:
Check coverage for mapped locations: For each mapped criterion:
Report untested criteria: List acceptance criteria where implementing code has no coverage
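The criteria-extraction step above might be sketched with a simple pattern match, assuming the spec lists criteria as checklist bullets under category headings. This is an assumption about spec layout; real specs may differ, and the category names follow the list above.

```python
import re

CATEGORIES = ("Functional", "Edge Cases", "Error Handling", "Performance")

def extract_criteria(spec_text: str) -> dict:
    """Group '- [ ]' / '- [x]' checklist bullets under the nearest category heading."""
    criteria = {c: [] for c in CATEGORIES}
    current = None
    for line in spec_text.splitlines():
        heading = re.match(r"#+\s*(.+?)\s*$", line)
        if heading and heading.group(1) in CATEGORIES:
            current = heading.group(1)
            continue
        bullet = re.match(r"-\s*(?:\[[ xX]\]\s*)?(.+)", line.strip())
        if bullet and current:
            criteria[current].append(bullet.group(1).strip())
    return criteria
```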
For each coverage gap, suggest a specific test to write:
- test_{function_name}_returns_expected_output -- test with typical inputs
- test_{function_name}_when_{condition} -- test that triggers the untaken branch
- test_{function_name}_raises_on_{error_condition} -- test that triggers the error
- test_{function_name}_handles_{boundary} -- test with boundary inputs

Goal: Produce a structured markdown report summarizing coverage results.
## Coverage Analysis Report
**Project**: {project_path}
**Framework**: {pytest-cov | istanbul/c8 | vitest}
**Threshold**: {threshold}%
**Date**: {current date}
### Overall Coverage
| Metric | Value | Status |
|--------|-------|--------|
| Line Coverage | {pct}% ({covered}/{total}) | {PASS or BELOW THRESHOLD} |
| Branch Coverage | {pct}% ({covered}/{total}) | {PASS or BELOW THRESHOLD} |
| Function Coverage | {pct}% ({covered}/{total}) | {PASS or BELOW THRESHOLD or N/A} |
### Coverage by File
| File | Lines | Branches | Status |
|------|-------|----------|--------|
| {file_path} | {pct}% | {pct}% | {PASS or BELOW} |
| ... | ... | ... | ... |
Files are sorted by coverage percentage (lowest first). Only files below threshold are shown by default.
### Coverage Gaps
{For each gap, ordered by priority (P0 first):}
**{P0|P1|P2|P3}**: `{file_path}:{function_name}` ({pct}% covered)
- Uncovered lines: {line_ranges}
- Description: {what the uncovered code does}
- Suggested test: `{test_name}` -- {brief description of what to test}
{If spec provided:}
### Spec Coverage Mapping
**Spec**: {spec_path}
| Category | Criterion | Status | Code Location |
|----------|-----------|--------|---------------|
| Functional | {criterion text} | TESTED/PARTIAL/UNTESTED | {file:function} |
| Edge Cases | {criterion text} | TESTED/PARTIAL/UNTESTED | {file:function} |
| Error Handling | {criterion text} | TESTED/PARTIAL/UNTESTED | {file:function} |
**Tested**: {n}/{total} criteria
**Partially Tested**: {n}/{total} criteria
**Untested**: {n}/{total} criteria
### Summary
- Overall coverage: {pct}% ({PASS or BELOW threshold of {threshold}%})
- Files below threshold: {count}
- Critical gaps (P0): {count}
- High priority gaps (P1): {count}
{If spec:}
- Untested acceptance criteria: {count}
Goal: Provide actionable next steps based on the coverage analysis.
Based on the coverage analysis results, recommend specific actions:
If coverage is below threshold:
### Next Steps
1. **Generate tests for critical gaps**: Run `/generate-tests {source_file}` to auto-generate tests for uncovered areas
2. **Focus on P0 gaps first**: The following files have 0% coverage and need tests immediately:
{list of P0 files}
3. **Run TDD cycle for new features**: Use `/tdd-cycle {feature}` to build new features test-first
If coverage is above threshold:
### Next Steps
1. **Coverage target met** -- {pct}% exceeds the {threshold}% threshold
2. **Optional improvements**: Consider adding tests for the remaining {count} uncovered branches
3. **Maintain coverage**: Run `/analyze-coverage` after significant changes to catch regressions
If spec mapping found untested criteria:
### Untested Acceptance Criteria
The following acceptance criteria from {spec_path} have no test coverage:
{For each untested criterion:}
- **{criterion}**: Generate tests with `/generate-tests {spec_path}`
Recommend other TDD tools skills when appropriate:
- /generate-tests {spec_path} to generate criteria-driven tests
- /generate-tests {source_file} to generate code-analysis tests
- /tdd-cycle {feature} to follow RED-GREEN-REFACTOR
- /analyze-coverage {project_path} to verify improvements

Detect the missing tool, identify the project type, and suggest the appropriate installation command. Do not attempt to run coverage without the tool installed.
If the coverage command returns a non-zero exit code:
(see the references/coverage-patterns.md Common Issues tables)

If --spec points to a file that does not exist:
Spec file not found at {path}. Skipping spec-to-coverage mapping.

If the project path contains no recognizable source files:
If the JSON report file is not generated after running the coverage command:
The following settings in .claude/agent-alchemy.local.md affect coverage analysis:
tdd:
  framework: auto         # auto | pytest | jest | vitest
  coverage-threshold: 80  # Minimum coverage percentage (0-100)
| Setting | Default | Used In |
|---|---|---|
| tdd.framework | auto | Phase 1 (coverage tool selection) |
| tdd.coverage-threshold | 80 | Phase 3 (threshold comparison) |
The --threshold argument overrides the tdd.coverage-threshold setting.
Error handling:
- If tdd.framework is set to an unrecognized value, fall back to auto-detection.
- If tdd.coverage-threshold is out of range, clamp it to 0-100.

See tdd-tools/README.md for the full settings reference.
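The threshold precedence and clamping rules can be sketched as follows. The helper name is illustrative, and the settings value would come from parsing .claude/agent-alchemy.local.md:

```python
def resolve_threshold(arg_value, settings_value):
    """Apply precedence (--threshold > settings file > default 80) and clamp to 0-100."""
    if arg_value is not None:
        threshold = arg_value
    elif settings_value is not None:
        threshold = settings_value
    else:
        threshold = 80.0  # documented default
    return min(100.0, max(0.0, threshold))
```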
- references/coverage-patterns.md -- Framework-specific coverage tool integration: detection, commands, JSON parsing, gap analysis, spec mapping