npx claudepluginhub mayurpise/draft --plugin draft

This skill uses the workspace's default tool permissions.
You are computing and reporting code coverage for the active track or a specific module. This complements the TDD workflow — TDD is the process (write test, implement, refactor), coverage is the measurement (how much code do those tests exercise).
It queries test coverage in Python, Node.js, Rust, Go, and other supported projects by running coverage tools such as pytest-cov and istanbul/c8 via Bash, then analyzes the resulting reports (lcov, cobertura, istanbul) to identify gaps in line, branch, and function coverage, map gaps to requirements, recommend tests, and track trends. Run it before changes or PRs to generate an up-to-date report.
Before starting, gather context:

- The active track (set by /draft:implement)
- draft/tech-stack.md for test framework and language info
- draft/tracks.md
- architecture.md (track-level), or if the project has .ai-context.md, identify the current module for scoping
- coverage_target in draft/workflow.md. Check for per-module targets first (see Per-Module Coverage Enforcement below); if absent, default to 95%.
- Whether draft/tracks/<id>/bughunt-report-latest.md (track scope) or draft/bughunt-report-latest.md (project scope) exists for cross-referencing (see Coverage-Bughunt Cross-Reference below)

If no active track and no argument is provided, stop and ask the developer to run /draft:new-track first.

Auto-detect from tech stack:
| Language | Coverage Tools |
|---|---|
| JavaScript/TypeScript | jest --coverage, vitest --coverage, c8, nyc |
| Python | pytest --cov, coverage run, coverage.py |
| Go | go test -coverprofile=coverage.out |
| Rust | cargo tarpaulin, cargo llvm-cov |
| C/C++ | gcov, lcov |
| Java/Kotlin | jacoco, gradle jacocoTestReport |
| Ruby | simplecov |
Detection order:
1. tech-stack.md for an explicit testing section
2. Framework config files (jest.config.*, vitest.config.*, pytest.ini, setup.cfg, pyproject.toml, .nycrc)
3. package.json scripts for coverage commands
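As a rough sketch of this detection order in a Node.js context (the tech-stack.md lookup is omitted, and the config-file patterns and commands here are illustrative assumptions, not an exhaustive mapping):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Map well-known config files to the coverage command they imply.
// Mirrors the detection order: explicit config files first, then
// a package.json "coverage" script as the fallback.
const CONFIG_HINTS: Array<[RegExp, string]> = [
  [/^jest\.config\.(js|ts|mjs|cjs|json)$/, "npx jest --coverage"],
  [/^vitest\.config\.(js|ts|mjs|mts|cts)$/, "npx vitest run --coverage"],
  [/^(pytest\.ini|setup\.cfg|pyproject\.toml)$/, "pytest --cov"],
  [/^\.nycrc(\.json)?$/, "npx nyc npm test"],
];

export function detectCoverageCommand(projectRoot: string): string | undefined {
  const entries = fs.readdirSync(projectRoot);
  for (const [pattern, command] of CONFIG_HINTS) {
    if (entries.some((name) => pattern.test(name))) return command;
  }
  // Fallback: an explicit "coverage" script in package.json, if one is defined.
  const pkgPath = path.join(projectRoot, "package.json");
  if (fs.existsSync(pkgPath)) {
    const pkg = JSON.parse(fs.readFileSync(pkgPath, "utf8"));
    if (pkg.scripts && pkg.scripts.coverage) return "npm run coverage";
  }
  return undefined; // nothing detected: ask the developer
}
```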
Priority order for scoping:

1. If architecture.md (or the project's .ai-context.md) identifies an in-progress module, scope to that module's files
2. Otherwise, scope to changed files (git diff against the base branch)

Build the coverage command with the appropriate scope/filter flags.
Prefer machine-readable output where the tool supports it: --json for Jest, --cov-report=json for pytest, -coverprofile for Go, --coverage-output-format json for dotnet.

Parse the coverage output and present it in a standardized format:
═══════════════════════════════════════════════════════════
COVERAGE REPORT
═══════════════════════════════════════════════════════════
Track: [track-id]
Module: [module name, if applicable]
Target: [from workflow.md, default 95%]
SUMMARY
─────────────────────────────────────────────────────────
Overall: 87.3% (target: 95%) ← BELOW TARGET
PER-FILE BREAKDOWN
─────────────────────────────────────────────────────────
src/auth/middleware.ts 96.2% PASS
src/auth/jwt.ts 72.1% FAIL
src/auth/types.ts 100.0% PASS
UNCOVERED LINES
─────────────────────────────────────────────────────────
src/auth/jwt.ts:45-52 Error handler for malformed token
src/auth/jwt.ts:78 Defensive null check (unreachable via public API)
═══════════════════════════════════════════════════════════
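As one concrete example, when the run emits an Istanbul-style coverage-summary.json (the json-summary reporter used by Jest, Vitest, and nyc), the per-file breakdown above can be derived roughly like this; the 95% target is the default described earlier:

```typescript
import * as fs from "node:fs";

// Shape of one entry in an Istanbul "json-summary" report.
interface CoverageMetric { total: number; covered: number; pct: number; }
interface FileSummary {
  lines: CoverageMetric;
  statements: CoverageMetric;
  functions: CoverageMetric;
  branches: CoverageMetric;
}

const TARGET = 95; // default coverage_target from draft/workflow.md

const summary: Record<string, FileSummary> = JSON.parse(
  fs.readFileSync("coverage/coverage-summary.json", "utf8"),
);

// The "total" key holds the aggregate; every other key is a file path.
const { total, ...files } = summary;
console.log(`Overall: ${total.lines.pct}% (target: ${TARGET}%)`);
for (const [file, metrics] of Object.entries(files)) {
  const status = metrics.lines.pct >= TARGET ? "PASS" : "FAIL";
  console.log(`${file.padEnd(40)} ${metrics.lines.pct.toFixed(1)}%  ${status}`);
}
```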
If the project's test framework supports branch/condition coverage (e.g., Istanbul, coverage.py branch mode), execute this step. Otherwise skip to Step 7.
Beyond line coverage, evaluate branch coverage for modules with complex conditional logic:
- Jest: --coverage --coverageReporters=json-summary (branch data included by default)
- pytest: --cov --cov-branch
- Go: go test -covermode=count (counts executions per code block, the closest Go's cover tool gets to branch data)
- lcov: --rc lcov_branch_coverage=1

Include the branch coverage percentage in the report alongside line coverage when branch analysis is performed.
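To make the line-versus-branch distinction concrete, a contrived sketch:

```typescript
// 100% line coverage does not imply 100% branch coverage.
export function discount(price: number, isMember: boolean): number {
  // One line, two branches: a test that only ever passes isMember = true
  // covers the line but never exercises the non-member branch.
  return isMember ? price * 0.9 : price;
}

// A single assertion such as
//   expect(discount(100, true)).toBe(90);
// yields 100% line coverage for this function but only 50% branch coverage.
```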
For files below target (using per-module targets when configured — see Per-Module Coverage Enforcement):
SUGGESTED TESTS
─────────────────────────────────────────────────────────
1. Test malformed JWT token handling (jwt.ts:45-52)
- Input: token with invalid signature
- Expected: throws AuthError with code INVALID_TOKEN
2. Test expired token rejection (jwt.ts:60-65)
- Input: token with exp in the past
- Expected: throws AuthError with code TOKEN_EXPIRED
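A minimal Jest sketch of these two suggested tests. The verifyToken and AuthError exports and the signToken helper are assumptions about the module's API, not its actual interface:

```typescript
import { describe, expect, test } from "@jest/globals";
// Hypothetical imports: the real module may expose a different API.
import { AuthError, verifyToken } from "../src/auth/jwt";
import { signToken } from "./helpers/tokens"; // assumed test helper for building fixture tokens

describe("jwt.ts uncovered error paths", () => {
  test("token with an invalid signature throws INVALID_TOKEN (jwt.ts:45-52)", () => {
    expect.assertions(2);
    const tampered = signToken({ sub: "user-1" }, { secret: "wrong-secret" });
    try {
      verifyToken(tampered);
    } catch (err) {
      expect(err).toBeInstanceOf(AuthError);
      expect((err as AuthError).code).toBe("INVALID_TOKEN");
    }
  });

  test("expired token throws TOKEN_EXPIRED (jwt.ts:60-65)", () => {
    expect.assertions(2);
    const expired = signToken({ sub: "user-1", exp: Math.floor(Date.now() / 1000) - 60 });
    try {
      verifyToken(expired);
    } catch (err) {
      expect(err).toBeInstanceOf(AuthError);
      expect((err as AuthError).code).toBe("TOKEN_EXPIRED");
    }
  });
});
```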
When encountering modules with 0% or very low coverage that need refactoring, do not attempt to write unit tests for untested legacy code directly. Instead, apply the Golden Master / Approval Testing approach (ref: Michael Feathers, "Working Effectively with Legacy Code"):
Tool references:
Present characterization testing recommendations in the gap analysis when applicable.
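For example, a characterization test can use Jest snapshots as the golden master (dedicated approval-testing libraries work the same way); legacyPriceCalculator here is a hypothetical stand-in for the untested code about to be refactored:

```typescript
import { describe, expect, test } from "@jest/globals";
// Hypothetical legacy module: a stand-in for whatever untested code is being refactored.
import { legacyPriceCalculator } from "../src/billing/legacy";

describe("legacyPriceCalculator characterization (golden master)", () => {
  // Sweep a representative grid of inputs, including odd edge cases.
  const inputs = [
    { qty: 0, unitPrice: 9.99, region: "US" },
    { qty: 1, unitPrice: 9.99, region: "US" },
    { qty: 100, unitPrice: 0.01, region: "EU" },
    { qty: -1, unitPrice: 9.99, region: "US" }, // capture current behavior for weird input, don't judge it
  ];

  test("current behavior matches the recorded golden master", () => {
    // The first run records the snapshot; later runs fail on any behavior change,
    // which is exactly the safety net needed while refactoring.
    const observed = inputs.map((input) => ({ input, output: legacyPriceCalculator(input) }));
    expect(observed).toMatchSnapshot();
  });
});
```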
After measuring line coverage (and branch coverage if applicable), prompt the engineer to consider mutation testing for critical modules. Mutation testing introduces small code changes (mutants) into the source; if existing tests still pass, the mutant "survived," indicating weak test assertions even at high line coverage.
When to recommend: Modules at 90%+ line coverage that are high-risk (auth, payments, crypto, data persistence) or where past bugs have occurred. Mutation testing is most valuable when line coverage is already high but test quality is uncertain.
Mutation score = killed mutants / total non-equivalent mutants. Target: 80%+ for critical modules.
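A contrived example of a surviving mutant: the test below achieves 100% line coverage, yet mutating >= to > would go undetected because no assertion pins the boundary:

```typescript
import { expect, test } from "@jest/globals";

export function isAdult(age: number): boolean {
  return age >= 18; // a mutant might flip this to `age > 18`
}

// 100% line coverage, but the `>=` to `>` mutant survives: both assertions
// still pass because the boundary value 18 is never checked.
test("isAdult", () => {
  expect(isAdult(30)).toBe(true);
  expect(isAdult(5)).toBe(false);
  // Adding `expect(isAdult(18)).toBe(true);` kills the mutant.
});
```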
Tool recommendations by language:
| Language | Tool | Reference |
|---|---|---|
| Java | PIT | https://pitest.org/ |
| JavaScript/TypeScript | Stryker | https://stryker-mutator.io/ |
| Python | mutmut | https://github.com/boxed/mutmut |
| Rust | cargo-mutants | https://github.com/sourcefrog/cargo-mutants |
| C# | Stryker.NET | https://stryker-mutator.io/ |
| Go | go-mutesting | https://github.com/zimmski/go-mutesting |
Reference: Google's mutation testing program is used by 6,000+ engineers and processes approximately 30% of all code diffs, validating that mutation testing scales to large codebases.
Include mutation testing recommendations in the report when applicable, but do not block coverage completion on mutation analysis — it is advisory.
If a bughunt report exists (draft/tracks/<id>/bughunt-report-latest.md or draft/bughunt-report-latest.md):
BUGHUNT CROSS-REFERENCE
─────────────────────────────────────────────────────────
⚠ CRITICAL: Bug "Race condition in session refresh" (bughunt #3)
at src/auth/session.ts:112-118 — IN UNCOVERED CODE
→ Write a test that exposes this bug FIRST before fixing
⚠ HIGH: Bug "Missing null check on user lookup" (bughunt #7)
at src/users/repository.ts:45 — IN UNCOVERED CODE
→ Write a regression test targeting this path
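A sketch of the cross-reference itself, assuming the bughunt findings and the uncovered line ranges have already been parsed into simple records:

```typescript
interface LineRange { start: number; end: number; }
interface BugFinding { id: number; severity: string; title: string; file: string; lines: LineRange; }

// Flag every bughunt finding whose location overlaps an uncovered range:
// those are the bugs to expose with a failing test before fixing.
export function bugsInUncoveredCode(
  findings: BugFinding[],
  uncoveredByFile: Map<string, LineRange[]>,
): BugFinding[] {
  const overlaps = (a: LineRange, b: LineRange) => a.start <= b.end && b.start <= a.end;
  return findings.filter((bug) =>
    (uncoveredByFile.get(bug.file) ?? []).some((range) => overlaps(bug.lines, range)),
  );
}
```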
Instead of applying a single global coverage target, support differentiated targets by module risk level. Check draft/workflow.md for a coverage_targets section:
# Example workflow.md configuration
coverage_targets:
high_risk: 95 # auth, payments, crypto, data persistence
business_logic: 85
infrastructure: 70
generated: exclude
modules:
src/auth/: high_risk
src/payments/: high_risk
src/crypto/: high_risk
src/db/: high_risk
src/api/handlers/: business_logic
src/utils/: infrastructure
src/generated/: generated
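A sketch of how a file resolves to its target under such a configuration, assuming the longest matching module prefix wins and unmatched files fall back to the global default:

```typescript
type RiskLevel = "high_risk" | "business_logic" | "infrastructure" | "generated";

const TARGETS: Record<RiskLevel, number | "exclude"> = {
  high_risk: 95,
  business_logic: 85,
  infrastructure: 70,
  generated: "exclude",
};

const MODULES: Record<string, RiskLevel> = {
  "src/auth/": "high_risk",
  "src/payments/": "high_risk",
  "src/api/handlers/": "business_logic",
  "src/utils/": "infrastructure",
  "src/generated/": "generated",
};

// Longest matching prefix wins, so src/api/handlers/ would beat a broader src/api/ entry.
export function targetFor(file: string, fallback = 95): number | "exclude" {
  const matches = Object.keys(MODULES)
    .filter((prefix) => file.startsWith(prefix))
    .sort((a, b) => b.length - a.length);
  return matches.length > 0 ? TARGETS[MODULES[matches[0]]] : fallback;
}

// targetFor("src/auth/jwt.ts")      -> 95
// targetFor("src/generated/api.ts") -> "exclude"
// targetFor("src/index.ts")         -> 95 (global fallback)
```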
If no per-module configuration exists, apply these defaults and inform the developer:
| Risk Level | Target | Applies To |
|---|---|---|
| High-risk | 95%+ | Auth, payments, crypto, data persistence modules |
| Business logic | 85%+ | Core domain logic, API handlers |
| Infrastructure | 70%+ | Utilities, glue code, configuration |
| Generated | Exclude | Auto-generated code, proto stubs, ORM models |
Classification heuristic: Infer module risk from directory names and file content when explicit configuration is absent. Flag the inferred classification in the report so the developer can correct it.
In the coverage report, show per-module targets alongside actual coverage:
PER-FILE BREAKDOWN (module-level targets)
─────────────────────────────────────────────────────────
src/auth/middleware.ts 96.2% [high_risk: 95%] PASS
src/auth/jwt.ts 72.1% [high_risk: 95%] FAIL
src/utils/logger.ts 75.0% [infrastructure: 70%] PASS
src/generated/api.ts — [generated: excluded]
STOP. Present the full coverage report and gap analysis.
Ask the developer to review the report: are the remaining uncovered areas justified, or should the suggested tests be written first?
Wait for developer approval before recording results.
After developer approves:
Update plan.md - Add coverage note to the relevant phase:
**Coverage:** 96.2% (target: 95%) - PASS
- Uncovered: defensive null checks in jwt.ts (justified)
Update architecture context — update the project-level draft/architecture.md with coverage data (not a track-level architecture file), then run the Condensation Subroutine (defined in core/shared/condensation.md) to regenerate draft/.ai-context.md. The Condensation Subroutine only applies to the project-level draft/architecture.md → draft/.ai-context.md pipeline:
- **Status:** [x] Complete (Coverage: 96.2%)
Update metadata.json - Add coverage field if not present:
{
"coverage": {
"overall": 96.2,
"target": 95,
"timestamp": "2025-01-15T10:30:00Z"
}
}
Write detailed coverage report to draft/tracks/<id>/coverage-report-<timestamp>.md (where <timestamp> is generated via date +%Y-%m-%dT%H%M, e.g., 2026-03-15T1430) with YAML frontmatter (include project, track_id, generated_by: "draft:coverage", generated_at, git metadata matching other skills) and timestamped entries for historical tracking.
After writing the timestamped report, create a symlink pointing to it:
ln -sf coverage-report-<timestamp>.md draft/tracks/<id>/coverage-report-latest.md
Previous timestamped reports are preserved. The -latest.md symlink always points to the most recent report.
Announce:
Coverage report complete.
Overall: [percentage]% (target: [target]%)
Status: [PASS / BELOW TARGET]
Files analyzed: [count]
Gaps documented: [count testable] testable, [count justified] justified
Report: draft/tracks/<id>/coverage-report-<timestamp>.md (symlink: coverage-report-latest.md)
Results recorded in:
- plan.md (phase notes)
- architecture.md → .ai-context.md (module status, via Condensation Subroutine) [if applicable]
- metadata.json (coverage data)
When coverage is run again on the same track/module: