From claude-reliability
Guides Rust developers to 100% test coverage by categorizing gaps (unreachable code, dependencies, errors), applying refactor patterns, and using cargo llvm-cov for reports.
```
npx claudepluginhub drmaciver/claude-reliability --plugin claude-reliability
```
**Coverage isn't about the number - it's about code quality.** Code that's hard to test is usually hard to understand, maintain, and reason about. Pursuing 100% coverage forces you to write better code.
Why 100% specifically? Because 100% provides disproportionate value compared to 95%:
Simplicity: A binary state (covered vs. not) provides clear, unambiguous signals. When everything is at 100%, any deviation immediately demands attention. "We have 100% coverage" is a statement you can trust. "We have 95% coverage" always raises the question: which 5% and why?
Trust: Complete coverage eliminates ambiguity. If coverage is always 100%, then any uncovered line is a bug - either in your code or your process. Absence of coverage becomes evidence of a problem, not just noise to filter out. You stop needing to ask "is this uncovered code intentional?" - the answer is always no.
Like Sir Galahad's pure heart granting him the strength of ten, 100% coverage gives you something that 95% coverage cannot: certainty.
Don't suppress coverage. Fix the code.
For each uncovered section, ask: Why isn't this covered?
If the code genuinely cannot execute, say so explicitly with `unreachable!()`. See references/patterns.md for detailed code examples of each category.
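When a branch is logically impossible, the honest fix is to state that in the code rather than leave a silently uncovered arm. A minimal sketch (the function and its invariant are hypothetical, not taken from references/patterns.md):

```rust
// Hypothetical example: callers guarantee `c` is an ASCII digit, so the
// catch-all arm can never fire. Marking it `unreachable!()` documents the
// invariant instead of leaving an arm no test can ever reach.
fn digit_value(c: char) -> u32 {
    match c {
        '0'..='9' => c as u32 - '0' as u32,
        // Reaching this arm means a caller broke the contract -- a bug.
        _ => unreachable!("digit_value called with non-digit {c:?}"),
    }
}

fn main() {
    assert_eq!(digit_value('7'), 7);
    println!("digit_value('7') = {}", digit_value('7'));
}
```

If the arm can actually fire on bad input, it is not unreachable: return a `Result` and test the error path instead.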
```bash
# Generate HTML report
cargo llvm-cov --html

# Fail if under 100%
cargo llvm-cov --fail-under-lines 100

# Show uncovered lines in terminal
cargo llvm-cov --show-missing-lines
```
Test data should be visually distinctive and obviously different from code, variable names, and system defaults. This makes debugging vastly easier because when a value appears in a log or error message, you can immediately tell where it came from.
Bad: `name = "user", path = "file", id = 1, count = 0`

Good: `name = "alice-test-user", path = "test-data/waveforms/meow.wav", id = 42, count = 7`

Generic values like 1 and 0 could be defaults, sentinel values, or array indices. Distinctive values are immediately recognizable as test data.
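The payoff shows up in failure messages. A small sketch (the `Upload` struct and `label` function are invented for illustration): if `label` ever produces the wrong string, the assertion message contains "alice-test-user" and "meow.wav", which can only have come from this test.

```rust
// Hypothetical type under test: distinctive field values make any
// failure output immediately traceable back to this test's fixtures.
#[derive(Debug, PartialEq)]
struct Upload {
    name: String,
    path: String,
    id: u64,
}

fn label(u: &Upload) -> String {
    format!("{}#{} -> {}", u.name, u.id, u.path)
}

fn main() {
    let u = Upload {
        name: "alice-test-user".into(),
        path: "test-data/waveforms/meow.wav".into(),
        id: 42,
    };
    // "user", "file", id 1 would be indistinguishable from defaults here.
    assert_eq!(label(&u), "alice-test-user#42 -> test-data/waveforms/meow.wav");
    println!("{}", label(&u));
}
```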
Test names and docstrings should explain WHY a behavior matters, not just WHAT is being tested. Strip boilerplate phrases like "Test that...", "should...", "correctly", and "properly" — these add no information.
Bad: `test_parse_input_correctly` — "Test that input is parsed correctly"

Good: `test_parse_input_extracts_config_without_validation` — explains the purpose and what would go wrong if it failed

When reporting on tests, be clear about what they verify. "All tests pass" is less useful than "tests verify X and Y, but do not exercise the path where Z happens." This gives reviewers the information they need to assess whether additional testing is warranted.
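In practice this looks like the following sketch (the `extract_config` function is hypothetical; the point is the test name and the comment explaining why the behavior matters):

```rust
// Hypothetical function under test: extracts the config section as-is,
// deliberately without validating its contents.
fn extract_config(input: &str) -> Option<&str> {
    input.strip_prefix("config:")
}

fn main() {
    assert_eq!(extract_config("config:unknown_key=1"), Some("unknown_key=1"));
    println!("ok");
}

#[cfg(test)]
mod tests {
    use super::*;

    // Extraction must NOT validate: validation happens in a later pass,
    // and a parser that rejects unknown keys would break forward
    // compatibility. If this test fails, that separation has been lost.
    #[test]
    fn parse_input_extracts_config_without_validation() {
        assert_eq!(
            extract_config("config:unknown_key=1"),
            Some("unknown_key=1")
        );
        assert_eq!(extract_config("not-config"), None);
    }
}
```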
A green test suite does not mean nothing broke. Tests only verify what they were written to verify. Remain skeptical even when all tests pass — think about whether a change could affect behavior not covered by any test. Supplement automated checks with active thinking about edge cases, surprising inputs, and interactions.
As a last resort, exclude genuinely untestable code with `#[coverage(off)]` annotations. For more detail, see references/patterns.md.
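The `coverage` attribute is nightly-only at the time of writing, so it is commonly gated behind a cfg so the crate still builds on stable. A sketch, assuming the `coverage_nightly` cfg that cargo-llvm-cov sets when run on a nightly toolchain:

```rust
// Sketch: apply the nightly-only `coverage(off)` attribute only when the
// `coverage_nightly` cfg is set (cargo-llvm-cov sets it on nightly -- an
// assumption based on its documentation). On stable the cfg is absent,
// so the attribute is never applied and the code compiles normally.
#[cfg_attr(coverage_nightly, coverage(off))]
fn debug_dump(state: &str) -> String {
    // Debug-only helper, excluded from coverage reports by design.
    format!("state = {state}")
}

fn main() {
    println!("{}", debug_dump("idle"));
}
```

Keep such annotations rare and justified in a comment: per the guide above, the default answer to uncovered code is to fix the code, not to hide the gap.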