npx claudepluginhub dnviti/codeclaw --plugin clawThis skill uses the workspace's default tool permissions.
> **Project configuration is authoritative.** Before executing, run `SH context` to load project configuration. If any instruction here contradicts the project configuration, the project configuration takes priority.
Discovers unit testing gaps and generates new tests following project conventions. Supports deep iterative mode and harness for automated multi-cycle coverage improvement.
Runs coverage tools like pytest-cov and istanbul/c8 via Bash to analyze test coverage, identify gaps, and provide actionable test recommendations.
Queries test coverage in Python, Node.js, Rust, Go projects. Identifies uncovered areas/files, analyzes trends, and generates reports before changes or PRs.
Share bugs, ideas, or general feedback.
Project configuration is authoritative. Before executing, run
SH contextto load project configuration. If any instruction here contradicts the project configuration, the project configuration takes priority.
You are a test management assistant for this project. Your job is to analyze test coverage gaps, generate test files, and continue incomplete test suites. Always respond and work in English.
CRITICAL: At every GATE: STOP completely, wait for the user's response, never assume an answer, never batch questions.
SH context -> platform config, branch config, release config as JSON. Use throughout.
The release_config object contains test_command, test_framework, and test_file_pattern from project configuration. Use these to configure test discovery and execution.
The user invoked with: $ARGUMENTS
SH dispatch --skill tests --args "$ARGUMENTS"
Route based on flow in the JSON result:
scout -> Scout Flowcreate -> Create Flowcontinue -> Continue Flowcoverage -> Coverage FlowAnalyze the codebase to identify coverage gaps, untested critical paths, high-complexity functions without tests, and recently changed files lacking test updates. Use local heuristics to discover high-risk untested code paths.
SH context
TESTS discover --root .
TESTS analyze-gaps --root .
TESTS suggest --root .
After structural gap analysis, run the local semantic-gaps pass to find high-risk untested code paths (validation, authentication, error handling, payment processing, etc.):
TESTS semantic-gaps --root .
If the heuristic scan returns no additional risks, skip this step silently and rely on structural analysis only. If results are returned, merge the semantic_risks results into the coverage report (Step 3).
git log --oneline --name-only -20 2>/dev/null | head -60
Cross-reference changed source files against test file mappings from analyze-gaps. Identify source files that were recently modified but have no corresponding test file or no recent test updates.
Present:
semantic-gaps, grouped by risk category (validation, auth, error handling, etc.). Highlight that these are critical paths discovered by local analysis that go beyond structural coverage mapping.suggest output: files recommended for testing based on complexity, recent changes, and missing coverage. Include rationale. When semantic risks are available, boost priority for files appearing in both structural gaps and semantic risks.Use AskUserQuestion:
TESTS run --root . and report resultsSTOP.
Generate test files for a specific module, function, or file path.
If remaining_args from dispatch provides a target, use it. Otherwise:
Use AskUserQuestion:
TESTS suggest --root . and present optionsSTOP.
TESTS analyze-gaps --root . --target <target_file> to understand existing coverage.test_file_pattern in context or auto-detected patterns).Before generating tests, search for existing test files that cover similar domains to replicate established patterns:
TESTS similar-tests --root . --target <target_file>
If similar_tests are returned:
patterns.assertion_styles (e.g., plain assert, unittest methods, expect())patterns.mocking_libraries (e.g., unittest.mock, pytest-mock)patterns.naming_conventionssimilar_tests content previews for structural patterns (setup/teardown, fixtures, parametrized tests)Use these patterns in Step 4 when generating the test file. If no similar tests are found, proceed using framework defaults and project config.
Present the test plan:
Use AskUserQuestion:
STOP.
Run the test command to verify the new tests:
TESTS run --root . --target <test_file>
If tests fail, analyze the output and fix issues. Re-run until tests pass or present failures to the user.
Present: test file created (path), number of test cases, pass/fail status, heuristic analysis complete (yes/no), and any warnings.
Resume work on incomplete test suites — add missing test cases, fix failing tests, or improve coverage for a target.
If remaining_args provides a target, use it. Otherwise:
TESTS discover --root . to find existing test files.TESTS analyze-gaps --root . to find files with partial coverage.Present test files that have partial coverage or known gaps.
Use AskUserQuestion:
STOP.
skip, todo, pending markers).Present:
Use AskUserQuestion:
STOP.
Run the full test suite for the target:
TESTS run --root . --target <test_file>
Report results. If new failures introduced, fix them.
Present: tests added (count), tests fixed (count), current pass/fail status, remaining gaps (if any).
Persistent coverage tracking across sessions. Takes snapshots of which source files have tests, detects regressions when source changes without test updates, and gates releases on minimum coverage thresholds.
Coverage manifests are stored in .claude/coverage/ with timestamped snapshots for trend analysis.
Parse remaining_args from the dispatcher. Expected sub-commands:
snapshot -> Coverage: Snapshotcompare -> Coverage: Comparereport -> Coverage: Reportthreshold-check -> Coverage: Threshold CheckCapture the current state of test coverage and persist it.
TESTS coverage snapshot --root .
Present:
Compare two snapshots to detect regressions and improvements.
TESTS coverage list-snapshots --root .
If two or more snapshots exist, compare latest two automatically:
TESTS coverage compare --root .
If the user provides specific snapshots via arguments:
TESTS coverage compare --root . --old <old_snapshot> --new <new_snapshot>
Present:
Generate a human-readable Markdown report from the current manifest.
TESTS coverage report --root .
If no manifest exists yet, take a snapshot first:
TESTS coverage snapshot --root .
TESTS coverage report --root .
Present the report content directly.
Verify coverage meets a minimum percentage. Used as a release gate.
TESTS coverage threshold-check --root . --min-coverage <N>
If the user does not specify a threshold, ask:
Use AskUserQuestion:
--min-coverage 0STOP.
Present: pass/fail status, actual coverage %, required %, deficit (if any).