# gap-analysis

Test gap analysis methodology for finding missing tests across six categories (unit, integration, e2e, UI, contract, security). Use when conducting a gap analysis, auditing test coverage, identifying untested code, inventorying tests, or when `/test-driver:analyze` is invoked. Provides the step-by-step process for detecting project type, loading stack profiles, and producing a prioritized gap report.

From the **test-driver** plugin. Install by running `npx claudepluginhub l3digitalnet/claude-code-plugins --plugin test-driver` in your terminal.

**Tool Access:** This skill uses the workspace's default tool permissions.
## Gap Analysis: Finding Missing Tests

A systematic methodology for identifying which source files and functions lack adequate test coverage. This skill produces a structured gap report that the convergence-loop skill consumes.

## Step 1: Detect Project Type

Scan the project root for marker files to determine which stack profile to load:

| Marker | Profile |
|--------|---------|
| `pyproject.toml` with `fastapi` or `starlette` in dependencies | python-fastapi |
| `pyproject.toml` with `PySide6` or `PyQt6` in dependencies | python-pyside6 |
| `pyproject.toml` with `django` in dependencies | python-django |
| `manifest.json` with `"domain"` key + `custom_components/` directory | home-assistant |
| `Package.swift` or `*.xcodeproj` | swift-swiftui |
| `pyproject.toml` with no framework match | Generic Python (use pytest conventions) |
| No marker found | Ask user which profile to use |

### No Profile Match

When detection fails or the matching profile skill doesn't exist:

"No stack profile matches this project. I can run a basic gap analysis using generic conventions, but results will be more accurate with a dedicated profile. Want me to create a profiles/<stack-name>/SKILL.md for this project type?"

If the user agrees: inspect the project's test toolchain (test runner, coverage tool, directory conventions, UI framework), draft a profile answering the five standard questions, write it to skills/profiles/<stack-name>/SKILL.md, and proceed.

### Partial Match

If a profile exists but doesn't cover something the project uses (e.g., the python-fastapi profile is loaded but the project also has Playwright browser tests), suggest updating the profile to include the missing tooling.

### Multi-Framework Projects

Load one primary profile based on the dominant framework. If the project combines stacks (e.g., Django backend + PySide6 management tool), the primary profile covers the main application. For secondary components, consult additional profiles when running scoped analysis on those directories.

## Step 2: Determine Applicable Test Categories

Load the categories from the matched stack profile. The six categories:

| Category | Scope | Applicability |
|----------|-------|---------------|
| Unit | Single function/class, mocked dependencies | Always applicable |
| Integration | Multiple components, real dependencies | Always applicable |
| E2E | Full system flow, entry to exit | Profile-determined |
| UI | Visual interaction (Qt Pilot, Charlotte, XCUITest) | Profile-determined |
| Contract | API schema validation, request/response shapes | Profile-determined |
| Security | Injection, auth bypass, secrets exposure | Profile-determined |

Only analyze categories that the profile marks as applicable.
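Assuming a simple profile schema in which each category maps to a boolean (hypothetical; actual profiles live in `skills/profiles/<stack-name>/SKILL.md`), filtering to the applicable categories might look like:

```python
# Hypothetical profile fragment: category applicability for python-fastapi.
PROFILE_CATEGORIES = {
    "unit": True,
    "integration": True,
    "e2e": True,
    "ui": False,       # no desktop UI in a FastAPI service
    "contract": True,  # API schema validation applies
    "security": True,
}


def applicable_categories(profile: dict[str, bool]) -> set[str]:
    """Return only the categories the profile marks as applicable."""
    return {name for name, applies in profile.items() if applies}
```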

## Step 3: Inventory Existing Tests

Use Glob to find all test files based on the profile's discovery conventions:

  - Match test file patterns (e.g., `test_*.py`, `*Tests.swift`)
  - Categorize each test file by type based on directory structure (`tests/unit/`, `tests/integration/`) or pytest markers. This classification feeds Step 5's per-category coverage mapping.
  - Count tests per category

Read test files in parallel batches (opus-context alignment). For files under 4000 lines, read them fully.
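The inventory can be sketched with `pathlib` globbing. This is a sketch under one assumption: in practice the file pattern and category-directory names come from the loaded profile, not the hard-coded defaults below.

```python
from pathlib import Path

# Category directories recognized by the methodology (see Step 5, Phase 1).
CATEGORY_DIRS = {"unit", "integration", "e2e", "ui", "contract", "security"}


def inventory_tests(root: Path, pattern: str = "test_*.py") -> dict[str, list[Path]]:
    """Glob test files and bucket them by category directory.

    Files outside a recognized category directory fall into 'unit',
    matching Step 5's conservative fallback.
    """
    by_category: dict[str, list[Path]] = {}
    for path in root.rglob(pattern):
        category = next(
            (part for part in path.parts if part in CATEGORY_DIRS), "unit"
        )
        by_category.setdefault(category, []).append(path)
    return by_category
```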

## Step 4: Inventory Source Files

Find all non-test source files. Exclude common non-source patterns:

  - `__pycache__/`, `.pyc` files
  - Migration files (`migrations/`, `alembic/`)
  - Generated files (`moc_*`, `ui_*`, `*.generated.*`)
  - Configuration files (`*.toml`, `*.yml`, `*.json` unless they contain logic)
  - Build artifacts (`build/`, `dist/`, `.build/`)
  - Virtual environments (`venv/`, `.venv/`, `env/`)
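A sketch of the source inventory with these exclusions, restricted to Python files for brevity. The exclusion sets are illustrative, not exhaustive.

```python
from pathlib import Path

# Directory names and filename prefixes excluded from the source inventory.
EXCLUDED_DIRS = {"__pycache__", "migrations", "alembic", "build", "dist",
                 ".build", "venv", ".venv", "env", "tests"}
EXCLUDED_PREFIXES = ("moc_", "ui_", "test_")


def inventory_sources(root: Path) -> list[Path]:
    """Collect Python source files, skipping tests and non-source patterns."""
    sources = []
    for path in root.rglob("*.py"):
        if any(part in EXCLUDED_DIRS for part in path.parts):
            continue
        if path.name.startswith(EXCLUDED_PREFIXES):
            continue
        sources.append(path)
    return sources
```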

## Step 5: Map Coverage Per Category

For each source file and each applicable category (from the profile), determine whether test coverage exists in that specific category.

### Phase 1: Classify Existing Tests

Use the categorization from Step 3. Classification priority:

  1. Directory structure: Test files under tests/unit/, tests/integration/, tests/e2e/, tests/contract/, tests/security/, tests/ui/ are classified by their directory.
  2. Pytest markers: Test files using @pytest.mark.unit, @pytest.mark.integration, etc. are classified by their markers. A file can belong to multiple categories if it has multiple markers.
  3. Conservative fallback: Test files that have neither a category directory nor markers are classified as unit. This intentionally over-reports gaps for non-unit categories; under-reporting is the problem this methodology exists to solve.

### Phase 2: Per-Source-File, Per-Category Mapping

For each source file, for each applicable category:

  • Is there a test file classified in that category (from Phase 1) that imports or references this source file?
  • Use the same structural mapping techniques (import scanning, naming conventions, content grep) but scoped to the test files in that specific category.

A source file that has unit tests but no integration tests still has an integration gap. A source file with no tests in any category has gaps in every applicable category.

This is structural mapping (test file exists in the right category and references the source), not runtime coverage. Runtime coverage requires executing the test suite, which happens during the convergence loop.
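A minimal sketch of the structural mapping, using content grep as the reference check (an actual implementation would also scan imports, e.g. with `ast`; a plain grep over-matches occasionally but rarely under-matches):

```python
from pathlib import Path


def references_source(test_file: Path, source_file: Path) -> bool:
    """Content grep: does the test file mention the source module's name?"""
    return source_file.stem in test_file.read_text()


def category_gaps(source: Path, applicable: set[str],
                  tests_by_category: dict[str, list[Path]]) -> set[str]:
    """Return the categories in which no test references `source`."""
    return {cat for cat in applicable
            if not any(references_source(t, source)
                       for t in tests_by_category.get(cat, []))}
```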

## Step 6: Identify and Prioritize Gaps

For each source file missing test coverage in an applicable category, create a gap entry. Prioritize by:

  1. Public API surface (highest) — exported functions, API endpoints, public class methods
  2. Complex logic — functions with many branches, deep nesting, or multiple return paths
  3. Recently changed — files modified in recent commits are more likely to have regressions
  4. Error handling paths — exception handlers, error returns, validation logic
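One way to fold the four signals into a priority tier. The weights and thresholds here are illustrative assumptions, not part of the methodology:

```python
from dataclasses import dataclass


@dataclass
class Gap:
    file: str
    category: str
    public_api: bool = False        # exported functions, endpoints, public methods
    complexity: int = 1             # rough branch/nesting estimate
    recently_changed: bool = False  # modified in recent commits
    error_handling: bool = False    # exception handlers, validation logic


def priority(gap: Gap) -> str:
    """Map the four prioritization signals onto high / medium / low."""
    score = (4 if gap.public_api else 0) \
          + (3 if gap.complexity > 5 else 0) \
          + (2 if gap.recently_changed else 0) \
          + (1 if gap.error_handling else 0)
    return "high" if score >= 4 else "medium" if score >= 2 else "low"
```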

## Step 7: Gap Report Output

Produce a structured report that the convergence-loop skill can consume:

```markdown
## Gap Analysis Report

**Project:** <project-name>
**Profile:** <stack-profile>
**Date:** <ISO-8601 timestamp>
**Source files analyzed:** <count>

### Gaps Found: <total-count>

| Priority | File | Category | Description |
|----------|------|----------|-------------|
| high | src/api/auth.py | unit | No unit tests for token validation functions |
| high | src/api/auth.py | integration | No integration test for token refresh with expired session |
| medium | src/services/email.py | unit | Email template rendering has no tests |
| low | src/utils/formatting.py | unit | String formatting helpers untested (low complexity) |

### Category Summary

| Category | Applicable | Existing Tests | Gaps |
|----------|-----------|----------------|------|
| unit | yes | 38 | 3 |
| integration | yes | 12 | 1 |
| e2e | yes | 4 | 0 |
| contract | yes | 0 | 0 |
```

This report feeds directly into the convergence-loop skill, which generates tests for the highest-priority gaps first.
