From cc-arsenal
Generate comprehensive test suites with coverage analysis and parallel test writing. Automatically activates when users want to write tests, add test coverage, generate test cases, improve testing, or analyze coverage gaps. Supports pytest, vitest, jest, and other major test frameworks.
npx claudepluginhub mgiovani/cc-arsenal --plugin cc-arsenal-teams
This skill is limited to using the following tools:
Generate comprehensive test suites with coverage gap analysis and parallel test writing, following testing best practices across any project type and framework.
$ARGUMENTS
CRITICAL: Test generation must be based on ACTUAL code and VERIFIED project patterns, never on assumptions about what the code might look like.
This skill includes automatic verification before completion: when the agent attempts to stop working, an automated verification agent runs to ensure quality.
Verification Steps: run the full test suite, re-run coverage against the Phase 1 baseline, and lint the generated test files.
Behavior: if any check fails, completion is blocked and the failures are reported back; once every check passes, the skill is allowed to finish.
Example blocked completion:
Test verification failed:
Tests: FAILED (2 new tests failing)
- test_user_service_create: AttributeError: 'UserService' has no method 'create_user'
- test_parse_config_empty: Expected ValueError, got None
Coverage: IMPROVED (72% → 78%)
Lint: PASSED
Cannot complete until all tests pass. Fix the failing tests.
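A minimal sketch of the gate's logic, assuming the discovered test and lint commands are available as shell strings and the coverage percentages have already been parsed (the names here are illustrative, not the skill's actual internals):

import subprocess

def verify_before_completion(test_cmd: str, lint_cmd: str,
                             baseline_cov: float, new_cov: float) -> list[str]:
    """Return blocking problems; an empty list means completion is allowed."""
    problems = []
    # 1. The full test suite must pass (non-zero exit code means failures).
    if subprocess.run(test_cmd, shell=True).returncode != 0:
        problems.append("Tests: FAILED - fix the failing tests")
    # 2. Coverage must not regress against the Phase 1 baseline.
    if new_cov < baseline_cov:
        problems.append(f"Coverage: REGRESSED ({baseline_cov}% → {new_cov}%)")
    # 3. Generated test files must pass lint.
    if subprocess.run(lint_cmd, shell=True).returncode != 0:
        problems.append("Lint: FAILED")
    return problems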
This skill uses Claude Code's Task Management System to track test generation progress with dependency-aware task tracking.
When to Use Tasks: multi-phase test generation across several modules, where phases must run in order and parallel subagents need tracking.
When to Skip Tasks: small, single-file test additions that can be completed in one pass.
Step 0.1: Create Task Structure
Before generating tests, create the dependency-aware task structure:
TaskCreate:
  subject: "Phase 0: Discover project test workflow"
  description: "Identify test framework, coverage tools, and conventions"
  activeForm: "Discovering test workflow"

TaskCreate:
  subject: "Phase 1: Analyze coverage gaps"
  description: "Run coverage, identify untested code, prioritize targets"
  activeForm: "Analyzing coverage gaps"

TaskCreate:
  subject: "Phase 2: Create test plan"
  description: "Present test plan to user for approval"
  activeForm: "Creating test plan"

TaskCreate:
  subject: "Phase 3: Generate tests in parallel"
  description: "Spawn subagents to write tests for each module group"
  activeForm: "Generating tests"

TaskCreate:
  subject: "Phase 4: Quality verification"
  description: "Run all tests, check coverage improvement, lint"
  activeForm: "Verifying test quality"

TaskCreate:
  subject: "Phase 5: Final commit"
  description: "Commit tests with coverage summary"
  activeForm: "Committing tests"
# Set up strict sequential chain
TaskUpdate: { taskId: "2", addBlockedBy: ["1"] }
TaskUpdate: { taskId: "3", addBlockedBy: ["2"] }
TaskUpdate: { taskId: "4", addBlockedBy: ["3"] }
TaskUpdate: { taskId: "5", addBlockedBy: ["4"] }
TaskUpdate: { taskId: "6", addBlockedBy: ["5"] }
# Start first task
TaskUpdate: { taskId: "1", status: "in_progress" }
Step 0.2: Discover Test Workflow
Use a Haiku-powered Explore agent for token-efficient discovery:
Use Task tool with Explore agent:
- prompt: "Discover the testing workflow for this project:
1. Read CLAUDE.md if it exists - extract testing conventions and commands
2. Check for task runners: Makefile, justfile, package.json scripts, pyproject.toml scripts
3. Identify the test framework:
- Python: pytest, unittest, nose2
- JavaScript/TypeScript: vitest, jest, mocha, playwright, cypress
- Other: go test, cargo test, etc.
4. Identify the test command (e.g., make test, npm test, pytest, bun test)
5. Identify the coverage command (e.g., pytest --cov, vitest --coverage, jest --coverage, make coverage)
6. Identify the lint command
7. Find existing test directory structure and naming conventions
8. Look at 2-3 existing test files to understand:
- Import patterns and test utilities
- Fixture/mock patterns used
- Assertion style (assert, expect, etc.)
- Test organization (describe/it vs test functions)
- Setup/teardown patterns
- Factory or fixture patterns
9. Check for test configuration files:
- pytest.ini, conftest.py, pyproject.toml [tool.pytest.ini_options], setup.cfg [tool:pytest]
- vitest.config.ts, jest.config.js
- .nycrc, c8 config, istanbul config
10. Note any test-related CI/CD configuration
Return a structured summary of all testing infrastructure."
- subagent_type: "Explore"
- model: "haiku"
Store discovered commands and patterns for use in later phases.
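For example, the stored result for a pytest project might look like this (every value below is illustrative, not a required schema):

discovered = {
    "framework": "pytest",
    "test_cmd": "make test",  # fall back to "pytest" if no task runner exists
    "coverage_cmd": "pytest --cov --cov-report=term-missing",
    "lint_cmd": "ruff check .",
    "test_dir": "tests/",
    "test_naming": "test_*.py",
    "conventions": "function-style tests, fixtures in conftest.py, plain assert",
}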
Step 0.3: Complete Phase 0
TaskUpdate: { taskId: "1", status: "completed" }
TaskList # Check that Task 2 is now unblocked
Goal: Identify what code lacks test coverage and prioritize test generation targets.
Step 1.1: Start Phase 1
TaskUpdate: { taskId: "2", status: "in_progress" }
Step 1.2: Establish Coverage Baseline
Run the discovered coverage command to get the current state:
# Examples (use the ACTUAL discovered command):
pytest --cov --cov-report=term-missing
vitest --coverage
jest --coverage
make coverage
Capture the output. If no coverage tooling exists, use an Explore agent to manually identify untested code:
Use Task tool with Explore agent:
- prompt: "Analyze test coverage gaps for this project:
1. List all source files/modules in the project (exclude test files, configs, migrations)
2. List all test files
3. For each source file, check if a corresponding test file exists
4. For files with tests, skim the test file to estimate which functions/methods are tested
5. Identify files with no tests at all
6. Identify complex files (many functions, classes, branching logic) that likely need more tests
Return a structured report:
- Files with NO test coverage (highest priority)
- Files with PARTIAL coverage (functions/methods missing tests)
- Files with GOOD coverage (low priority)
- Overall estimated coverage percentage"
- subagent_type: "Explore"
- model: "haiku"
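If coverage tooling does exist, the baseline percentage can be extracted with a small parser. A sketch for coverage.py's terminal report, which ends in a TOTAL row (vitest/jest would instead read their JSON coverage summaries):

import re
import subprocess

def coverage_baseline(coverage_cmd: str) -> float | None:
    """Run the discovered coverage command and extract the TOTAL percentage."""
    out = subprocess.run(coverage_cmd, shell=True,
                         capture_output=True, text=True).stdout
    # coverage.py prints a summary row like: "TOTAL    1234    270    78%"
    match = re.search(r"^TOTAL\s.*?(\d+)%\s*$", out, re.MULTILINE)
    return float(match.group(1)) if match else None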
Step 1.3: Prioritize Test Targets
Rank files/modules for test generation by:
- No test coverage at all (highest priority)
- Partial coverage with untested functions or branches
- Complexity (many functions, classes, branching logic)
- Recent change frequency (e.g., from git log --oneline -20 --name-only)

If the user specified target files/modules, prioritize those. Otherwise, use the ranking above.
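A minimal scoring sketch under these criteria (the weights and extension filter are illustrative only):

import subprocess
from collections import Counter

def rank_targets(no_coverage: list[str], partial: list[str]) -> list[str]:
    """Order candidate files: missing coverage first, recent churn breaks ties."""
    log = subprocess.run(
        ["git", "log", "--oneline", "-20", "--name-only"],
        capture_output=True, text=True,
    ).stdout.splitlines()
    # Rough churn count: file paths in the log, ignoring commit subject lines.
    churn = Counter(line for line in log if line.endswith((".py", ".ts", ".js")))
    scored = [(f, 100 + churn[f]) for f in no_coverage]
    scored += [(f, 10 + churn[f]) for f in partial]
    return [f for f, _ in sorted(scored, key=lambda t: -t[1])]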
Step 1.4: Complete Phase 1
TaskUpdate: { taskId: "2", status: "completed" }
TaskList # Check that Task 3 is now unblocked
Goal: Present a test plan for user review before generating tests.
Step 2.1: Start Phase 2
TaskUpdate: { taskId: "3", status: "in_progress" }
Step 2.2: Present Test Plan
Use AskUserQuestion to present the plan and get approval:
AskUserQuestion:
  question: "Here's the test generation plan based on coverage analysis. Which approach do you prefer?"
  header: "Test Plan"
  options:
    - label: "Full coverage (Recommended)"
      description: "Generate tests for all [N] identified gaps: [list of modules]. Estimated [M] test files."
    - label: "Critical paths only"
      description: "Focus on [top modules] with highest business impact. Estimated [K] test files."
    - label: "Specific modules"
      description: "Let me specify which modules to test."
The plan should include, for each target: the source files covered, the test file(s) to be created, and the main scenarios to test (happy path, edge cases, error paths).
Step 2.3: Complete Phase 2
TaskUpdate: { taskId: "3", status: "completed" }
TaskList # Check that Task 4 is now unblocked
Goal: Generate tests efficiently using parallel subagents, one per module or file group.
Step 3.1: Start Phase 3
TaskUpdate: { taskId: "4", status: "in_progress" }
Step 3.2: Create Parallel Subagent Tasks
Group approved test targets into logical units (by module, feature area, or related files) and create a child task for each:
# Example: 3 module groups to test in parallel
TaskCreate:
  subject: "Write tests for auth module"
  description: "Generate unit tests for src/auth/ (login, register, token management)"
  activeForm: "Writing auth module tests"
  metadata: { parent: "4", module: "auth" }

TaskCreate:
  subject: "Write tests for user service"
  description: "Generate unit tests for src/services/user.py (CRUD, validation)"
  activeForm: "Writing user service tests"
  metadata: { parent: "4", module: "user-service" }

TaskCreate:
  subject: "Write tests for API routes"
  description: "Generate integration tests for src/routes/ (endpoints, middleware)"
  activeForm: "Writing API route tests"
  metadata: { parent: "4", module: "api-routes" }

# Phase 4 verification (task 5) is blocked by ALL parallel child tasks
TaskUpdate: { taskId: "5", addBlockedBy: ["child-task-ids"] }
Step 3.3: Spawn Parallel Subagents
For each module group, spawn a Sonnet subagent using the Task tool:
Subagent Instructions Template:
Generate comprehensive tests for [MODULE/FILES].
IMPORTANT: Read these source files FIRST to understand the actual code:
[LIST OF SOURCE FILES TO READ]
Then read these existing test files for patterns to follow:
[LIST OF EXISTING TEST FILES]
Project testing conventions (discovered in Phase 0):
- Test framework: [FRAMEWORK]
- Test command: [COMMAND]
- Test file naming: [PATTERN e.g., test_*.py, *.test.ts, *.spec.js]
- Test directory: [PATH]
- Fixture patterns: [DESCRIBE]
- Mock patterns: [DESCRIBE]
- Assertion style: [DESCRIBE]
Requirements:
1. Follow the EXACT test patterns from existing test files
2. Use the same import patterns, fixtures, and assertion style
3. Test coverage targets:
- All public functions and methods
- Happy path (normal inputs and expected outputs)
- Edge cases (empty inputs, boundary values, null/undefined)
- Error paths (invalid inputs, exceptions, error handling)
- Branch coverage (if/else, switch, ternary paths)
4. Use descriptive test names that explain WHAT is being tested and EXPECTED behavior
5. Keep tests independent - no shared mutable state between tests
6. Mock external dependencies (databases, APIs, file system) appropriately
7. Do NOT test private/internal implementation details
8. Do NOT add unnecessary comments - test names should be self-documenting
After writing tests:
1. Run the test command to verify ALL tests pass
2. Fix any failures before reporting completion
3. Report: files created, test count, what is covered
Do NOT commit - the main agent handles commits.
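For a Python project, a subagent following this template might produce something like the sketch below. The UserService module and its validation behavior are hypothetical, purely to illustrate the required coverage mix (happy path, edge case, error path) and the fixture/mocking style:

import pytest
from unittest.mock import MagicMock

# Hypothetical module under test - names are illustrative only.
from src.services.user import UserService, ValidationError

@pytest.fixture
def service():
    # Mock the external database dependency so tests stay isolated.
    return UserService(db=MagicMock())

def test_create_user_returns_user_with_email(service):
    user = service.create_user(email="ada@example.com", name="Ada")
    assert user.email == "ada@example.com"

def test_create_user_strips_whitespace_from_name(service):
    user = service.create_user(email="ada@example.com", name="  Ada  ")
    assert user.name == "Ada"

def test_create_user_rejects_empty_email(service):
    with pytest.raises(ValidationError):
        service.create_user(email="", name="Ada")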
Model Selection: use Sonnet subagents for test writing; reserve the Haiku-powered Explore agent for discovery and analysis.
Parallelization Strategy: one subagent per module group, with no shared files or state between groups, so all groups can run concurrently.
Step 3.4: Review Subagent Output
After each subagent completes, review its report, spot-check the generated tests against the conventions from Phase 0, then mark its task done:
TaskUpdate: { taskId: "child-id", status: "completed" }

Step 3.5: Complete Phase 3
# After all subagent tasks complete
TaskUpdate: { taskId: "4", status: "completed" }
TaskList # Check that Task 5 is now unblocked
Goal: Verify all tests pass together, coverage improved, and no regressions.
Step 4.1: Start Phase 4
TaskUpdate: { taskId: "5", status: "in_progress" }
Step 4.2: Run Full Test Suite
Execute the discovered test command to run ALL tests (new and existing):
# Use the ACTUAL discovered command, e.g.:
make test
pytest
npm test
bun test
ALL tests must pass. If any test fails, fix the generated test, or flag it to the user if it reveals a genuine bug in the source code, then rerun until the suite is green.
Step 4.3: Verify Coverage Improvement
Run the coverage command and compare against the Phase 1 baseline:
# Use the ACTUAL discovered coverage command, e.g.:
pytest --cov --cov-report=term-missing
vitest --coverage
jest --coverage
make coverage
Record the baseline and new coverage percentages, the delta, and any files still below the target, for use in the commit message and final summary.
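A minimal comparison sketch, reusing the coverage_baseline parser from the Phase 1 sketch (the discovered dict is the hypothetical summary from Phase 0):

baseline = 72.0  # captured in Step 1.2
current = coverage_baseline(discovered["coverage_cmd"])
assert current is not None and current > baseline, \
    f"Coverage did not improve: {baseline}% → {current}%"
print(f"Coverage: IMPROVED ({baseline}% → {current}%)")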
Step 4.4: Lint Test Files
Run the discovered lint command on test files:
# Use the ACTUAL discovered lint command
make lint
ruff check tests/
npm run lint
Fix any linting errors in the generated test files.
Step 4.5: Complete Phase 4
TaskUpdate: { taskId: "5", status: "completed" }
TaskList # Check that Task 6 is now unblocked
Step 5.1: Start Phase 5
TaskUpdate: { taskId: "6", status: "in_progress" }
Step 5.2: Create Commit
If the /cc-arsenal:git:commit skill is available, use it. Otherwise, create a conventional commit manually:
git add [test files created/modified]
git commit -m "test: add comprehensive tests for [modules]
- [N] test files, [M] test cases added
- Coverage: [X]% → [Y]% (+[diff]%)
- Covers: [brief list of modules/features tested]
- Frameworks: [test framework used]"
Step 5.3: Complete Phase 5 and Test Generation
TaskUpdate: { taskId: "6", status: "completed" }
TaskList # Show final status - all tasks should be completed
Provide a summary including: test files created, number of test cases added, coverage before → after, and the commit reference.
Generated tests must follow these principles: mirror existing project patterns, test public behavior rather than implementation details, keep tests independent and deterministic, and cover happy paths, edge cases, and error paths.
For framework-specific test patterns, fixtures, and examples, see: