Review, refactor, and improve test quality. Use when user says "improve tests", "refactor tests", "test coverage", "combine tests", "table-driven", "parametrize", "test.each", "eliminate test waste", or wants to optimize test structure.
Improve test quality: review existing, refactor structure, fill coverage gaps.
Use TodoWrite to track these 6 phases: mode selection, parallel exploration, language-specific review, refactoring, verification, and summary.
$ARGUMENTS:
- review → Analyze current tests, identify issues
- refactor → Combine into table-driven tests, remove duplicates, align style
- coverage → Generate tests for uncovered code
- full → All of the above

If no argument provided, use AskUserQuestion:
| Header | Question | Options |
|---|---|---|
| Mode | What should I focus on? | 1. Review existing - Analyze current tests, identify issues 2. Refactor tests - Combine into table-driven tests, remove duplicates, align style 3. Fill coverage gaps - Generate tests for uncovered code 4. Full improvement - All of the above |
Spawn BOTH agents in a single message:
Task(
subagent_type="Explore",
run_in_background=true,
description="Test structure scan",
prompt="Explore test structure:
1. Find test files: Glob for *_test.go, test_*.py, *.test.ts, *.spec.ts
2. Identify frameworks (testify, pytest, vitest, jest)
3. Find patterns: table-driven, parametrize, test.each usage
4. Locate test helpers and fixtures
5. Check mock patterns (mockery, pytest-mock, vi.mock)
Return: language, framework, patterns found, helper locations"
)
Task(
subagent_type="Explore",
run_in_background=true,
description="Coverage analysis",
prompt="Run coverage and identify gaps:
Go: go test -coverprofile=/tmp/claude/cov.out ./... && go tool cover -func=/tmp/claude/cov.out
Python: pytest --cov=. --cov-report=term-missing
TypeScript: bun test --coverage
EXCLUDE from coverage (don't test these):
- Test files, test helpers, fixtures
- Generated code (*_gen.go, generated/)
- Mock files (mocks/, mock_*.go)
- CLI entrypoints (main.go, cmd/, __main__.py)
- Type definitions only files
Return: overall %, packages below 70%, uncovered business logic functions"
)
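The exclusion list above can be encoded once in the project's coverage config so every run measures the same thing — e.g. for Python's coverage.py (a sketch; the glob paths are assumptions, adjust them to the repo layout):

```ini
# .coveragerc — keep non-business code out of the coverage denominator
[run]
omit =
    tests/*
    */conftest.py
    */generated/*
    */mocks/*
    */__main__.py
```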
Save agent IDs for potential resumption if session is interrupted.
TaskOutput(task_id=<structure_agent_id>, block=true)
TaskOutput(task_id=<coverage_agent_id>, block=true)
Merge findings to inform next phase.
Based on detected language, spawn ONE appropriate agent:
Task(
subagent_type="go-tests",
description="Go test review",
prompt="Review Go tests for quality issues.
FOCUS ON:
- Tests that should be table-driven (combine similar)
- Pointless tests (trivial getters, constructors)
- Duplicate tests (same scenario multiple ways)
- Mock patterns (prefer mockery --with-expecter)
- Setup duplication (extract helpers)
Return structured findings with file:line references."
)
Task(
subagent_type="py-tests",
description="Python test review",
prompt="Review Python tests for quality issues.
FOCUS ON:
- Tests that should use @pytest.mark.parametrize
- Pointless tests (trivial behavior)
- Duplicate tests
- Mock patterns (pytest-mock with spec=)
- Fixture reuse opportunities
Return structured findings with file:line references."
)
Task(
subagent_type="ts-tests",
description="TypeScript test review",
prompt="Review TypeScript tests for quality issues.
FOCUS ON:
- Tests that should use test.each
- Pointless tests (prop renders, default state)
- Duplicate tests
- Mock patterns (vi.fn, vi.mock, vi.mocked)
- React Testing Library best practices
Return structured findings with file:line references."
)
Based on agent findings, apply changes:
| Language | Pattern |
|---|---|
| Go | Table-driven with t.Run(tc.name, ...) |
| Python | @pytest.mark.parametrize with pytest.param() |
| TypeScript | test.each([{input, expected, desc}]) |
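As an illustration of the Python row above, three near-duplicate tests can collapse into one parametrized test (a sketch; `slugify` is a hypothetical function under test):

```python
import pytest


def slugify(text: str) -> str:
    # Hypothetical function under test: trim, lowercase, spaces to hyphens.
    return text.strip().lower().replace(" ", "-")


# Before: test_slugify_lowercases, test_slugify_replaces_spaces,
# test_slugify_strips_whitespace — three tests covering one behavior table.
@pytest.mark.parametrize(
    "raw, expected",
    [
        pytest.param("Hello", "hello", id="lowercases"),
        pytest.param("a b c", "a-b-c", id="replaces-spaces"),
        pytest.param("  padded  ", "padded", id="strips-whitespace"),
    ],
)
def test_slugify(raw: str, expected: str) -> None:
    assert slugify(raw) == expected
```

The `pytest.param(..., id=...)` form keeps each case individually named in test output, which preserves the readability of the original separate tests.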
Look for repeated setup (3+ occurrences) → extract to:

| Language | Helper pattern |
|---|---|
| Go | testhelper/ or *_test.go helpers with t.Helper() |
| Python | conftest.py fixtures |
| TypeScript | test-utils.ts utilities |

Standardize on one mock pattern per project:

| Language | Mock pattern |
|---|---|
| Go | mockery --with-expecter --inpackage |
| Python | mocker.Mock(spec=Class) |
| TypeScript | vi.mocked() for type safety |

# Verify all tests pass
go test -v ./...
# or
pytest -v
# or
bun test
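The helper-extraction and spec'd-mock steps can be sketched in Python terms — a shared fixture replacing setup duplicated across test files, and a `spec=` mock that rejects misspelled attributes. This uses unittest.mock directly to stay self-contained; with pytest-mock the equivalent call is `mocker.Mock(spec=EmailClient)`. `EmailClient` is a hypothetical collaborator:

```python
# conftest.py — a shared fixture replaces per-file setup duplication.
from unittest import mock

import pytest


class EmailClient:
    # Hypothetical collaborator the tests need to stub out.
    def send(self, to: str, body: str) -> bool:
        raise NotImplementedError("network call")


@pytest.fixture
def email_client():
    # spec= makes the mock reject attributes EmailClient doesn't define,
    # so a typo like .snd() raises AttributeError instead of silently passing.
    client = mock.Mock(spec=EmailClient)
    client.send.return_value = True
    return client


def test_send_uses_client(email_client):
    assert email_client.send("a@example.com", "hi") is True
```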
TEST IMPROVEMENT COMPLETE
=========================
Mode: [selected mode]
Agent IDs: [list for resumption if needed]
Review:
- Issues found: X
- Issues addressed: Y
Refactoring:
- Tests combined: N → M table-driven
- Duplicates removed: X
- Helpers extracted: Y
Coverage:
- Before: XX%
- After: YY% (excluding non-business code)
Files modified:
- [list]
If no tests exist or the test framework is not configured, report this and ask the user how to proceed rather than creating tests from scratch without guidance.
Execute test improvement now.