Analyze test suites for speed bottlenecks, redundancy, and DRY violations. Use when tests are slow, duplicated, or need refactoring.
```
/plugin marketplace add bryonjacob/aug
/plugin install aug-dev@aug
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
You analyze and optimize test suites for speed, maintainability, and clarity.

Common causes of slow tests:
| Cause | Symptom | Solution |
|---|---|---|
| Database calls | >50ms, I/O wait | Mock with fixtures |
| External API calls | >100ms, network | Mock responses |
| File I/O | >20ms per operation | In-memory or fixtures |
| Complex computation | CPU-bound | Cache or simplify |
| Sleep/wait | Explicit delays | Mock time |
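To make the mocking concrete, here is a minimal sketch using pytest's built-in `monkeypatch` fixture to replace an external API call and an explicit sleep. The `myapp.client` module and its `fetch_remote` / `sync` functions are hypothetical stand-ins, not part of this skill.

```python
import myapp.client as client  # hypothetical module that calls an external API and retries with time.sleep


def test_sync_without_network_or_waiting(monkeypatch):
    # Mock the external API call: return a canned payload instead of hitting the network.
    monkeypatch.setattr(client, "fetch_remote", lambda url: {"status": "ok"})

    # Mock time: make the retry back-off a no-op (assumes client does `import time`).
    monkeypatch.setattr(client.time, "sleep", lambda seconds: None)

    result = client.sync()
    assert result["status"] == "ok"
```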
Find the slowest tests:

```bash
# Python
pytest -v --durations=0 -m "not integration"

# JavaScript
vitest run --reporter=verbose

# Just command (if available)
just slowtests 50
```
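The `-m "not integration"` filter only excludes tests that actually carry that marker. A sketch of how the marker might be registered and applied, assuming the project wires it up in `conftest.py` rather than an ini file:

```python
# conftest.py -- register the marker so pytest does not warn about it
def pytest_configure(config):
    config.addinivalue_line("markers", "integration: slow tests that hit real services")


# test_sync.py -- marked tests are excluded by -m "not integration"
import pytest


@pytest.mark.integration
def test_full_sync_against_staging():
    ...
```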
Redundancy shows up in three ways.

Tests with >70% similar assertions:

```python
# Redundant: Test B is a subset of Test A
# Test A: validates email with 5 formats
# Test B: validates email with 3 formats (subset of A)
# -> Test B is subsumed by Test A
```

An integration test that covers unit test cases:

```python
# Integration test: POST /users validates email
# Unit test: validate_email() checks formats
# -> If the integration test covers all formats, the unit test may be redundant
```

Copy-paste duplication: >70% code similarity between tests.
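One rough way to flag the code-similarity case is a pairwise comparison of test source text with `difflib`; this is an illustrative sketch, not necessarily how the analysis is performed.

```python
import difflib
import inspect
from itertools import combinations


def similar_test_pairs(test_functions, threshold=0.7):
    """Yield pairs of tests whose source text is more than `threshold` similar."""
    sources = {fn.__name__: inspect.getsource(fn) for fn in test_functions}
    for (name_a, src_a), (name_b, src_b) in combinations(sources.items(), 2):
        ratio = difflib.SequenceMatcher(None, src_a, src_b).ratio()
        if ratio > threshold:
            yield name_a, name_b, ratio
```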
DRY violations fall into three patterns.

Repeated object creation:

```python
# Before: Repeated in each test
def test_user_valid():
    user = User(name="Test", email="test@test.com")
    ...

def test_user_invalid():
    user = User(name="Test", email="invalid")
    ...

# After: Fixture
@pytest.fixture
def base_user():
    return User(name="Test", email="test@test.com")
```
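Tests that need the shared object then take the fixture as an argument; the invalid case keeps building its own user because it needs different data. `is_valid()` is a hypothetical method on `User`, used only for illustration.

```python
def test_user_valid(base_user):
    assert base_user.is_valid()  # hypothetical method on User


def test_user_invalid():
    user = User(name="Test", email="invalid")
    assert not user.is_valid()
```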
Similar tests with different inputs:

```python
# Before: Multiple tests
def test_email_valid():
    assert validate("test@test.com")

def test_email_invalid():
    assert not validate("invalid")

# After: Parameterized
@pytest.mark.parametrize("email,expected", [
    ("test@test.com", True),
    ("invalid", False),
])
def test_email_validation(email, expected):
    assert validate(email) == expected
```
Repeated assertion patterns:

```python
# Before: Repeated assertions
def test_a():
    assert result.status == "success"
    assert result.data is not None
    assert result.errors == []

# After: Helper
def assert_success(result):
    assert result.status == "success"
    assert result.data is not None
    assert result.errors == []
```
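Each test then reduces to a single call plus whatever is specific to that test; `create_user` below is only a stand-in for the code under test.

```python
def test_create_user():
    result = create_user(name="Test", email="test@test.com")  # stand-in for the code under test
    assert_success(result)
```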
Findings are categorized as follows.

Speed:
- mock_opportunities - External calls to mock
- in_memory_db - Use SQLite in-memory
- fixture_optimization - Reduce setup time
- computation_simplification - Algorithm improvements

Redundancy:
- semantic_overlap - Similar test coverage
- logical_subsumption - Integration covers unit
- copy_paste - Duplicated test code

DRY:
- fixture_candidates - Repeated setup
- parameterize_candidates - Similar tests, different inputs
- helper_candidates - Repeated assertions

Impact Score (0-10):
- (current_ms - target_ms) / current_ms * 10
- lines_saved / 10
- tests_removed * 2

Complexity Score (0-10):
Priority: impact / (complexity + 1)
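A minimal sketch of the arithmetic; how the three impact components combine into a single 0-10 score is not spelled out above, so they are kept separate here, and the function names are illustrative only.

```python
def speed_impact(current_ms, target_ms):
    return (current_ms - target_ms) / current_ms * 10


def maintainability_impact(lines_saved):
    return lines_saved / 10


def redundancy_impact(tests_removed):
    return tests_removed * 2


def priority(impact, complexity):
    # Higher impact and lower complexity sort first.
    return impact / (complexity + 1)
```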
Report results in this format:

```
Test Optimization Analysis
==========================
Module: {path}
Tests: {count} | Duration: {total}ms | Avg: {avg}ms

Speed Issues ({count}):
- test_foo: 245ms (database calls) -> mock fixtures
- test_bar: 180ms (external API) -> mock responses

Redundancy Issues ({count}):
- test_email_valid subsumed by test_user_create
- test_a and test_b: 85% similar assertions

DRY Opportunities ({count}):
- Parameterize: test_format_* (5 tests -> 1)
- Fixture: user setup in 8 tests
- Helper: success assertion in 12 tests

Priority Order:
1. [High] Mock database in test_validation.py
2. [Medium] Parameterize format tests
3. [Low] Extract user fixture
```
When optimizing:
- Run `just check-all` after changes
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.