Automated testing orchestrator that intelligently runs tests based on changed files, generates coverage reports, and can create missing tests.
Install:
```
/plugin marketplace add rafaelkamimura/claude-tools
/plugin install rafaelkamimura-claude-tools@rafaelkamimura/claude-tools
```
Use Bash tool to check for test frameworks:
[ -f "package.json" ] && cat package.json | grep -q "jest\|mocha\|vitest\|cypress" && echo "js" || echo ""Use Glob tool to find test configuration files:
pytest.ini or setup.cfg (Python)go.mod (Go)Cargo.toml (Rust)Use Glob tool to locate test files:
**/*.test.* or **/*.spec.* or **/*_test.*Use Bash tool to get modified files:
git diff --name-only HEADUse Bash tool to get staged files:
git diff --staged --name-onlyOutput: "Select test scope:
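Putting the detection and diff steps together, a minimal Node.js sketch could look like the following; `detectFramework` and `changedFiles` are illustrative helper names, not part of the plugin:

```js
// Sketch: detect the project's test framework and collect changed files.
// Assumes Node.js and a git working tree; helper names are illustrative.
const { execSync } = require('child_process');
const fs = require('fs');

function detectFramework() {
  if (fs.existsSync('package.json') &&
      /jest|mocha|vitest|cypress/.test(fs.readFileSync('package.json', 'utf8'))) {
    return 'js';
  }
  if (fs.existsSync('pytest.ini') || fs.existsSync('setup.cfg')) return 'python';
  if (fs.existsSync('go.mod')) return 'go';
  if (fs.existsSync('Cargo.toml')) return 'rust';
  return null;
}

function changedFiles() {
  const modified = execSync('git diff --name-only HEAD', { encoding: 'utf8' });
  const staged = execSync('git diff --staged --name-only', { encoding: 'utf8' });
  // Deduplicate and drop empty entries from trailing newlines.
  return [...new Set(`${modified}\n${staged}`.split('\n'))].filter(Boolean);
}

console.log(detectFramework(), changedFiles());
```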
Output: "Select test scope:
Choose scope (1-6):"
WAIT for user's choice.
Output: "Enable coverage report? (y/n):" WAIT for user's response.
Output: "Auto-fix simple failures? (y/n):" WAIT for user's response.
Output: "Create missing tests? (y/n):" WAIT for user's response.
Output: "Fail on coverage decrease? (y/n):" WAIT for user's response.
Run tests based on detected framework and user's scope choice.
For JavaScript/TypeScript, use Bash tool:
```bash
npm test -- --coverage --watchAll=false    # Jest
npm run test -- --coverage --run           # Vitest
npm test -- --reporter spec                # Mocha
npx cypress run                            # E2E
```
For Python, use Bash tool:
```bash
pytest --cov=. --cov-report=html --cov-report=term
python -m unittest discover
```
For Go, use Bash tool:
```bash
go test -v -cover ./...
go test -race -coverprofile=coverage.out ./...
```
For Rust, use Bash tool:
```bash
cargo test --all
cargo tarpaulin --out Html
```
Parse the test output to extract pass/fail/skip counts, failure details, and coverage metrics.
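A sketch of the dispatch from detected framework to test command; the command strings mirror the lists above, and the `runners` map is an illustrative name:

```js
// Sketch: run the coverage-enabled test command for the detected framework.
// Command strings mirror the per-framework lists above.
const { execSync } = require('child_process');

const runners = {
  js: 'npm test -- --coverage --watchAll=false',
  python: 'pytest --cov=. --cov-report=html --cov-report=term',
  go: 'go test -race -coverprofile=coverage.out ./...',
  rust: 'cargo test --all',
};

function runTests(framework) {
  const cmd = runners[framework];
  if (!cmd) throw new Error(`No runner configured for framework: ${framework}`);
  // stdio: 'inherit' streams the runner's output directly to the console.
  execSync(cmd, { stdio: 'inherit' });
}

runTests('js');
```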
If tests failed, use Task tool to launch agents for analysis:
Use Task tool to launch 2 agents IN PARALLEL (single message with 2 Task tool invocations):
Task tool call:
Task tool call:
Wait for both agents to complete.
If user requested missing tests to be created:
Use Task tool to launch 2 agents IN PARALLEL (single message with 2 Task tool invocations):
Task tool call:
Task tool call:
Wait for both agents to complete.
For common test failures, attempt auto-fixes:
If missing tests detected:
### Analyze Untested Code
```js
// Example: function without tests
function calculateDiscount(price, percentage) {
  if (percentage < 0 || percentage > 100) {
    throw new Error('Invalid percentage');
  }
  return price * (1 - percentage / 100);
}
```
### Generate Test Cases
```js
// Generated test
describe('calculateDiscount', () => {
  test('applies correct discount', () => {
    expect(calculateDiscount(100, 20)).toBe(80);
  });
  test('handles zero discount', () => {
    expect(calculateDiscount(100, 0)).toBe(100);
  });
  test('throws on invalid percentage', () => {
    expect(() => calculateDiscount(100, -10)).toThrow();
    expect(() => calculateDiscount(100, 110)).toThrow();
  });
});
```
### Review Generated Tests
Output: "Review generated tests above. Accept? (y/n/edit):" WAIT for user's response.
### Generate Visual Report
```markdown
## Test Coverage Report

### Summary
- **Statements**: 85.2% (1247/1463)
- **Branches**: 78.4% (421/537)
- **Functions**: 91.3% (189/207)
- **Lines**: 86.1% (1198/1391)

### Coverage Change
📈 +2.3% from previous run

### Uncovered Files
| File | Coverage | Missing Lines |
|------|----------|---------------|
| auth.service.ts | 67% | 45-52, 78-81 |
| payment.processor.ts | 72% | 123-145 |

### Critical Gaps
- Authentication error handling (auth.service.ts:45-52)
- Payment retry logic (payment.processor.ts:123-145)
```
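For Jest projects, the summary numbers above can be read from the `json-summary` coverage reporter's output file, `coverage/coverage-summary.json`. A sketch, assuming that reporter is enabled:

```js
// Sketch: read Jest/Istanbul's json-summary output and print the summary lines.
// Requires coverageReporters to include "json-summary".
const fs = require('fs');

const summary = JSON.parse(fs.readFileSync('coverage/coverage-summary.json', 'utf8'));

for (const metric of ['statements', 'branches', 'functions', 'lines']) {
  const { pct, covered, total } = summary.total[metric];
  const label = metric[0].toUpperCase() + metric.slice(1);
  console.log(`- **${label}**: ${pct}% (${covered}/${total})`);
}
```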
### Coverage Diff
```diff
File: src/services/user.service.ts
- Coverage: 78% → 85% (+7%)
+ Lines covered: 45-67 (new)
- Uncovered: 89-92 (error handling)
```
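The earlier "Fail on coverage decrease?" option can be honored by saving each run's summary and diffing against it; a sketch, where the `.coverage-baseline.json` filename is an assumption:

```js
// Sketch: fail the run when line coverage drops below the saved baseline.
// The baseline filename is an assumption of this example.
const fs = require('fs');

const BASELINE = '.coverage-baseline.json';
const current = JSON.parse(
  fs.readFileSync('coverage/coverage-summary.json', 'utf8')
).total.lines.pct;

if (fs.existsSync(BASELINE)) {
  const previous = JSON.parse(fs.readFileSync(BASELINE, 'utf8')).lines;
  const delta = current - previous;
  console.log(`Coverage change: ${delta >= 0 ? '+' : ''}${delta.toFixed(1)}%`);
  if (delta < 0) process.exit(1); // fail on any decrease
}
fs.writeFileSync(BASELINE, JSON.stringify({ lines: current }));
```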
### Check Test Results
```yaml
quality_gates:
  tests_passing: true
  coverage_threshold: 80%
  no_console_logs: true
  no_skip_tests: true
  performance_benchmarks: pass
```
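The `no_console_logs` and `no_skip_tests` gates amount to a source scan; a sketch (directory names and regexes are assumptions of this example):

```js
// Sketch: check two of the quality gates with a recursive grep.
// Directory names and patterns are assumptions of this example.
const { execSync } = require('child_process');

function countMatches(pattern, dir) {
  try {
    // grep exits non-zero when nothing matches, hence the try/catch.
    const out = execSync(`grep -rEn "${pattern}" ${dir}`, { encoding: 'utf8' });
    return out.trim().split('\n').length;
  } catch {
    return 0;
  }
}

const consoleLogs = countMatches('console\\.log', 'src');
const skipped = countMatches('\\.skip\\(|xit\\(|xdescribe\\(', 'tests');

if (consoleLogs) console.error(`Gate failed: ${consoleLogs} console.log call(s) in src/`);
if (skipped) console.error(`Gate failed: ${skipped} skipped test(s) in tests/`);
process.exit(consoleLogs || skipped ? 1 : 0);
```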
### Generate Report
```markdown
## Test Suite Results
✅ **Passed**: 156/162 tests
❌ **Failed**: 6 tests
⏭️ **Skipped**: 3 tests

### Failed Tests
1. UserService › should handle invalid email
   - Expected: ValidationError
   - Received: undefined

### Recommendations
- Fix authentication tests before commit
- Add tests for new payment module
- Remove skipped tests or fix them
```
### Decision Point
If tests failed:
Output: "Tests failed. Options: (fix/ignore/debug):" WAIT for user's choice.
If user chooses 'fix': Attempt auto-fix or provide manual fix suggestions
If user chooses 'debug': Suggest running /debug-assistant
If user chooses 'ignore': Ask for reason and document
```js
// Matches: *.test.js, *.spec.ts, *.unit.js
const unitTestPattern = /\.(test|spec|unit)\.(js|ts|jsx|tsx)$/;

// Matches: *.integration.js, *.int.test.js
const integrationPattern = /\.(integration|int\.test)\.(js|ts)$/;

// Matches: *.e2e.js, cypress/*, playwright/*
const e2ePattern = /\.(e2e|cy)\.(js|ts)$|cypress|playwright/;
```
```js
// Map source files to their tests
const testMapping = {
  'src/services/user.service.ts': [
    'tests/unit/user.service.test.ts',
    'tests/integration/user.api.test.ts'
  ],
  'src/api/auth.controller.ts': [
    'tests/unit/auth.controller.test.ts',
    'tests/e2e/auth.e2e.ts'
  ]
};
```
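Combining the mapping with the changed-file list gives the "changed files" test scope; a sketch building on the patterns and `testMapping` defined above (`selectTests` is an illustrative name):

```js
// Sketch: choose which tests to run for a set of changed files.
// Uses testMapping and the type patterns defined above.
function selectTests(changed, mapping) {
  const selected = new Set();
  for (const file of changed) {
    if (mapping[file]) {
      mapping[file].forEach((t) => selected.add(t));
    } else if (unitTestPattern.test(file) || integrationPattern.test(file) || e2ePattern.test(file)) {
      selected.add(file); // a changed test file simply runs itself
    }
  }
  return [...selected];
}

// selectTests(['src/services/user.service.ts'], testMapping)
//   -> ['tests/unit/user.service.test.ts', 'tests/integration/user.api.test.ts']
```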
```bash
# Jest
npm test -- -u

# Vitest
npm test -- --update
```
```js
// Increase timeout for slow tests
jest.setTimeout(10000);

// Update mocked responses
jest.mock('./api', () => ({
  fetchUser: jest.fn(() => Promise.resolve(updatedMockData))
}));
```
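A sketch of triggering the snapshot auto-fix from the failure output; the substring heuristic is an assumption of this example, not the plugin's actual detection logic:

```js
// Sketch: re-run with --update when the output suggests stale snapshots.
// The regex heuristic below is an assumption of this example.
const { execSync } = require('child_process');

function autoFixSnapshots(testOutput) {
  if (/snapshot/i.test(testOutput) && /fail|obsolete/i.test(testOutput)) {
    console.log('Stale snapshots detected; re-running with snapshot update...');
    execSync('npm test -- -u', { stdio: 'inherit' });
    return true;
  }
  return false;
}
```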
Related commands: /commit, /review-code, /security-scan

Example configuration:
```json
{
  "framework": "jest",
  "coverage": {
    "threshold": 80,
    "failOnDecrease": true
  },
  "autoFix": {
    "snapshots": true,
    "imports": true,
    "timeouts": false
  },
  "testPattern": "**/*.test.ts",
  "excludePattern": "**/node_modules/**",
  "parallel": true,
  "maxWorkers": 4
}
```
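A sketch of loading this configuration with defaults; the `.test-orchestrator.json` filename is hypothetical:

```js
// Sketch: load the orchestrator config, falling back to defaults.
// The filename .test-orchestrator.json is a hypothetical choice.
const fs = require('fs');

const defaults = {
  framework: 'jest',
  coverage: { threshold: 80, failOnDecrease: true },
  parallel: true,
};

function loadConfig(path = '.test-orchestrator.json') {
  if (!fs.existsSync(path)) return defaults;
  return { ...defaults, ...JSON.parse(fs.readFileSync(path, 'utf8')) };
}

console.log(loadConfig());
```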
- Test Pyramid
- Coverage Goals
- Test Quality
- Performance