Executes test suites with proper error reporting and failure analysis. Use when verifying implementations, debugging test failures, or confirming test coverage.
Executes and fixes test suites across multiple languages while setting up missing infrastructure.
Install: `npx claudepluginhub bacchus-labs/wrangler`

This skill inherits all available tools. When active, it can use any tool Claude has access to.

You are the test execution and fixing specialist. Your job is to run the project's tests, diagnose failures, fix them, and ensure the test suite is healthy. If the project lacks test infrastructure, you set it up using best practices.
## Phase 1: Detect Project Type

Approach:
1. Identify project type:
   - `package.json` (JavaScript/TypeScript)
   - `pyproject.toml`, `setup.py`, `requirements.txt` (Python)
   - `go.mod` (Go)
   - `Cargo.toml` (Rust)
   - `pom.xml`, `build.gradle` (Java)
2. Check for existing test configuration:
   - `jest.config.js`, `vitest.config.ts`, or a `test` script in `package.json`
   - `pytest.ini`, `pyproject.toml` with test config, `tox.ini`
   - `*_test.go` files
   - `tests/` directory and `cargo test` support
3. Identify the test runner: `test`, `test:unit`, `test:integration`, etc.

**Decision Point:**
Only execute Phase 1B (infrastructure setup) if no test infrastructure is detected.
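The detection step above can be sketched as a small script. The marker-file-to-language mapping mirrors the list above; the function name and first-match ordering are illustrative, not part of the skill:

```python
from pathlib import Path
from typing import Optional

# Marker files checked in order; first match wins. The mapping mirrors
# the detection list above; the function itself is a sketch.
MARKERS = [
    ("package.json", "javascript"),
    ("pyproject.toml", "python"),
    ("setup.py", "python"),
    ("requirements.txt", "python"),
    ("go.mod", "go"),
    ("Cargo.toml", "rust"),
    ("pom.xml", "java"),
    ("build.gradle", "java"),
]

def detect_project_type(root: str) -> Optional[str]:
    """Return the detected language for a project root, or None."""
    for marker, language in MARKERS:
        if (Path(root) / marker).exists():
            return language
    return None
```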
## Phase 1B: Set Up Test Infrastructure

### JavaScript/TypeScript

Preferred stack: Vitest.
Setup steps:
Detect if using TypeScript:

```bash
test -f tsconfig.json && echo "TypeScript" || echo "JavaScript"
```
Install Vitest (preferred for modern projects):

```bash
npm install -D vitest @vitest/ui
```
Create vitest.config.ts (or vitest.config.js):
```typescript
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html'],
      exclude: [
        'node_modules/',
        'dist/',
        '**/*.config.*',
        '**/.*',
      ],
    },
  },
})
```
Add test script to package.json:
```json
{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest",
    "test:coverage": "vitest run --coverage"
  }
}
```
Create example test file (if no tests exist):
```typescript
// tests/example.test.ts
import { describe, it, expect } from 'vitest'

describe('Example test suite', () => {
  it('should pass basic assertion', () => {
    expect(true).toBe(true)
  })
})
```
Alternative (Jest for legacy projects):
```bash
npm install -D jest @types/jest ts-jest
npx ts-jest config:init
```
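For reference, the generated `jest.config.js` is typically a minimal config along these lines (a sketch, not necessarily the exact output of `config:init`):

```javascript
/** @type {import('ts-jest').JestConfigWithTsJest} */
module.exports = {
  // ts-jest transpiles .ts/.tsx test files on the fly
  preset: 'ts-jest',
  testEnvironment: 'node',
};
```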
### Python

Preferred stack: pytest with pytest-cov.
Setup steps:
Install pytest:

```bash
pip install pytest pytest-cov
```
Create pytest.ini:
```ini
[pytest]
testpaths = tests
python_files = test_*.py *_test.py
python_classes = Test*
python_functions = test_*
addopts = -v --cov=. --cov-report=term --cov-report=html
```
Create tests/ directory structure:
```bash
mkdir -p tests
touch tests/__init__.py
```
Create example test:
```python
# tests/test_example.py
def test_example():
    assert True
```
Add to pyproject.toml (if exists):
```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "-v --cov"
```
### Go

Built-in testing, no setup needed:
Verify test files exist:
```bash
find . -name "*_test.go"
```
If no tests exist, create example:
```go
// example_test.go
package main

import "testing"

func TestExample(t *testing.T) {
	if true != true {
		t.Error("This should never fail")
	}
}
```
### Rust

Built-in testing; verify configuration:
Check for tests directory:
```bash
test -d tests && echo "Integration tests exist" || mkdir tests
```
Create example test if none exist:
```rust
// tests/example.rs
#[test]
fn test_example() {
    assert_eq!(2 + 2, 4);
}
```
Output from Phase 1B: Test infrastructure configured, test command available
## Phase 2: Run Tests

Approach:
Execute test command:
JavaScript/TypeScript:

```bash
npm test
# or
npm run test
# or
npx vitest run
# or
npx jest
```
Python:

```bash
pytest
# or
python -m pytest
```
Go:

```bash
go test ./...
```
Rust:

```bash
cargo test
```
Capture output: record pass/fail counts, failure messages, and stack traces.

Analyze results: classify each failure (assertion failure, runtime error, missing dependency, environment issue) before attempting fixes.
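One way to analyze results is to parse the runner's summary line. This sketch handles a pytest-style `X failed, Y passed` summary; the regex and function name are assumptions, and real runners (Vitest, Jest, go test) each use different formats:

```python
import re

def parse_summary(summary: str) -> dict:
    """Extract pass/fail/skip counts from a pytest-style summary line.

    Illustrative only: a real implementation would branch on the
    detected framework, since output formats differ between runners.
    """
    counts = {"passed": 0, "failed": 0, "skipped": 0}
    for number, status in re.findall(r"(\d+) (passed|failed|skipped)", summary):
        counts[status] = int(number)
    return counts
```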
Output: Test execution results with failure details
## Phase 3: Fix Failing Tests

Approach:
For each failing test:

1. Read the test file to understand what it expects.
2. Diagnose the root cause of the failure.
3. Fix the issue:
   - If the test is broken: correct the test.
   - If the implementation is broken: use the debugging-systematically skill to identify the root cause, then fix the implementation.
   - If it is a dependency issue: install or update the missing dependency.
4. Verify the fix:
```bash
# Run just the fixed test
npm test -- path/to/test.test.ts
# or
pytest tests/test_specific.py::test_function
```
Re-run full suite:
Iteration: repeat the diagnose-fix-verify loop until no failures remain. If tests keep failing after multiple attempts, follow the Error Handling guidance below.
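The iteration loop can be sketched as follows. The `run_tests` and `fix_one_failure` callables are hypothetical placeholders for the manual steps above, and the attempt cap is an assumption:

```python
def fix_until_green(run_tests, fix_one_failure, max_attempts=10):
    """Re-run the suite after each fix until it passes or attempts run out.

    run_tests() -> list of failing test ids (empty when green);
    fix_one_failure(test_id) applies one fix. Both are hypothetical
    placeholders for the diagnose-fix-verify steps described above.
    """
    for _ in range(max_attempts):
        failures = run_tests()
        if not failures:
            return True  # suite is green
        fix_one_failure(failures[0])
    return False  # escalate per the Error Handling section
```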
Output: All tests passing
## Phase 4: Final Verification and Report

Approach:
Run full test suite one final time:
```bash
# With coverage if available
npm run test:coverage
# or
pytest --cov
```
Verify success criteria:
Generate summary report:
````markdown
# Test Execution Report

## Status: ✅ All Tests Passing

**Project type:** [JavaScript/Python/Go/Rust/etc.]
**Test framework:** [Vitest/Jest/pytest/etc.]

## Results

- **Total tests:** [N]
- **Passed:** [N] (100%)
- **Failed:** 0
- **Skipped:** [N] (if any)
- **Duration:** [X]s

## Coverage (if available)

- **Statements:** [X]%
- **Branches:** [X]%
- **Functions:** [X]%
- **Lines:** [X]%

## Changes Made

### Test Infrastructure

[If Phase 1B was executed]
- ✅ Installed [framework]
- ✅ Created configuration file
- ✅ Added test scripts to package.json
- ✅ Created example tests

### Test Fixes

[If Phase 3 was executed]
- Fixed [N] failing tests:
  1. `test/path/file.test.ts::test_name` - [Issue: what was wrong] - [Fix: what was done]
  2. `test/path/file2.test.ts::test_name2` - [Issue] - [Fix]

### Implementation Fixes

[If bugs were fixed]
- Fixed bug in `src/path/file.ts:123` - [Description]

## Command to Run Tests

```bash
npm test
```

Generated by running-tests skill
````
**Output:** Comprehensive test report
---
## Success Criteria
✅ Test infrastructure exists (installed if missing)
✅ All tests pass (0 failures)
✅ Test command documented
✅ Fixes applied where needed
✅ Report generated
---
## Error Handling
### Cannot Detect Project Type
**Scenario:** Unknown project structure, can't identify language
**Response:**
1. Use AskUserQuestion to ask user:
- What language/framework is this project?
- What test framework do you prefer?
2. Proceed with setup based on user input
### Tests Fail After Multiple Fix Attempts
**Scenario:** Fixed 5+ tests but more keep failing
**Response:**
1. Report current status (X tests fixed, Y remaining)
2. Use AskUserQuestion to ask:
- Should I continue fixing? (might be widespread issue)
- Should I investigate root cause first?
- Should I stop and report findings?
3. Proceed based on user guidance
### Conflicting Test Frameworks
**Scenario:** Multiple test frameworks detected (Jest + Vitest, pytest + unittest)
**Response:**
1. Report conflict detected
2. Use AskUserQuestion to ask which to use
3. Optionally offer to consolidate to one framework
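Detecting a Jest + Vitest conflict can be as simple as checking `package.json` dependencies. This sketch is illustrative (the function name is an assumption); a fuller check would also look for competing config files such as `jest.config.js` alongside `vitest.config.ts`:

```python
import json

def detect_framework_conflict(package_json_text: str) -> bool:
    """Return True if both Jest and Vitest appear among dependencies."""
    pkg = json.loads(package_json_text)
    # Merge runtime and dev dependencies; test frameworks usually live in dev
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    return "jest" in deps and "vitest" in deps
```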
### Infrastructure Setup Fails
**Scenario:** Cannot install test framework (permission, network, etc.)
**Response:**
1. Report specific error
2. Provide manual setup instructions
3. Ask user to resolve and re-run
---
## Best Practices by Language/Framework
### JavaScript/TypeScript
**Modern projects (2023+):**
- Vitest (fastest, best DX, ESM-first)
## References
For detailed information, see:
- `references/detailed-guide.md` - Complete workflow details, examples, and troubleshooting