Run the project's test suite and fix any failures. If no test runner is configured, sets up best-in-class testing infrastructure for the project's language/framework. Ensures all tests pass before completion.
Install via:

/plugin marketplace add samjhecht/wrangler
/plugin install wrangler@samjhecht-plugins

This skill inherits all available tools. When active, it can use any tool Claude has access to.
You are the test execution and fixing specialist. Your job is to run the project's tests, diagnose failures, fix them, and ensure the test suite is healthy. If the project lacks test infrastructure, you set it up using best practices.
MANDATORY: When using this skill, announce it at the start with:
🚧 Using Skill: run-the-tests | [brief purpose based on context]
Example:
🚧 Using Skill: run-the-tests | Running the project's test suite and fixing failures
This creates an audit trail showing which skills were applied during the session.
Approach:

1. Identify project type:
   - `package.json` (JavaScript/TypeScript)
   - `pyproject.toml`, `setup.py`, `requirements.txt` (Python)
   - `go.mod` (Go)
   - `Cargo.toml` (Rust)
   - `pom.xml`, `build.gradle` (Java)
2. Check for existing test configuration:
   - `jest.config.js`, `vitest.config.ts`, `test` script in `package.json`
   - `pytest.ini`, `pyproject.toml` with test config, `tox.ini`
   - `*_test.go` files
   - `tests/` directory and `cargo test` support
3. Identify the test runner and its scripts:
   - `test`, `test:unit`, `test:integration`, etc.

Decision Point: only run the setup steps below if no test infrastructure is detected (see the detection sketch that follows).
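A minimal detection sketch, assuming a POSIX shell run from the project root; the file names checked and the messages printed are illustrative, not exhaustive:

```bash
#!/usr/bin/env sh
# Rough project-type and test-infrastructure detection.
if [ -f package.json ]; then
  echo "JavaScript/TypeScript project"
  found=""
  for f in jest.config.js jest.config.ts vitest.config.js vitest.config.ts; do
    [ -f "$f" ] && found=1
  done
  # A "test" script in package.json also counts as existing infrastructure.
  grep -q '"test"' package.json && found=1
  if [ -n "$found" ]; then
    echo "Test infrastructure detected - run the existing suite"
  else
    echo "No test infrastructure - set it up first (Vitest preferred)"
  fi
elif [ -f pyproject.toml ] || [ -f setup.py ] || [ -f requirements.txt ]; then
  echo "Python project (pytest preferred)"
elif [ -f go.mod ]; then
  echo "Go project (built-in: go test ./...)"
elif [ -f Cargo.toml ]; then
  echo "Rust project (built-in: cargo test)"
else
  echo "Unknown project type - ask the user"
fi
```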
**JavaScript/TypeScript**

Preferred stack: Vitest with V8 coverage.

Setup steps:
Detect if using TypeScript:
test -f tsconfig.json && echo "TypeScript" || echo "JavaScript"
Install Vitest (preferred for modern projects):
npm install -D vitest @vitest/ui
Create vitest.config.ts (or vitest.config.js):
```typescript
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html'],
      exclude: [
        'node_modules/',
        'dist/',
        '**/*.config.*',
        '**/.*',
      ]
    }
  }
})
```
Add test script to package.json:
```json
{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest",
    "test:coverage": "vitest run --coverage"
  }
}
```
Create example test file (if no tests exist):
```typescript
// tests/example.test.ts
import { describe, it, expect } from 'vitest'

describe('Example test suite', () => {
  it('should pass basic assertion', () => {
    expect(true).toBe(true)
  })
})
```
Alternative (Jest for legacy projects):
npm install -D jest @types/jest ts-jest
npx ts-jest config:init
**Python**

Preferred stack: pytest with pytest-cov.

Setup steps:
Install pytest:
pip install pytest pytest-cov
Create pytest.ini:
```ini
[pytest]
testpaths = tests
python_files = test_*.py *_test.py
python_classes = Test*
python_functions = test_*
addopts = -v --cov=. --cov-report=term --cov-report=html
```
Create tests/ directory structure:
mkdir -p tests
touch tests/__init__.py
Create example test:
```python
# tests/test_example.py
def test_example():
    assert True
```
Add to pyproject.toml (if exists):
```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "-v --cov"
```
**Go**

Built-in testing, no setup needed:
Verify test files exist:
find . -name "*_test.go"
If no tests exist, create example:
```go
// example_test.go
package main

import "testing"

func TestExample(t *testing.T) {
	if true != true {
		t.Error("This should never fail")
	}
}
```
**Rust**

Built-in testing, verify configuration:
Check for tests directory:
test -d tests && echo "Integration tests exist" || mkdir tests
Create example test if none exist:
```rust
// tests/example.rs
#[test]
fn test_example() {
    assert_eq!(2 + 2, 4);
}
```
Output from Phase 1B: Test infrastructure configured, test command available
Approach:
Execute test command:
JavaScript/TypeScript:
```bash
npm test
# or
npm run test
# or
npx vitest run
# or
npx jest
```

Python:

```bash
pytest
# or
python -m pytest
```

Go:

```bash
go test ./...
```

Rust:

```bash
cargo test
```
Capture output: record stdout, stderr, and the exit code (see the sketch below).

Analyze results: count passed, failed, and skipped tests and collect each failure's message.
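A minimal capture sketch, assuming bash and `npm test` as the suite entry point (substitute `pytest`, `go test ./...`, or `cargo test` as appropriate; the log file name is arbitrary):

```bash
#!/usr/bin/env bash
set -o pipefail   # make the pipeline's exit status reflect the test command, not tee
npm test 2>&1 | tee test-output.log
status=$?
if [ "$status" -ne 0 ]; then
  echo "Suite failed (exit $status); failure details are in test-output.log"
fi
```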
Output: Test execution results with failure details
Approach:
For each failing test:

1. Read the test file to understand what it expects.
2. Diagnose the root cause of the failure.
3. Fix the issue:
   - If the test is broken: update the test to match the intended behavior.
   - If the implementation is broken: use the systematic-debugging skill to identify the root cause, then fix the code.
   - If it is a dependency issue: install or update the dependency.
Verify fix:

```bash
# Run just the fixed test
npm test -- path/to/test.test.ts
# or
pytest tests/test_specific.py::test_function
```
Re-run full suite: confirm the fixes introduced no regressions.

Iteration: repeat fix-and-re-run until the suite is green (see the loop sketch below).
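The fix-and-re-run loop, sketched in bash under the assumption that `npm test` is the suite command and that five full runs is a reasonable cap before escalating to the user:

```bash
#!/usr/bin/env bash
attempts=0
until npm test; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 5 ]; then
    echo "Still failing after $attempts runs - report status and ask how to proceed"
    break
  fi
  echo "Suite still red - apply the next fix, then re-run"
  # (the actual test/implementation fixes happen here, outside this sketch)
done
```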
Output: All tests passing
Approach:
Run full test suite one final time:
```bash
# With coverage if available
npm run test:coverage
# or
pytest --cov
```
Verify success criteria:
Generate summary report:
# Test Execution Report
## Status: ✅ All Tests Passing
**Project type:** [JavaScript/Python/Go/Rust/etc.]
**Test framework:** [Vitest/Jest/pytest/etc.]
## Results
- **Total tests:** [N]
- **Passed:** [N] (100%)
- **Failed:** 0
- **Skipped:** [N] (if any)
- **Duration:** [X]s
## Coverage (if available)
- **Statements:** [X]%
- **Branches:** [X]%
- **Functions:** [X]%
- **Lines:** [X]%
## Changes Made
### Test Infrastructure
[If Phase 1B was executed]
- ✅ Installed [framework]
- ✅ Created configuration file
- ✅ Added test scripts to package.json
- ✅ Created example tests
### Test Fixes
[If Phase 3 was executed]
- Fixed [N] failing tests:
1. `test/path/file.test.ts::test_name` - [Issue: what was wrong] - [Fix: what was done]
2. `test/path/file2.test.ts::test_name2` - [Issue] - [Fix]
### Implementation Fixes
[If bugs were fixed]
- Fixed bug in `src/path/file.ts:123` - [Description]
## Command to Run Tests
```bash
npm test
```

Generated by run-the-tests skill
**Output:** Comprehensive test report
---
## Success Criteria
✅ Test infrastructure exists (installed if missing)
✅ All tests pass (0 failures)
✅ Test command documented
✅ Fixes applied where needed
✅ Report generated
---
## Error Handling
### Cannot Detect Project Type
**Scenario:** Unknown project structure, can't identify language
**Response:**
1. Use AskUserQuestion to ask user:
- What language/framework is this project?
- What test framework do you prefer?
2. Proceed with setup based on user input
### Tests Fail After Multiple Fix Attempts
**Scenario:** Fixed 5+ tests but more keep failing
**Response:**
1. Report current status (X tests fixed, Y remaining)
2. Use AskUserQuestion to ask:
- Should I continue fixing? (might be widespread issue)
- Should I investigate root cause first?
- Should I stop and report findings?
3. Proceed based on user guidance
### Conflicting Test Frameworks
**Scenario:** Multiple test frameworks detected (Jest + Vitest, pytest + unittest)
**Response:**
1. Report conflict detected (see the check sketched below)
2. Use AskUserQuestion to ask which to use
3. Optionally offer to consolidate to one framework
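For the Jest + Vitest case, a quick check might look like this (a sketch assuming a Node project; the Python equivalent would look for both pytest configuration and `unittest`-style suites):

```bash
# Flag projects that declare both Jest and Vitest.
if grep -q '"jest"' package.json && grep -q '"vitest"' package.json; then
  echo "Both Jest and Vitest are declared - ask the user which framework to keep"
fi
```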
### Infrastructure Setup Fails
**Scenario:** Cannot install test framework (permission, network, etc.)
**Response:**
1. Report specific error
2. Provide manual setup instructions
3. Ask user to resolve and re-run
---
## Best Practices by Language/Framework
### JavaScript/TypeScript
**Modern projects (2023+):**
- Vitest (fastest, best DX, ESM-first)
- Alternative: Jest (if team preference or legacy)
**Configuration:**
```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
    environment: 'node', // or 'jsdom' for browser code
    setupFiles: ['./tests/setup.ts'],
    coverage: {
      provider: 'v8',
      include: ['src/**/*.ts'],
      exclude: ['**/*.test.ts', '**/*.config.ts']
    }
  }
})
```
### Python

**Preferred:** pytest with pytest-cov

**Configuration:**
```ini
[pytest]
testpaths = tests
addopts =
    -v
    --cov=src
    --cov-report=term-missing
    --cov-report=html
    --strict-markers
```
### Go

Built-in testing is excellent:
```bash
go test ./...          # Run all tests
go test -v ./...       # Verbose
go test -cover ./...   # With coverage
go test -race ./...    # Race detection
```
### Rust

Built-in testing:
```bash
cargo test             # Run all tests
cargo test --verbose   # Verbose output
cargo test --release   # Optimized builds
```
If the project follows TDD (has the test-driven-development skill), ensure fixes maintain TDD principles.

This skill enforces the verification-before-completion principle: never report the work as complete while the suite is failing, as sketched below.
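A minimal sketch of that principle, assuming `npm test` is the suite's entry point:

```bash
# Only report completion when the full suite exits cleanly.
if npm test; then
  echo "All tests pass - safe to mark the work complete"
else
  echo "Tests are failing - keep fixing before claiming completion"
  exit 1
fi
```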
For large test suites, enable parallel execution:
Vitest:
vitest run --threads
pytest:
pytest -n auto # Requires pytest-xdist
Go:
go test -parallel 8 ./...
After setting up tests, suggest CI integration:
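For example, the commands a CI test job might run, assuming the npm scripts added above (adapt the install and test commands per language):

```bash
# Typical CI test step for the JavaScript setup described earlier.
npm ci                  # clean, reproducible dependency install
npm run test:coverage   # runs the suite; a non-zero exit fails the build
```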
User: "Run the tests"
Workflow:
"test": "vitest run"npm testUser: "Run the tests"
Workflow:
User: "Run the tests"
Workflow:
1. Ran `pytest`
2. Found a failing test for the `calculate_total()` function
3. Fixed `calculate_total()`

User: "Run the tests"
Workflow:
For iterative development, suggest watch mode:
```bash
npm run test:watch
# or
ptw   # pytest needs the pytest-watch package for watch mode
```
Set up coverage enforcement:
```json
{
  "test": {
    "coverage": {
      "thresholds": {
        "lines": 80,
        "functions": 80,
        "branches": 80,
        "statements": 80
      }
    }
  }
}
```
Suggest organizing tests by type:
```
tests/
├── unit/         # Fast, isolated tests
├── integration/  # Tests with dependencies
└── e2e/          # Full system tests
```
Run specific suites:
```bash
npm test -- tests/unit
npm test -- tests/integration
```
Run-the-Tests Skill v1.0