This skill should be used when the user asks to "write tests", "set up pytest", "add test fixtures", "parametrize tests", "test numerical code", "configure pytest", "add coverage", "write unit tests", "write integration tests", "test with numpy", "fix failing tests", or needs guidance on pytest configuration, test organization, scientific Python testing patterns, numerical algorithm testing, fixture design, or continuous integration for tests.
Creates pytest-based tests for scientific Python code following community guidelines and best practices.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled resources:
- assets/conftest-example.py
- assets/github-actions-tests.yml
- assets/pyproject-pytest.toml
- references/COMMON_PITFALLS.md
- references/SCIENTIFIC_PATTERNS.md
- references/TEST_PATTERNS.md

A comprehensive guide to writing effective tests for scientific Python packages using pytest, following the Scientific Python community guidelines and testing tutorial. This skill focuses on modern testing patterns, fixtures, parametrization, and best practices specific to scientific computing.
Common Testing Tasks - Quick Decisions:
```python
import numpy as np
import pytest
from pytest import approx

# 1. Basic test → use a plain assert
def test_function():
    assert result == expected

# 2. Floating-point comparison → use approx
assert result == approx(0.333, rel=1e-6)

# 3. Testing exceptions → use pytest.raises
with pytest.raises(ValueError, match="must be positive"):
    function(-1)

# 4. Multiple inputs → use parametrize
@pytest.mark.parametrize("input,expected", [(1, 1), (2, 4), (3, 9)])
def test_square(input, expected):
    assert input**2 == expected

# 5. Reusable setup → use a fixture
@pytest.fixture
def sample_data():
    return np.array([1, 2, 3, 4, 5])

# 6. NumPy arrays → use approx or numpy.testing
assert np.mean(data) == approx(3.0)
```
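For array-valued results, `numpy.testing` provides assertion helpers that compare whole arrays with tolerances and produce informative failure messages; a minimal sketch:

```python
import numpy as np

# Compare whole arrays with a relative tolerance instead of exact
# equality (floating-point accumulation makes exact checks fragile)
result = np.cumsum(np.array([0.1, 0.2, 0.3]))
expected = np.array([0.1, 0.3, 0.6])
np.testing.assert_allclose(result, expected, rtol=1e-9)

# For integer or exact results, assert_array_equal is appropriate
np.testing.assert_array_equal(np.arange(3) * 2, np.array([0, 2, 4]))
```

On failure these helpers report which elements mismatch and by how much, which plain `assert (a == b).all()` does not.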
pytest is the de facto standard for testing Python packages because it uses the plain `assert` statement, produces rich failure reports, and provides fixtures, parametrization, and a large plugin ecosystem.

Standard test directory layout:
```
my-package/
├── src/
│   └── my_package/
│       ├── __init__.py
│       ├── analysis.py
│       └── utils.py
├── tests/
│   ├── conftest.py
│   ├── test_analysis.py
│   └── test_utils.py
└── pyproject.toml
```
Key principles:
- Keep tests in a tests/ directory separate from the source (src/)
- Name test files test_*.py (pytest discovery)
- Name test functions test_* (pytest discovery)
- Avoid __init__.py in the tests directory (avoid importability issues)

See assets/pyproject-pytest.toml for a complete pytest configuration example.
Basic configuration in pyproject.toml:
```toml
[tool.pytest.ini_options]
minversion = "7.0"
addopts = [
    "-ra",              # Show summary of all test outcomes
    "--showlocals",     # Show local variables in tracebacks
    "--strict-markers", # Error on undefined markers
    "--strict-config",  # Error on config issues
]
testpaths = ["tests"]
```
Following the Scientific Python testing recommendations, effective testing provides multiple benefits and should follow key principles:
1. Any test case is better than none
When in doubt, write the test that makes sense at the time. Don't get bogged down in taxonomy when learning; focus on writing tests that work.
2. As long as that test is correct
It's surprisingly easy to write tests that pass when they should fail, so verify that each new test actually fails when the code under test is broken.
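A classic example is asserting on a tuple: the parentheses create a non-empty tuple, which is always truthy, so the assertion can never fail. A sketch, with a hypothetical `compute` function:

```python
def compute(x):
    return x * 2  # deliberately does NOT satisfy compute(2) == 5

def test_always_passes():
    # BUG: this asserts the truthiness of the 2-tuple
    # (compute(2) == 5, "...") rather than the comparison inside it,
    # so the test passes even though compute(2) != 5
    assert (compute(2) == 5, "compute(2) should equal 5")

# The correct form asserts the comparison directly, with the message
# as the second operand of assert; it would rightly fail here:
#     assert compute(2) == 5, "compute(2) should equal 5"
```

Recent Python and pytest versions warn about assertions on tuples, but only if warnings are surfaced, which is another reason to watch a new test fail at least once.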
3. Start with Public Interface Tests
Begin by testing from the perspective of a user: call the package through its public, documented interface rather than its internal helpers.
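A sketch of a public-interface test; the `compute_mean` stand-in defined inline here is hypothetical, standing in for a function a user would import from the package's top-level namespace:

```python
import numpy as np
from pytest import approx

# Stand-in for the package's documented function; in real use this
# would be `from my_package import compute_mean`
def compute_mean(data):
    return float(np.mean(data))

def test_mean_via_public_interface():
    # Exercise the function exactly as a user of the package would
    assert compute_mean(np.array([1.0, 2.0, 3.0])) == approx(2.0)
```

Tests written this way survive internal refactors: as long as the public behavior is unchanged, the suite keeps passing.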
4. Organize Tests into Suites
Divide tests by type and execution time: fast unit tests, slower integration tests, and long-running end-to-end tests. This keeps the fast suite quick enough to run on every change while the full suite still exercises the whole system regularly.
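One common mechanism for splitting suites is custom pytest markers; a sketch using a `slow` marker (the test names are illustrative):

```python
import pytest

@pytest.mark.slow
def test_full_simulation_run():
    # long-running end-to-end check, excluded from the fast suite
    ...

def test_single_step():
    # fast unit check, runs on every change
    assert 1 + 1 == 2
```

Run only the fast suite with `pytest -m "not slow"` and everything with plain `pytest`. Because `--strict-markers` is enabled in the configuration above, register the marker under `[tool.pytest.ini_options]`, e.g. `markers = ["slow: long-running tests"]`.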
The recommended approach is outside-in, starting from the user's perspective:
This approach ensures you're building the right thing before optimizing implementation details.
```python
# tests/test_basic.py
def test_simple_math():
    """Test basic arithmetic."""
    assert 4 == 2**2

def test_string_operations():
    """Test string methods."""
    result = "hello world".upper()
    assert result == "HELLO WORLD"
    assert "HELLO" in result
```
```python
# tests/test_scientific.py
import numpy as np
from pytest import approx

from my_package.analysis import compute_mean, fit_linear

def test_compute_mean():
    """Test mean calculation."""
    data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    result = compute_mean(data)
    assert result == approx(3.0)

def test_fit_linear():
    """Test linear regression."""
    x = np.array([0, 1, 2, 3, 4])
    y = np.array([0, 2, 4, 6, 8])
    slope, intercept = fit_linear(x, y)
    assert slope == approx(2.0)
    assert intercept == approx(0.0)
```
See references/TEST_PATTERNS.md for detailed general patterns and references/SCIENTIFIC_PATTERNS.md for scientific-specific patterns.
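As one combined sketch of two recurring patterns, a shared fixture with seeded randomness plus parametrized, tolerance-checked cases (the fixture and function names are illustrative):

```python
import numpy as np
import pytest
from pytest import approx

@pytest.fixture
def noisy_signal():
    # Seed the generator so the "random" data is reproducible
    rng = np.random.default_rng(42)
    return np.sin(np.linspace(0, 2 * np.pi, 100)) + rng.normal(0, 0.01, 100)

@pytest.mark.parametrize("q, expected", [(0.0, -1.0), (1.0, 1.0)])
def test_signal_range(noisy_signal, q, expected):
    # Extreme quantiles should sit near the clean signal's bounds,
    # within a tolerance that accounts for the injected noise
    assert np.quantile(noisy_signal, q) == approx(expected, abs=0.05)
```

Seeding the generator is essential: without it the tolerance would have to absorb run-to-run variation, weakening the test.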
```bash
# Run all tests
pytest

# Run specific file
pytest tests/test_analysis.py

# Run specific test
pytest tests/test_analysis.py::test_mean

# Run tests matching pattern
pytest -k "mean or median"

# Verbose output
pytest -v

# Show local variables in failures
pytest -l  # or --showlocals

# Stop at first failure
pytest -x

# Show stdout/stderr
pytest -s

# Drop into debugger on failure
pytest --pdb

# Drop into debugger at start of each test
pytest --trace

# Run last failed tests
pytest --lf

# Run failed tests first, then rest
pytest --ff

# Show which tests would be run (dry run)
pytest --collect-only
```
```bash
# Install pytest-cov
pip install pytest-cov

# Run with coverage
pytest --cov=my_package

# With HTML coverage report
pytest --cov=my_package --cov-report=html

# With missing lines
pytest --cov=my_package --cov-report=term-missing

# Fail if coverage below threshold
pytest --cov=my_package --cov-fail-under=90
```
See assets/pyproject-pytest.toml for complete coverage configuration.
Ready-to-use templates are available in the assets/ directory: a conftest.py example, a pyproject.toml pytest configuration, and a GitHub Actions workflow. See references/COMMON_PITFALLS.md for solutions to frequent testing pitfalls.
Testing checklist:
- tests/ directory separate from source
- Test files named test_*.py
- Test functions named test_*
- pytest configured in pyproject.toml
- pytest.approx for floating-point comparisons
- pytest.raises for expected exceptions
- pytest.warns for expected warnings
- @pytest.mark.slow for long-running tests

See assets/github-actions-tests.yml for a complete GitHub Actions workflow example.
Testing scientific Python code with pytest, following Scientific Python community principles, provides confidence in correctness, protection against regressions, and executable documentation of expected behavior.
Remember: Any test is better than none, but well-organized tests following these principles create trustworthy, maintainable scientific software that the community can rely on.