Write and organize tests for scientific Python packages using pytest following Scientific Python community best practices. Use when setting up test suites, writing unit tests, integration tests, testing numerical algorithms, configuring test fixtures, parametrizing tests, or setting up continuous integration. Ideal for testing scientific computations, validating numerical accuracy, and ensuring code correctness.
/plugin marketplace add uw-ssec/rse-plugins
/plugin install scientific-python-development@uw-ssec/rse-plugins

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled resources:
- assets/conftest-example.py
- assets/github-actions-tests.yml
- assets/pyproject-pytest.toml
- references/COMMON_PITFALLS.md
- references/SCIENTIFIC_PATTERNS.md
- references/TEST_PATTERNS.md

A comprehensive guide to writing effective tests for scientific Python packages using pytest, following the Scientific Python Community guidelines and testing tutorial. This skill focuses on modern testing patterns, fixtures, parametrization, and best practices specific to scientific computing.
Common Testing Tasks - Quick Decisions:
# 1. Basic test → Use simple assert
def test_function():
    assert result == expected

# 2. Floating-point comparison → Use approx
from pytest import approx
assert result == approx(0.333, rel=1e-6)

# 3. Testing exceptions → Use pytest.raises
import pytest
with pytest.raises(ValueError, match="must be positive"):
    function(-1)

# 4. Multiple inputs → Use parametrize
@pytest.mark.parametrize("input,expected", [(1, 1), (2, 4), (3, 9)])
def test_square(input, expected):
    assert input**2 == expected

# 5. Reusable setup → Use fixture
import numpy as np
@pytest.fixture
def sample_data():
    return np.array([1, 2, 3, 4, 5])

# 6. NumPy arrays → Use approx or numpy.testing
assert np.mean(data) == approx(3.0)
pytest is the de facto standard for testing Python packages: tests are plain functions that use the built-in assert statement, and pytest adds detailed failure reporting, fixtures, parametrization, and a rich plugin ecosystem on top.

Standard test directory layout:
my-package/
├── src/
│   └── my_package/
│       ├── __init__.py
│       ├── analysis.py
│       └── utils.py
├── tests/
│   ├── conftest.py
│   ├── test_analysis.py
│   └── test_utils.py
└── pyproject.toml
Key principles:
- Keep tests in a tests/ directory, separate from the source (src/)
- Name test files test_*.py (pytest discovery)
- Name test functions test_* (pytest discovery)
- No __init__.py in the tests directory (avoids importability issues)

See assets/pyproject-pytest.toml for a complete pytest configuration example.
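For shared setup, here is a minimal sketch of what a tests/conftest.py might contain (the fixture names are illustrative, not taken from the bundled conftest-example.py):

# tests/conftest.py: fixtures defined here are available to every
# test file in this directory, with no import required.
import numpy as np
import pytest


@pytest.fixture
def rng():
    """Seeded random generator so test data is reproducible."""
    return np.random.default_rng(42)


@pytest.fixture
def noisy_signal(rng):
    """A small noisy sine wave shared across test modules."""
    x = np.linspace(0.0, 2.0 * np.pi, 100)
    return x, np.sin(x) + 0.01 * rng.standard_normal(x.size)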
Basic configuration in pyproject.toml:
[tool.pytest.ini_options]
minversion = "7.0"
addopts = [
    "-ra",              # Show summary of all test outcomes
    "--showlocals",     # Show local variables in tracebacks
    "--strict-markers", # Error on undefined markers
    "--strict-config",  # Error on config issues
]
testpaths = ["tests"]
Following the Scientific Python testing recommendations, effective testing provides multiple benefits and should follow key principles:
1. Any test case is better than none
When in doubt, write the test that makes sense at the time. Don't get bogged down in taxonomy when learning; focus on writing tests that work.
2. As long as that test is correct
It's surprisingly easy to write tests that pass when they should fail, so run each new test against deliberately broken code once to confirm it can actually fail.
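A classic way a test silently passes (an illustrative example; compute_mean is the assumed API from the examples below): wrapping an assertion and its message in parentheses creates a tuple, and a non-empty tuple is always truthy.

from pytest import approx
from my_package.analysis import compute_mean  # assumed API, as below


def test_mean_wrong():
    result = compute_mean([1.0, 2.0, 3.0])
    # BUG: this asserts a two-element tuple, which is always truthy,
    # so the test passes even though the expected value is wrong.
    assert (result == 99.0, "mean should be 2.0")


def test_mean_right():
    result = compute_mean([1.0, 2.0, 3.0])
    assert result == approx(2.0), "mean should be 2.0"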
3. Start with Public Interface Tests
Begin by testing from the perspective of a user, exercising the package's public interface before its internals.
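For example (assuming compute_mean is re-exported at the package top level, which is a hypothetical layout), import the package the way a user would rather than reaching into private modules:

# Exercise the public entry point a user would actually import
# (assumes my_package re-exports compute_mean at the top level) ...
from my_package import compute_mean


def test_mean_as_a_user():
    assert compute_mean([2.0, 4.0]) == 3.0

# ... not private internals such as my_package._impl, which are free
# to change without breaking users.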
4. Organize Tests into Suites
Divide tests by type and execution time, for example fast unit tests that run on every change and slower integration or end-to-end tests reserved for CI. The benefit is a short feedback loop during development without giving up full-system coverage. One common implementation is sketched below.
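Markers are the usual mechanism (note: with --strict-markers enabled, register slow under markers in the pytest configuration first); select a suite with pytest -m "not slow" for the fast loop or pytest -m slow in CI.

import pytest


def test_unit_fast():
    """Fast unit test: runs on every change."""
    assert 2 + 2 == 4


@pytest.mark.slow
def test_full_pipeline():
    """Slow integration test: run with `pytest -m slow` in CI."""
    ...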
The recommended approach is outside-in, starting from the user's perspective: test observable behavior through the public API first, then work inward toward implementation details.
This approach ensures you're building the right thing before optimizing implementation details.
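A compact illustration of outside-in (the function and its behavior are hypothetical): the user-facing test is written first and fails until the simplest implementation appears.

import numpy as np
from pytest import approx


def remove_background(spectrum):
    # Step 2: the simplest implementation that satisfies the test.
    return spectrum - spectrum.min()


def test_background_removed():
    # Step 1: written first, from the user's point of view; it fails
    # until remove_background exists and behaves as promised.
    spectrum = np.array([3.0, 4.5, 3.2, 8.0])
    assert remove_background(spectrum).min() == approx(0.0)

# Step 3: add narrower unit tests for internals only where needed.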
# tests/test_basic.py
def test_simple_math():
    """Test basic arithmetic."""
    assert 4 == 2**2


def test_string_operations():
    """Test string methods."""
    result = "hello world".upper()
    assert result == "HELLO WORLD"
    assert "HELLO" in result
# tests/test_scientific.py
import numpy as np
from pytest import approx

from my_package.analysis import compute_mean, fit_linear


def test_compute_mean():
    """Test mean calculation."""
    data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    result = compute_mean(data)
    assert result == approx(3.0)


def test_fit_linear():
    """Test linear regression."""
    x = np.array([0, 1, 2, 3, 4])
    y = np.array([0, 2, 4, 6, 8])
    slope, intercept = fit_linear(x, y)
    assert slope == approx(2.0)
    assert intercept == approx(0.0)
See references/TEST_PATTERNS.md for detailed general-purpose testing patterns.
See references/SCIENTIFIC_PATTERNS.md for patterns specific to scientific computing.
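As a taste of those scientific patterns (a sketch, assuming NumPy is a dependency): tolerance-aware array comparison and seeded randomness keep numerical tests deterministic.

import numpy as np
import numpy.testing as npt


def test_arrays_close():
    a = np.array([0.1, 0.2, 0.3])
    # Element-wise comparison with tolerances; prints a per-element
    # mismatch report on failure.
    npt.assert_allclose(a * 3, [0.3, 0.6, 0.9], rtol=1e-12)


def test_statistics_reproducible():
    rng = np.random.default_rng(0)  # fixed seed -> same data every run
    data = rng.normal(loc=5.0, scale=1.0, size=10_000)
    assert abs(data.mean() - 5.0) < 0.05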
# Run all tests
pytest
# Run specific file
pytest tests/test_analysis.py
# Run specific test
pytest tests/test_analysis.py::test_mean
# Run tests matching pattern
pytest -k "mean or median"
# Verbose output
pytest -v
# Show local variables in failures
pytest -l # or --showlocals
# Stop at first failure
pytest -x
# Show stdout/stderr
pytest -s
# Drop into debugger on failure
pytest --pdb
# Drop into debugger at start of each test
pytest --trace
# Run last failed tests
pytest --lf
# Run failed tests first, then rest
pytest --ff
# Show which tests would be run (dry run)
pytest --collect-only
# Install pytest-cov
pip install pytest-cov
# Run with coverage
pytest --cov=my_package
# With coverage report
pytest --cov=my_package --cov-report=html
# With missing lines
pytest --cov=my_package --cov-report=term-missing
# Fail if coverage below threshold
pytest --cov=my_package --cov-fail-under=90
See assets/pyproject-pytest.toml for complete coverage configuration.
Ready-to-use templates are available in the assets/ directory: conftest-example.py, pyproject-pytest.toml, and github-actions-tests.yml.
See references/COMMON_PITFALLS.md for solutions to common testing mistakes.
Testing checklist:
- Keep tests in a tests/ directory separate from source
- Name test files test_*.py and test functions test_*
- Configure pytest in pyproject.toml
- Use pytest.approx for floating-point comparisons
- Use pytest.raises for expected exceptions and pytest.warns for expected warnings (see the sketch below)
- Mark slow tests with @pytest.mark.slow

See assets/github-actions-tests.yml for a complete GitHub Actions workflow example.
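pytest.warns mirrors pytest.raises but for warnings; a minimal sketch (the deprecated function is hypothetical):

import warnings

import pytest


def deprecated_mean(data):
    # Hypothetical legacy function that emits a deprecation warning.
    warnings.warn("use compute_mean instead", DeprecationWarning)
    return sum(data) / len(data)


def test_deprecation_is_warned():
    with pytest.warns(DeprecationWarning, match="compute_mean"):
        assert deprecated_mean([1.0, 3.0]) == 2.0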
Testing scientific Python code with pytest, following Scientific Python community principles, provides confidence in numerical results, early detection of regressions, and executable documentation of intended behavior.
Key testing principles: any test case is better than none, as long as that test is correct; start with public interface tests; and organize tests into suites by type and execution time.
Remember: Any test is better than none, but well-organized tests following these principles create trustworthy, maintainable scientific software that the community can rely on.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.