From nickcrew-claude-ctx-plugin
Provides Python testing patterns and best practices with pytest, fixtures, mocking, parametrization, and Hypothesis property-based testing for unit/integration tests and TDD.
Install:

```bash
npx claudepluginhub nickcrew/claude-cortex
```

This skill uses the workspace's default tool permissions.
Comprehensive guide to implementing robust testing strategies in Python using pytest, fixtures, mocking, parameterization, and property-based testing.
Bundled resources: eleven topic guides under `references/` (listed in the table below) plus `validation/rubric.yaml`. Use this skill when writing tests, building test suites, or debugging test failures.
- **Test discovery**: files matching `test_*.py` or `*_test.py`; functions starting with `test_`
- **Fixtures**: reusable test resources with setup and teardown; scopes are `function` (default), `class`, `module`, and `session`; put shared fixtures in `conftest.py` for project-wide availability
- **Assertions**: use plain `assert` statements, and `pytest.raises()` for exceptions
- **Organization**: separate `unit/`, `integration/`, and `e2e/` directories
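As a sketch of fixture scopes and `conftest.py`, the following is illustrative only (the fixture names and values are hypothetical, not from any real project):

```python
# conftest.py -- fixtures defined here are visible to every test in the directory
import pytest


@pytest.fixture(scope="session")
def db_url():
    """Created once for the entire test run (scope='session')."""
    return "sqlite:///:memory:"


@pytest.fixture  # scope="function" is the default: a fresh copy per test
def user_record(db_url):
    record = {"name": "alice", "db": db_url}
    yield record      # the test runs here
    record.clear()    # teardown runs after each test finishes
```

Any test in the directory can now take `user_record` as a parameter without importing anything.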
Load detailed references for specific topics:
| Task | Reference File |
|---|---|
| Pytest basics, test structure, AAA pattern | skills/python-testing-patterns/references/pytest-fundamentals.md |
| Fixtures, scopes, setup/teardown, conftest.py | skills/python-testing-patterns/references/fixtures.md |
| Parametrization, multiple test cases | skills/python-testing-patterns/references/parametrized-tests.md |
| Mocking, patching, unittest.mock, pytest-mock | skills/python-testing-patterns/references/mocking.md |
| Async tests, pytest-asyncio, event loops | skills/python-testing-patterns/references/async-testing.md |
| Property-based testing, Hypothesis, strategies | skills/python-testing-patterns/references/property-based-testing.md |
| Monkeypatch, environment variables, attributes | skills/python-testing-patterns/references/monkeypatch.md |
| Test structure, markers, conftest.py patterns | skills/python-testing-patterns/references/test-organization.md |
| Coverage measurement, reports, thresholds | skills/python-testing-patterns/references/coverage.md |
| Database, API, Redis, message queue testing | skills/python-testing-patterns/references/integration-testing.md |
| Best practices, test quality, fixture design | skills/python-testing-patterns/references/best-practices.md |
```python
# test_example.py
import pytest


def test_something():
    """Descriptive test name."""
    # Arrange
    expected = 5
    # Act
    result = 2 + 3
    # Assert
    assert result == expected
```
Run tests:

```bash
pytest                   # Run all tests
pytest -v                # Verbose output
pytest tests/unit/       # Specific directory
pytest -k "test_user"    # Match pattern
pytest -m unit           # Run marked tests
```
```python
@pytest.fixture
def sample_data():
    """Provide test data."""
    data = {"key": "value"}
    yield data
    # Cleanup if needed


def test_with_fixture(sample_data):
    assert sample_data["key"] == "value"
```
```python
@pytest.mark.parametrize("input,expected", [
    (2, 4),
    (3, 9),
    (4, 16),
])
def test_square(input, expected):
    assert input ** 2 == expected
```
```python
from unittest.mock import patch


@patch("module.external_api_call")  # "module" is a placeholder for your own module path
def test_with_mock(mock_api):
    mock_api.return_value = {"status": "ok"}
    result = my_function()  # my_function() calls module.external_api_call() internally
    assert result["status"] == "ok"
    mock_api.assert_called_once()
```
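Beyond `return_value`, a mock can be given a `side_effect` (consecutive return values, or an exception to raise) and a `spec` so that attribute names the real class doesn't have are rejected. A minimal sketch, where `ApiClient` and `get_username` are hypothetical stand-ins:

```python
from unittest.mock import MagicMock


class ApiClient:
    """Hypothetical client whose real method would hit the network."""

    def fetch_user(self, user_id):
        raise RuntimeError("real network call")


def get_username(client, user_id):
    return client.fetch_user(user_id)["name"]


def test_side_effect_and_spec():
    client = MagicMock(spec=ApiClient)    # spec: only real attributes are allowed
    client.fetch_user.side_effect = [     # consecutive calls yield consecutive values
        {"name": "alice"},
        {"name": "bob"},
    ]
    assert get_username(client, 1) == "alice"
    assert get_username(client, 2) == "bob"
    client.fetch_user.assert_called_with(2)
```

Passing `spec=ApiClient` means a typo like `client.fetch_usr` raises `AttributeError` instead of silently returning a new mock.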
Measure coverage (requires the `pytest-cov` plugin):

```bash
pytest --cov=src --cov-report=term-missing
pytest --cov=src --cov-report=html
pytest --cov=src --cov-fail-under=80
```
pytest.ini:

```ini
[pytest]
testpaths = tests
python_files = test_*.py
addopts = -v --strict-markers --cov=src
markers =
    unit: Unit tests
    integration: Integration tests
    slow: Slow tests
```
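The same settings can live in `pyproject.toml` instead, which pytest also reads (an equivalent sketch):

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
addopts = "-v --strict-markers --cov=src"
markers = [
    "unit: Unit tests",
    "integration: Integration tests",
    "slow: Slow tests",
]
```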
Exception testing:

```python
with pytest.raises(ValueError, match="error message"):
    function_that_raises()
```
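When the test also needs to inspect the exception object, `pytest.raises` can bind it with `as`. A self-contained sketch (`withdraw` is a hypothetical function used only for illustration):

```python
import pytest


def withdraw(balance, amount):
    """Hypothetical domain function for illustration."""
    if amount > balance:
        raise ValueError(f"insufficient funds: {amount} > {balance}")
    return balance - amount


def test_withdraw_overdraft():
    with pytest.raises(ValueError, match="insufficient funds") as excinfo:
        withdraw(100, 150)
    # excinfo.value is the raised exception instance
    assert "150" in str(excinfo.value)
```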
Async testing:
@pytest.mark.asyncio
async def test_async_function():
result = await async_operation()
assert result is not None
Temporary files:

```python
def test_file_operation(tmp_path):
    test_file = tmp_path / "test.txt"
    test_file.write_text("content")
    assert test_file.read_text() == "content"
```
Markers for test selection:

```python
@pytest.mark.slow
@pytest.mark.integration
def test_database_operation():
    pass
```
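Property-based testing with Hypothesis (a third-party package, `pip install hypothesis`) generates many inputs per test instead of a few hand-picked cases. A minimal sketch:

```python
from hypothesis import given, strategies as st


@given(st.lists(st.integers()))
def test_sorting_is_idempotent(values):
    once = sorted(values)
    assert sorted(once) == once      # sorting an already-sorted list changes nothing


@given(st.integers(), st.integers())
def test_addition_commutes(a, b):
    assert a + b == b + a
```

Run with plain `pytest`; when a property fails, Hypothesis shrinks the input to a minimal counterexample before reporting it.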
Common pitfalls:

- **Not using fixtures**: repeating setup code across tests
- **Tests depending on order**: global state pollution
- **Over-mocking**: mocking internal implementation details
- **Missing edge cases**: only testing the happy path
- **Slow tests**: running full integration tests too frequently
- **Ignoring coverage gaps**: not measuring test coverage
- **Poor test names**: generic names like `test_1()`; prefer `test_<behavior>_<condition>_<expected>`
- **No cleanup**: resources not released after tests