Python testing patterns and best practices using pytest, fixtures, mocking, parametrization, async testing, and property-based testing. Use when writing unit tests, integration tests, or async tests, creating test suites, or implementing test-driven development (TDD) in Python projects.
```
/plugin marketplace add NickCrew/claude-ctx-plugin
/plugin install nickcrew-claude-ctx@NickCrew/claude-ctx-plugin
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Comprehensive guide to implementing robust testing strategies in Python using pytest, fixtures, mocking, parametrization, and property-based testing.
- Test discovery: files matching `test_*.py` or `*_test.py`, functions starting with `test_`
- Fixtures: reusable test resources with setup and teardown; scopes are `function` (default), `class`, `module`, and `session`; place shared fixtures in `conftest.py` for project-wide availability
- Assertions: use plain `assert` statements, and `pytest.raises()` for exceptions
- Organization: separate `unit/`, `integration/`, and `e2e/` directories
Load detailed references for specific topics:
| Task | Reference File |
|---|---|
| Pytest basics, test structure, AAA pattern | skills/python-testing-patterns/references/pytest-fundamentals.md |
| Fixtures, scopes, setup/teardown, conftest.py | skills/python-testing-patterns/references/fixtures.md |
| Parametrization, multiple test cases | skills/python-testing-patterns/references/parametrized-tests.md |
| Mocking, patching, unittest.mock, pytest-mock | skills/python-testing-patterns/references/mocking.md |
| Async tests, pytest-asyncio, event loops | skills/python-testing-patterns/references/async-testing.md |
| Property-based testing, Hypothesis, strategies | skills/python-testing-patterns/references/property-based-testing.md |
| Monkeypatch, environment variables, attributes | skills/python-testing-patterns/references/monkeypatch.md |
| Test structure, markers, conftest.py patterns | skills/python-testing-patterns/references/test-organization.md |
| Coverage measurement, reports, thresholds | skills/python-testing-patterns/references/coverage.md |
| Database, API, Redis, message queue testing | skills/python-testing-patterns/references/integration-testing.md |
| Best practices, test quality, fixture design | skills/python-testing-patterns/references/best-practices.md |
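Property-based testing appears in the references above but has no quick example in this file. The core idea, sketched here with only the standard library's `random` module (`encode`/`decode` are hypothetical functions under test), is to assert a property over many generated inputs rather than a few hand-picked cases:

```python
import random

# Hypothetical codec under test: a reversible string transform.
def encode(s: str) -> str:
    return s[::-1]

def decode(s: str) -> str:
    return s[::-1]

def check_roundtrip(trials: int = 200, seed: int = 0) -> None:
    """Property: decode(encode(x)) == x for randomly generated strings x."""
    rng = random.Random(seed)
    for _ in range(trials):
        s = "".join(chr(rng.randint(32, 126)) for _ in range(rng.randint(0, 30)))
        assert decode(encode(s)) == s, f"round-trip failed for {s!r}"

check_roundtrip()
```

Hypothesis replaces the hand-rolled generator with `@given(st.text())` and automatically shrinks failing inputs to a minimal counterexample; see the property-based-testing reference for details.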
```python
# test_example.py
import pytest

def test_something():
    """Descriptive test name."""
    # Arrange
    expected = 5
    # Act
    result = 2 + 3
    # Assert
    assert result == expected
```
Run tests:

```shell
pytest                  # Run all tests
pytest -v               # Verbose output
pytest tests/unit/      # Specific directory
pytest -k "test_user"   # Match pattern
pytest -m unit          # Run marked tests
```
```python
import pytest

@pytest.fixture
def sample_data():
    """Provide test data."""
    data = {"key": "value"}
    yield data
    # Cleanup if needed

def test_with_fixture(sample_data):
    assert sample_data["key"] == "value"
```
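Fixture scopes control how often setup runs. A `session`-scoped fixture in `conftest.py` is created once for the whole run and shared by every test that requests it. A minimal sketch, with a dict standing in for a hypothetical expensive resource such as a database engine:

```python
import pytest

def make_config() -> dict:
    # Hypothetical stand-in for expensive setup (e.g. creating a DB engine)
    return {"db_url": "sqlite:///:memory:"}

@pytest.fixture(scope="session")
def shared_config():
    config = make_config()   # runs once per test session, not once per test
    yield config
    config.clear()           # teardown runs once, after the last test
```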
```python
import pytest

@pytest.mark.parametrize("value,expected", [
    (2, 4),
    (3, 9),
    (4, 16),
])
def test_square(value, expected):
    assert value ** 2 == expected
```
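Parametrize decorators can also be stacked, generating the cross product of all cases. A minimal sketch (the specific bases and exponents are illustrative):

```python
import pytest

# Stacked decorators: 3 bases x 2 exponents = 6 generated test cases.
@pytest.mark.parametrize("base", [2, 3, 10])
@pytest.mark.parametrize("exponent", [0, 1])
def test_power_identity(base, exponent):
    if exponent == 0:
        assert base ** exponent == 1
    else:
        assert base ** exponent == base
```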
```python
from unittest.mock import patch

# my_function (a placeholder) lives in "module" and calls external_api_call
@patch("module.external_api_call")
def test_with_mock(mock_api):
    mock_api.return_value = {"status": "ok"}
    result = my_function()
    assert result["status"] == "ok"
    mock_api.assert_called_once()
```
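Beyond `return_value`, `side_effect` can raise exceptions or yield successive values across calls, which is useful for testing retry behavior. A self-contained sketch using only the standard library (`fetch_with_retry` is a hypothetical function under test):

```python
from unittest.mock import MagicMock

def fetch_with_retry(client, attempts: int = 3):
    """Hypothetical function under test: retries on timeout."""
    for _ in range(attempts):
        try:
            return client.get("/health")
        except TimeoutError:
            continue
    raise RuntimeError("all attempts failed")

api = MagicMock()
# First call raises; second call returns a value. Exception instances
# in a side_effect list are raised rather than returned.
api.get.side_effect = [TimeoutError("slow"), {"status": "ok"}]

result = fetch_with_retry(api)
assert result == {"status": "ok"}
assert api.get.call_count == 2
```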
Coverage:

```shell
pytest --cov=src --cov-report=term-missing
pytest --cov=src --cov-report=html
pytest --cov=src --cov-fail-under=80
```
pytest.ini:

```ini
[pytest]
testpaths = tests
python_files = test_*.py
addopts = -v --strict-markers --cov=src
markers =
    unit: Unit tests
    integration: Integration tests
    slow: Slow tests
```
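The same configuration can live in `pyproject.toml` under `[tool.pytest.ini_options]`, which avoids a separate config file:

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = "test_*.py"
addopts = "-v --strict-markers --cov=src"
markers = [
    "unit: Unit tests",
    "integration: Integration tests",
    "slow: Slow tests",
]
```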
Exception testing:

```python
with pytest.raises(ValueError, match="error message"):
    function_that_raises()
```
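`pytest.raises` also exposes the captured exception via `as exc_info`, so assertions can inspect the message or attributes. A self-contained sketch (`withdraw` is a hypothetical function under test; `match` is a regex searched against the exception string):

```python
import pytest

def withdraw(balance: int, amount: int) -> int:
    """Hypothetical function under test."""
    if amount > balance:
        raise ValueError(f"insufficient funds: have {balance}, need {amount}")
    return balance - amount

with pytest.raises(ValueError, match="insufficient funds") as exc_info:
    withdraw(balance=10, amount=50)

# The captured exception is available after the block exits.
assert "need 50" in str(exc_info.value)
```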
Async testing:

```python
@pytest.mark.asyncio  # requires the pytest-asyncio plugin
async def test_async_function():
    result = await async_operation()
    assert result is not None
```
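Without the pytest-asyncio plugin, a coroutine can still be exercised directly with the standard library's `asyncio.run`, though the plugin's marker is more convenient for whole suites. A sketch (`async_operation` here is a hypothetical stand-in for real async work):

```python
import asyncio

async def async_operation() -> dict:
    # Hypothetical stand-in for real async work
    await asyncio.sleep(0)
    return {"status": "done"}

def test_async_function_without_plugin():
    # asyncio.run drives the coroutine to completion on a fresh event loop
    result = asyncio.run(async_operation())
    assert result == {"status": "done"}

test_async_function_without_plugin()
```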
Temporary files:

```python
def test_file_operation(tmp_path):
    test_file = tmp_path / "test.txt"
    test_file.write_text("content")
    assert test_file.read_text() == "content"
```
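The monkeypatch fixture (covered in the references above) has no quick example here. A minimal sketch for overriding an environment variable; `get_api_url` is a hypothetical function that reads configuration from the environment:

```python
import os
import pytest

def get_api_url() -> str:
    # Hypothetical function under test: reads config from the environment
    return os.environ.get("API_URL", "https://api.example.com")

def test_api_url_can_be_overridden(monkeypatch):
    monkeypatch.setenv("API_URL", "http://localhost:8000")
    assert get_api_url() == "http://localhost:8000"
    # The original value is restored automatically after the test
```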
Markers for test selection:
@pytest.mark.slow
@pytest.mark.integration
def test_database_operation():
pass
- Not using fixtures: repeating setup code across tests
- Tests depending on order: global state pollution
- Over-mocking: mocking internal implementation details
- Missing edge cases: only testing the happy path
- Slow tests: running full integration tests frequently
- Ignoring coverage gaps: not measuring test coverage
- Poor test names: generic names like `test_1()`; prefer `test_<behavior>_<condition>_<expected>`
- No cleanup: resources not released
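The last two pitfalls can be addressed together: a fixture's code after `yield` always runs, even when the test fails, and a descriptive name documents the expected behavior. A sketch with a hypothetical `register` helper:

```python
import pytest

def register(registry: dict, key: str, value) -> None:
    """Hypothetical helper under test: rejects duplicate keys."""
    if key in registry:
        raise KeyError(f"duplicate key: {key}")
    registry[key] = value

@pytest.fixture
def registry():
    data = {}
    yield data
    data.clear()  # cleanup runs even if the test fails

# Name follows test_<behavior>_<condition>_<expected>
def test_register_duplicate_key_raises_key_error(registry):
    register(registry, "a", 1)
    with pytest.raises(KeyError):
        register(registry, "a", 2)
```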