Enforces pytest patterns including async testing, AsyncMock, fixtures, parametrization, and FastAPI TestClient for test/ directory only (NOT src/). Use when writing tests, generating test files, or implementing test helpers/mocks under test/.
Install:

```
npx claudepluginhub andercore-labs/claudes-kitchen --plugin python-services
```

This skill uses the workspace's default tool permissions.
**SCOPE:** Test code under `test/` only. Production code has separate standards.
Async unit test:

```python
import pytest
from unittest.mock import AsyncMock


class TestUserService:
    @pytest.fixture
    def mock_repository(self) -> AsyncMock:
        mock = AsyncMock(spec=UserRepositoryProtocol)
        mock.save.return_value = User(id="1", email="john@example.com")
        return mock

    @pytest.fixture
    def service(self, mock_repository: AsyncMock) -> UserService:
        return UserService(repository=mock_repository)

    @pytest.mark.asyncio
    async def test_create_success(self, service: UserService):
        """should return user with id when valid dto is provided"""
        # Given
        dto = CreateUserDTO(name="John", email="john@example.com")
        # When
        user = await service.create(dto)
        # Then
        assert user.id == "1"
        assert user.email == "john@example.com"
```
FastAPI integration test:

```python
import pytest
from httpx import AsyncClient


@pytest.mark.asyncio
async def test_create_user_endpoint(client: AsyncClient):
    """should return 201 with user data when valid payload is posted"""
    # Given
    payload = {"name": "John", "email": "john@example.com"}
    # When
    response = await client.post("/users", json=payload)
    # Then
    assert response.status_code == 201
    assert response.json()["email"] == "john@example.com"
```
Covers: unit tests, integration tests, fixtures, mocking, assertions.
| Pattern | Status |
|---|---|
| test_module_name.py (mirrors src structure) | ✓ Correct |
| tests/ folder per module | ✓ Correct |
| test_*.py files | ✓ Correct |
| conftest.py for shared fixtures | ✓ Correct |
| Test class per production class | ✓ Correct |
| test_function_name pattern | ✓ Correct |
| Unorganized test folder | ✗ Violation |
| tests mixed with src/ | ✗ Violation |
| Single huge test file | ✗ Violation |
| No conftest.py for shared setup | ✗ Poor practice |
| One test class per file | ✗ Violation |
Correct structure:

```
test/
├── conftest.py
├── unit/
│   ├── domain/
│   │   └── test_user_entity.py
│   ├── service/
│   │   └── test_user_service.py
│   └── inbound/
│       └── test_user_controller.py
├── integration/
│   ├── conftest.py
│   └── test_user_workflow.py
└── e2e/
    └── test_api.py
```
| Pattern | Status |
|---|---|
| pytest.fixture with scope | ✓ Correct |
| Fixture factory pattern | ✓ Correct |
| Shared fixtures in conftest.py | ✓ Correct |
| Setup/teardown with autouse | ✓ Correct |
| Inline test setup | ✗ Hard to reuse |
| Global variables | ✗ Mutation |
| Class setup/teardown | ✗ Use fixtures |
Correct fixture patterns:

```python
# conftest.py
import pytest
from unittest.mock import Mock


@pytest.fixture
def user_dto() -> CreateUserDTO:
    return CreateUserDTO(name="John", email="john@example.com")


@pytest.fixture
def mock_config() -> Mock:
    mock = Mock(spec=AppConfig)
    mock.timeout = 5000
    return mock


@pytest.fixture
def user_service(mock_config: Mock) -> UserService:
    return UserService(config=mock_config)


# In a test file:
def test_create_user(user_service: UserService, user_dto: CreateUserDTO):
    result = user_service.create(user_dto)
    assert result.is_success()
```
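Fixture scope (noted in the table below) controls how often setup runs. A minimal sketch — the resource names here are hypothetical illustrations:

```python
import pytest


# session → created once per test run; module → once per file;
# function → rebuilt per test (the default).
@pytest.fixture(scope="session")
def db_url() -> str:
    # Hypothetical expensive setup, shared across the whole run.
    return "postgresql://localhost:5432/test"


@pytest.fixture(scope="function")
def fresh_payload() -> dict:
    # Rebuilt for every test so mutations never leak between tests.
    return {"name": "John", "email": "john@example.com"}
```

Prefer the narrowest scope that works; widen to `module` or `session` only when setup is genuinely expensive and read-only.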
| Pattern | Status |
|---|---|
| AsyncMock for async functions | ✓ Correct |
| Mock with spec= | ✓ Type safe |
| pytest-mock `mocker` fixture | ✓ Preferred over unittest.mock.patch |
| Dependency injection for mocks | ✓ Correct |
| `mocker.patch("module.Class")` | ✓ Auto-restored after test |
| Mock return_value | ✓ For sync |
| Mock side_effect | ✓ For exceptions |
| Real implementations in unit tests | ✗ Violation |
| Hardcoded test doubles | ✗ Use fixtures |
| Mock in production code | ✗ CRITICAL |
| No isolation from I/O | ✗ Violation |
```python
from pytest_mock import MockerFixture


# pytest-mock via the mocker fixture (preferred; patches auto-restore)
def test_sends_email(mocker: MockerFixture):
    mock_send = mocker.patch("app.notifications.send_email")
    service.notify(user_id="1")
    mock_send.assert_called_once_with(email="user@example.com")
```
AsyncMock for async services:

```python
import pytest
from unittest.mock import AsyncMock


class TestUserService:
    @pytest.fixture
    def mock_repository(self) -> AsyncMock:
        mock = AsyncMock(spec=UserRepositoryProtocol)
        mock.save.return_value = User(id="1", email="test@example.com")
        mock.exists.return_value = False
        return mock

    @pytest.mark.asyncio
    async def test_create_delegates_to_repository(
        self, mock_repository: AsyncMock
    ):
        service = UserService(repository=mock_repository)
        user = await service.create(dto)
        assert user.id == "1"
        mock_repository.save.assert_awaited_once()
```
**Mock exceptions:**

```python
@pytest.fixture
def mock_repository_with_error() -> AsyncMock:
    mock = AsyncMock(spec=UserRepositoryProtocol)
    mock.save.side_effect = DatabaseError("Connection failed")
    return mock


@pytest.mark.asyncio
async def test_create_handles_db_error(service: UserService):
    with pytest.raises(DatabaseError):
        await service.create(dto)
```
| Pattern | Status |
|---|---|
| assert with expression | ✓ Correct |
| assert with message | ✓ Correct |
| pytest.raises(Exception) | ✓ Correct |
| Checking exact values | ✓ Correct |
| Mock call verification | ✓ Correct |
| unittest.assertEqual | ✗ Use assert |
| Multiple assertions per test | ✗ One concept per test |
| No assertions | ✗ CRITICAL |
Async assertion patterns:

```python
@pytest.mark.asyncio
async def test_create_success():
    # Single concept: successful creation
    user = await service.create(valid_dto)
    assert user.id is not None, "Should create with ID"
    assert user.email == valid_dto.email


@pytest.mark.asyncio
async def test_create_validates_email():
    with pytest.raises(ValidationError) as exc_info:
        await service.create(invalid_email_dto)
    assert "email" in str(exc_info.value)


@pytest.mark.asyncio
async def test_duplicate_raises_error():
    await service.create(dto)
    with pytest.raises(DuplicateError):
        await service.create(dto)
```
**Mock verification:**

```python
@pytest.mark.asyncio
async def test_calls_repository():
    await service.create(dto)
    mock_repository.save.assert_awaited_once()
    mock_repository.exists.assert_awaited_once_with(dto.email)
```
Data-driven testing:

```python
@pytest.mark.parametrize(
    ("id_type", "url_path", "id_value"),
    [
        ("label_id", "agent/config", "label_123"),
        ("version_id", "agents/configurations", "v1"),
    ],
)
@pytest.mark.asyncio
async def test_fetch_with_different_ids(
    id_type: str, url_path: str, id_value: str
):
    result = await fetcher.fetch(id_type=id_type, id_value=id_value)
    assert result is not None


@pytest.mark.parametrize("email,should_pass", [
    ("test@example.com", True),
    ("invalid", False),
    ("", False),
])
@pytest.mark.asyncio
async def test_email_validation(email: str, should_pass: bool):
    if should_pass:
        user = await service.create(CreateUserDTO(email=email))
        assert user.email == email
    else:
        with pytest.raises(ValidationError):
            await service.create(CreateUserDTO(email=email))
```
Fixture factories:

```python
@pytest.fixture
def user_factory():
    def _create_user(**overrides) -> User:
        defaults = {
            "id": "test-id",
            "email": "test@example.com",
            "name": "Test User",
        }
        return User(**{**defaults, **overrides})
    return _create_user


@pytest.mark.asyncio
async def test_with_custom_email(user_factory):
    user = user_factory(email="custom@example.com")
    assert user.email == "custom@example.com"
```
| Element | Convention |
|---|---|
| Function name | test_<function>_<scenario>_<expected> |
| Docstring | "should <behaviour> when <condition>" (Andercore standard) |
| GWT body | # Given / # When / # Then comments in every test |
```python
@pytest.mark.asyncio
async def test_create_user_with_duplicate_email_raises_error():
    """should raise DuplicateError when email already exists"""
    # Given
    existing = UserMother.with_email("dupe@example.com")
    await repository.save(existing)
    # When / Then
    with pytest.raises(DuplicateError):
        await service.create(CreateUserDTO(email="dupe@example.com"))
```
| Forbidden | Use instead |
|---|---|
| `test1`, `test_a`, `test_case` | Descriptive `test_<fn>_<scenario>_<expected>` |
| Missing docstring | "should X when Y" always present |
| Missing GWT comments | # Given / # When / # Then always present |
| Never Allowed | Use Instead |
|---|---|
| Actual database calls in unit tests | Mock repository |
| Real API calls | Mock client |
| Actual file I/O | Mock file system |
| Mocked AWS SDK clients (Kinesis/SQS/S3) | LocalStack via Testcontainers |
| Test data in source | Factories/fixtures |
| Mocks in src/ code | Only in test/ |
| Global test state | Fixtures |
| setUp/tearDown methods | pytest fixtures |
| Skipped tests without reason | Use pytest.mark.skip("reason") |
| sleep() for timing | Mock time or use fixtures |
| print() for debugging | Use pytest -s |
| Random test data | Builders with defaults |
| Tests that depend on order | Each test independent |
| Environment-specific tests | Mock environment |
| Hardcoded paths | Use fixtures |
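The `sleep()` row above in practice — a hedged sketch built around a hypothetical `poll_until_ready` helper, showing how patching `time.sleep` keeps the test instant:

```python
import time
from unittest.mock import patch


def poll_until_ready(check, retries=3):
    # Hypothetical helper: retry a check with a delay between attempts.
    for _ in range(retries):
        if check():
            return True
        time.sleep(1)
    return False


def test_poll_retries_without_real_sleep():
    """should return False when check never succeeds"""
    # Given
    calls = []

    def never_ready():
        calls.append(1)
        return False

    # When: patch sleep so no real waiting happens
    with patch("time.sleep") as mock_sleep:
        result = poll_until_ready(never_ready)

    # Then
    assert result is False
    assert len(calls) == 3
    assert mock_sleep.call_count == 3
```

The same approach covers any delay or retry logic: the timing behavior is asserted through the mock's call count rather than real elapsed time.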
All async with pytest-asyncio:

```python
import pytest
import pytest_asyncio
from httpx import ASGITransport, AsyncClient
from unittest.mock import AsyncMock


# Service test with AsyncMock
@pytest.mark.asyncio
async def test_create_user_service():
    mock_repo = AsyncMock(spec=UserRepositoryProtocol)
    mock_repo.save.return_value = User(id="1", email="test@example.com")
    service = UserService(repository=mock_repo)
    user = await service.create(dto)
    assert user.id == "1"
    mock_repo.save.assert_awaited_once()


# FastAPI endpoint test (httpx >= 0.27 requires ASGITransport; the app= shortcut was removed)
@pytest_asyncio.fixture
async def async_client():
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as client:
        yield client


@pytest.mark.asyncio
async def test_create_user_endpoint(async_client: AsyncClient):
    response = await async_client.post(
        "/users",
        json={"name": "John", "email": "john@example.com"},
    )
    assert response.status_code == 201
    data = response.json()
    assert data["email"] == "john@example.com"
```
AsyncMock verification:

```python
mock_service.create.assert_awaited_once()
mock_service.create.assert_awaited_once_with(dto)
assert mock_service.create.await_count == 1
```
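The same verification calls in a self-contained form (a sketch using a bare `AsyncMock`; `spec=` is omitted here only because no protocol class exists in the snippet):

```python
import asyncio
from unittest.mock import AsyncMock

# Stand-in for an injected async service.
mock_service = AsyncMock()
mock_service.create.return_value = {"id": "1"}


async def run():
    return await mock_service.create({"email": "test@example.com"})


result = asyncio.run(run())
assert result == {"id": "1"}
mock_service.create.assert_awaited_once_with({"email": "test@example.com"})
assert mock_service.create.await_count == 1
```

Note that `assert_called_once()` would also pass here; `assert_awaited_once()` is stricter because it confirms the coroutine was actually awaited, not merely created.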
| Pattern | Status |
|---|---|
| 80%+ coverage target | ✓ Aim for |
| Critical paths 100% | ✓ Aim for |
| Edge cases left uncovered | ✗ Add tests |
| Coverage for coverage | ✗ Test valuable behavior |
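A typical pytest-cov invocation matching the targets above (a sketch; the `src` and `test/` paths are assumptions — adjust to your layout):

```shell
# --cov-fail-under enforces the 80% floor; term-missing lists uncovered lines.
pytest test/ --cov=src --cov-report=term-missing --cov-fail-under=80
```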
CHECK:
1. File organization → test/ mirrors src/ structure
2. Fixture patterns → Shared in conftest.py, DI via fixtures
3. AsyncMock strategy → All async I/O uses AsyncMock with spec=
4. Isolation → Each test independent, no shared state
5. Assertions → At least one per test, clear messages
6. Naming → `test_<fn>_<scenario>_<expected>` + docstring `"should X when Y"`
6a. GWT → `# Given` / `# When` / `# Then` present in every test body
6b. pytest-mock → `mocker` fixture preferred over `unittest.mock.patch` directly
6c. AWS → LocalStack (not mocked SDK) for Kinesis/SQS/S3 integration tests
7. Setup/teardown → Use @pytest.fixture only
8. Test data → Factory pattern with **overrides
9. No real I/O → Database, API, file system all mocked
10. Parametrization → @pytest.mark.parametrize for data-driven tests
11. Async tests → @pytest.mark.asyncio on ALL async tests
12. Exception testing → pytest.raises() for error cases
13. Arrange/Act/Assert → Clear three-part structure
14. One concept → One scenario per test
15. Error cases → Test all exception paths
16. Edge cases → Test boundaries
17. No skipped tests → All tests runnable
18. AsyncMock verification → .assert_awaited_once() for async
19. Fixture scope → Session/module/function as appropriate
20. FastAPI testing → AsyncClient for endpoint tests
21. All pass → Run tests
ANY fail → REJECT with violation
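A minimal pytest configuration consistent with checks 11 and 17 (a sketch assuming pytest-asyncio is installed; adjust paths to your repo):

```ini
[pytest]
testpaths = test
# strict mode: every async test must carry an explicit @pytest.mark.asyncio (check 11)
asyncio_mode = strict
```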
| Phase | Action |
|---|---|
| 1. SCOPE | Extract from context: files, mode (informative/executive), sessionId |
| 2. VERIFY | Run ALL 21 checks on test code |
| 3. VIOLATIONS | Collect violations with file:line, check name, severity |
| 4. REPORT | ✓ Pass → Proceed; ✗ Fail → Return violations |
| 5. METRICS | Call mcp__agent-orchestrator__store-skill-metrics |
| 6. OUTPUT | Return JSON: violations[], fixRate, finalViolations |
Metrics Structure:

```
{
  "sessionId": str,
  "skill": "python-services:test-code-recipe",
  "initialViolations": int,
  "iterations": int,
  "fixesApplied": int,
  "finalViolations": int,
  "mode": "informative" | "executive",
  "duration": float
}
```
Output Format:

```
{
  "status": "success" | "failed",
  "violations": [
    {
      "file": "test/unit/test_user_service.py",
      "line": 15,
      "check": "Fixture patterns",
      "violation": "Setup code in test function instead of fixture",
      "severity": "warning"
    }
  ],
  "metrics": {
    "initialViolations": 3,
    "finalViolations": 0,
    "fixRate": 1.0
  }
}
```