Universal anti-patterns and test standards. Activates when reviewing test quality, writing tests, or auditing code for common mistakes. Covers the 10 most damaging anti-patterns with BAD/GOOD examples, test validity criteria, and mocking guidelines.
From sentinel (npx claudepluginhub, digistrique-solutions/sentinel). This skill uses the workspace's default tool permissions.
Universal anti-patterns to avoid and test standards to follow. These patterns are language-agnostic in principle, with examples in Python and TypeScript for concreteness.
A test MUST satisfy all four criteria:
1. It exercises the real code under test, not a mock of it.
2. It asserts specific, meaningful outcomes -- not merely "is not None" or "is True".
3. It fails when the behavior it covers breaks.
4. It is deterministic -- the same result on every run.
If a test fails any of these criteria, it provides false confidence and should be rewritten.
BAD -- test mocks the function under test and asserts the mock was called:
async def test_classifier():
    classifier = MagicMock()
    classifier.classify.return_value = ClassificationResult(needs_planning=True)
    result = await classifier.classify("anything")
    assert result.needs_planning is True
This test passes even if the real classifier is deleted from the codebase.
GOOD -- test calls the real function with controlled inputs:
async def test_classify_multi_step_query_needs_planning():
    classifier = ComplexityClassifier()
    result = await classifier.classify("Audit my records and create a report")
    assert result.needs_planning is True
    assert result.estimated_steps >= 2
BAD -- passes for any non-None value:
result = await service.get_items(org_id)
assert result is not None
GOOD -- asserts specific, meaningful outcomes:
result = await service.get_items(org_id)
assert len(result) == 3
assert result[0].status == "ACTIVE"
assert result[0].org_id == org_id
BAD -- test reimplements the business logic:
def test_calculate_rate():
    clicks, impressions = 50, 1000
    expected = clicks / impressions * 100  # duplicated logic
    assert calculate_rate(clicks, impressions) == expected
GOOD -- test uses independently-derived expected values:
def test_calculate_rate():
    assert calculate_rate(50, 1000) == 5.0  # known correct answer
    assert calculate_rate(0, 1000) == 0.0
    assert calculate_rate(50, 0) == 0.0  # edge case: division by zero
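An implementation consistent with these tests would guard the zero-impressions case explicitly. A sketch (this `calculate_rate` is illustrative, not the project's actual code):

```python
def calculate_rate(clicks: int, impressions: int) -> float:
    """Click-through rate as a percentage; 0.0 when there are no impressions."""
    if impressions == 0:
        return 0.0  # handle the division-by-zero edge case the last test covers
    return clicks / impressions * 100
```

Because the guard lives in the function, the tests can assert the known answers directly instead of re-deriving them.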
BAD -- one test with valid inputs:
def test_create_item():
    result = create_item(name="Test", budget=100)
    assert result.id is not None
GOOD -- tests for valid, invalid, edge cases, and error conditions:
def test_create_item_valid():
    result = create_item(name="Test", budget=100)
    assert result.id is not None
    assert result.name == "Test"

def test_create_item_empty_name_raises():
    with pytest.raises(ValueError, match="name cannot be empty"):
        create_item(name="", budget=100)

def test_create_item_negative_budget_raises():
    with pytest.raises(ValueError, match="budget must be positive"):
        create_item(name="Test", budget=-1)

def test_create_item_zero_budget_raises():
    with pytest.raises(ValueError, match="budget must be positive"):
        create_item(name="Test", budget=0)
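A `create_item` that satisfies all four tests validates its inputs up front. A sketch, with the `Item` shape and error messages assumed from the tests above:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    budget: int
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

def create_item(name: str, budget: int) -> Item:
    # Reject invalid inputs before constructing anything
    if not name:
        raise ValueError("name cannot be empty")
    if budget <= 0:
        raise ValueError("budget must be positive")
    return Item(name=name, budget=budget)
```

Note that a single `budget <= 0` check makes both the negative and zero tests pass; the tests pin down the boundary so a later refactor cannot silently loosen it.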
BAD -- swallowing errors to make tests pass:
def test_flaky_api_call():
    try:
        result = api.fetch_data()
        assert result.status == "ok"
    except Exception:
        pass  # "handles" intermittent failures
GOOD -- tests must be deterministic. Mock the external dependency:
def test_api_call_success(mock_http):
    mock_http.get("/data").respond(200, json={"status": "ok"})
    result = api.fetch_data()
    assert result.status == "ok"

def test_api_call_failure(mock_http):
    mock_http.get("/data").respond(500)
    with pytest.raises(ApiError, match="failed to fetch"):
        api.fetch_data()
BAD -- wrapping in try/except and returning a default:
def get_metrics(item_id: str):
    try:
        return metrics_service.fetch(item_id)
    except Exception:
        return {}  # silently returns empty on any error
GOOD -- handle specific errors, let unexpected ones propagate:
def get_metrics(item_id: str):
    try:
        return metrics_service.fetch(item_id)
    except MetricsNotFoundError:
        logger.info("no_metrics_found", item_id=item_id)
        return {}
    except MetricsServiceError as e:
        logger.error("metrics_fetch_failed", item_id=item_id, error=str(e))
        raise
BAD -- adding escape hatches instead of fixing the real problem:
def process_item(item, skip_validation=False, force=False, ignore_limit=False):
    if not skip_validation:
        validate(item)
    ...
GOOD -- fix the validation or adjust the input:
def process_item(item: Item):
    validate(item)  # always validates
    ...
If validation fails for a legitimate case, fix the validation rules -- do not bypass them.
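For example, if the validator rejects a legitimate new input, extend the rule itself rather than threading a bypass flag through every caller. A sketch with hypothetical names:

```python
from dataclasses import dataclass

# Rule extended once, here, to admit the legitimate case --
# instead of adding skip_validation=True at call sites
ALLOWED_STATUSES = {"ACTIVE", "PAUSED", "ARCHIVED"}

@dataclass
class Item:
    status: str

def validate(item: Item) -> None:
    if item.status not in ALLOWED_STATUSES:
        raise ValueError(f"unknown status: {item.status}")
```

Every caller keeps validating unconditionally, and the change is visible in one place instead of scattered across flags.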
BAD -- copy-pasting a function and changing two lines:
def get_items_from_source_a(account_id): ... # 40 lines
def get_items_from_source_b(account_id): ... # 39 nearly identical lines
GOOD -- parameterize or use strategy pattern:
def get_items(account_id: str, source: DataSource) -> list[Item]:
    client = source.get_client()
    return client.list_items(account_id)
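The `DataSource` strategy can be as small as a protocol that each source implements, so the shared 40 lines live in one place. A self-contained sketch with stub clients standing in for the real per-source clients (all names illustrative):

```python
from typing import Protocol

class DataSource(Protocol):
    """Each source only supplies a client; shared fetch logic stays in get_items."""
    def get_client(self): ...

class _StubClient:
    # Stand-in for the real source-specific API client
    def __init__(self, source: str):
        self.source = source
    def list_items(self, account_id: str) -> list[str]:
        return [f"{self.source}:{account_id}"]

class SourceA:
    def get_client(self):
        return _StubClient("A")

class SourceB:
    def get_client(self):
        return _StubClient("B")

def get_items(account_id: str, source: DataSource) -> list[str]:
    # The 40 formerly-duplicated lines would live here, written once
    return source.get_client().list_items(account_id)
```

Adding a third source now means one small class, not a third near-identical copy of the fetch logic.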
BAD -- casting to Any to silence the type checker:
result: Any = service.get_data() # avoids dealing with the actual type
GOOD -- understand and fix the type mismatch:
result: ItemMetrics = service.get_data()
# If the type does not exist yet, create it
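Creating the missing type is usually only a few lines. A sketch of what an `ItemMetrics` definition could look like (the fields are assumptions for illustration, not the service's actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ItemMetrics:
    clicks: int
    impressions: int

    @property
    def ctr(self) -> float:
        # Derived metric with the zero-impressions edge case handled
        return self.clicks / self.impressions * 100 if self.impressions else 0.0
```

With a real type, the checker catches misspelled fields and wrong units at the call site instead of letting `Any` hide them.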
BAD -- special-casing specific IDs:
if org_id == "org_3AXJh6Gv0tgaIJmHDfindZekbTV":
    return special_handling()
GOOD -- use configuration, feature flags, or proper conditional logic:
if org.has_capability(Capability.ADVANCED_REPORTING):
    return advanced_reporting(org)
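`has_capability` can be driven by per-org configuration instead of hardcoded IDs. One possible sketch (the enum members and `Org` shape are assumptions):

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Capability(Enum):
    ADVANCED_REPORTING = auto()
    BULK_EXPORT = auto()

@dataclass
class Org:
    org_id: str
    # Populated from configuration or a feature-flag service, not from code
    capabilities: set[Capability] = field(default_factory=set)

    def has_capability(self, cap: Capability) -> bool:
        return cap in self.capabilities
```

Granting the behavior to another org becomes a data change, with no redeploy and no magic ID buried in business logic.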
Mock external dependencies at the system boundary (third-party APIs, databases, clocks, message queues) -- never the function under test:
# WRONG -- mocking the function you are testing
@patch("src.services.item_service.ItemService.get_items")
async def test_get_items(mock_get):
    mock_get.return_value = [...]
    result = await ItemService().get_items("org_123")
    # This tests nothing
# RIGHT -- mocking the external dependency the function calls
@patch("src.services.item_service.ExternalApiClient")
async def test_get_items(mock_client):
    mock_client.return_value.list_items.return_value = [item_fixture()]
    service = ItemService(client=mock_client.return_value)
    result = await service.get_items("org_123")
    assert len(result) == 1
    assert result[0].name == "Test Item"
Test names should describe behavior, not implementation:
# BAD -- describes implementation
def test_get_items_calls_api()
def test_process_returns_dict()
# GOOD -- describes behavior
def test_get_items_returns_active_items_for_org()
def test_process_rejects_expired_item_with_error()
def test_classify_marks_multi_step_query_as_planning_needed()
If you feel the urge to use any anti-pattern, STOP and ask yourself:
1. Am I doing this because it is correct, or because it is easier?
2. Would this hide a real problem rather than fix it?
If the answer to #1 is "easier" or #2 is "yes", discuss with the user before proceeding.