From claude-commands
Generates systematic test suites for any system component via a structured methodology covering state lifecycles, cross-component interactions, and logic consistency to catch bugs.
```shell
npx claudepluginhub jleechanorg/claude-commands
```

This skill uses the workspace's default tool permissions.
Generate comprehensive test suites for any system component by systematically analyzing failure modes, state transitions, and integration points. This methodology is derived from the successful modal state management test suite that caught routing consistency bugs.
First, analyze the system you're testing. Then ask these clarifying questions, one at a time:
- Test Organization: Single file, modular by concern, or hybrid with a base class?
- Test Data Strategy: Factory functions, builder pattern, or fixture dataclasses?
- Component Coverage: Which components/subsystems should be prioritized?
- Assertion Strategy: Direct assertions, snapshot-based, or custom semantic helpers?
- Test Execution Level: Pure unit tests, integration via public APIs, or hybrid?
- Advanced Testing: Include property-based tests now or later?
- State Lifecycle Tests
- Cross-Component Interaction Tests
- Logic Consistency Tests
- Property-Based Invariants (document for later)
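The last category can be documented now and implemented later. As a minimal hand-rolled sketch of what a property-based invariant check looks like (libraries such as Hypothesis generate the random cases for you; every name and the dict-of-flags state model here are hypothetical):

```python
import random

# Hypothetical invariant: at most one component is active at a time.
def at_most_one_active(state: dict) -> bool:
    return sum(1 for v in state.values() if v) <= 1

def transition(state: dict, component: str) -> dict:
    # Activating one component deactivates all others.
    return {k: (k == component) for k in state}

# Hand-rolled property check: the invariant must hold after ANY random
# sequence of transitions, not just the sequences we thought of.
def check_invariant(trials: int = 200, seed: int = 0) -> bool:
    rng = random.Random(seed)
    components = ["character_creation", "level_up", "campaign_upgrade"]
    for _ in range(trials):
        state = {c: False for c in components}
        for _ in range(rng.randint(1, 10)):
            state = transition(state, rng.choice(components))
            if not at_most_one_active(state):
                return False
    return True
```

The value of the property style is that it explores state sequences no example-based test enumerates; documenting the invariants now makes the later upgrade mechanical.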
File Structure Pattern:

```
tests/
├── test_<component>_base.py         # Base class + fixtures + assertions
├── test_<component>_lifecycle.py    # State transition tests
└── test_<component>_integration.py  # Cross-component + consistency tests
```
Base Class Components:
@dataclass
class TestScenario:
name: str
initial_state: dict
action: Callable | str
expected_state: dict
description: str = ""
class TestBase(unittest.TestCase):
# Custom assertion helpers (semantic)
def assert_no_component_active(self, state): ...
def assert_only_component_active(self, state, name): ...
def assert_stale_flags_cleared(self, state, component): ...
def assert_logic_consistency(self, state, action): ...
# Scenario execution
def run_scenario(self, scenario: TestScenario): ...
# Test fixtures
def create_base_state(self, **kwargs): ...
Test Pattern:

```python
def test_<bug_or_behavior_description>(self):
    """
    BUG FIX TEST or BEHAVIOR TEST:
    Clear description of what this verifies and why it matters.
    """
    scenario = TestScenario(
        name="descriptive_name",
        initial_state=self.create_base_state(...),
        action="user_action",  # or a callable
        expected_state={...},
        description="Why this test matters",
    )
    self.run_scenario(scenario)
```
Run the tests and verify they pass:

```shell
pytest tests/test_<component>_*.py -v
```
1. Test Real Failure Modes
2. Use Semantic Assertions: prefer `assert_no_modal_active()` over `assert state["flag"] is None`
3. Declarative Test Scenarios
4. Integration Over Unit
5. Design for Extensibility
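Principle 2 in practice, as a short sketch (the flag-naming convention here is a hypothetical example, not a prescribed API):

```python
# A semantic helper names the business rule it enforces and survives
# representation changes; a raw flag assertion does neither.
def assert_no_modal_active(state: dict) -> None:
    active = [k for k, v in state.items()
              if k.endswith("_in_progress") and v]
    assert not active, f"expected no modal active, found: {active}"

# Reads as intent, not implementation detail:
assert_no_modal_active({"screen": "game", "level_up_in_progress": False})
```

When the state representation changes, only the helper changes; every test that calls it keeps expressing the same rule.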
Context: Modal routing system with character creation, level-up, and campaign upgrade modals.
Bugs Found:
- `level_up_in_progress=False` blocking future level-ups
- `character_creation_in_progress` not cleared on level-up exit

Test Suite Created:

Files:
- test_modal_base.py: base class, fixtures, and assertions
- test_modal_state_lifecycle.py: 11 state transition tests
- test_modal_integration.py: 11 cross-modal and consistency tests

Documentation:
Maintenance:
State Transition Pattern:

```python
def test_state_clears_stale_flags():
    """When condition X triggers, stale flags Y should be removed."""
    # initial_state with stale flags
    # action that triggers the new condition
    # verify flags removed (not just set to False)
```
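The removed-versus-set-to-False distinction made concrete, with hypothetical flag names and transition functions (a sketch, not the actual system's API):

```python
# Setting a flag to False leaves a stale key behind; a correct clear
# removes the key, which is exactly what the pattern above asserts.
def buggy_clear(state: dict) -> dict:
    return {**state, "level_up_in_progress": False}  # stale key survives

def correct_clear(state: dict) -> dict:
    return {k: v for k, v in state.items() if k != "level_up_in_progress"}

initial = {"screen": "game", "level_up_in_progress": True}
assert "level_up_in_progress" in buggy_clear(initial)   # the bug the test catches
assert correct_clear(initial) == {"screen": "game"}     # flag removed, not False
```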
Cross-Component Pattern:

```python
def test_component_exit_cleans_all_flags():
    """Exiting component A clears flags from ALL components."""
    # initial_state with mixed flags
    # exit action
    # verify ALL component flags cleared
```
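A sketch of the exit-cleans-everything rule this pattern verifies, using hypothetical flag names:

```python
# Exiting ANY component strips every component's in-progress flag,
# not only the flag of the component being exited.
COMPONENT_FLAGS = frozenset({
    "character_creation_in_progress",
    "level_up_in_progress",
    "campaign_upgrade_in_progress",
})

def exit_component(state: dict) -> dict:
    return {k: v for k, v in state.items() if k not in COMPONENT_FLAGS}

mixed = {"screen": "game",
         "level_up_in_progress": True,             # the one being exited
         "character_creation_in_progress": True}   # stale cross-component flag
assert exit_component(mixed) == {"screen": "game"}
```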
Consistency Pattern:

```python
def test_decision_points_agree():
    """Decision point A and B must use identical logic."""
    # state that could be ambiguous
    # check both decision points
    # verify they agree on outcome
```
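The simplest way to make such a test pass forever is a single source of truth: both decision points call the same predicate, so they cannot disagree. A sketch with hypothetical names:

```python
# One shared predicate; both decision points delegate to it.
def needs_level_up(state: dict) -> bool:
    return state.get("xp", 0) >= state.get("next_level_xp", 100)

def router_decision(state: dict) -> str:
    return "level_up_modal" if needs_level_up(state) else "game"

def banner_decision(state: dict) -> bool:
    return needs_level_up(state)

# Boundary case that could be ambiguous if the logic were duplicated:
ambiguous = {"xp": 100, "next_level_xp": 100}
assert (router_decision(ambiguous) == "level_up_modal") == banner_decision(ambiguous)
```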
When to use this skill:
When NOT to use:
Success metrics:
Related commands:
- /tdd for test-first development
- /review-enhanced to validate test coverage
- /fake3 to detect test quality issues
- /solid to ensure testable code design