Generate comprehensive, systematic test suites for any system component using a structured test design methodology. Creates tests that catch real bugs through state lifecycle, integration, and consistency testing.
Install via the plugin marketplace:

```
/plugin marketplace add jleechanorg/claude-commands
/plugin install claude-commands@claude-commands-marketplace
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Generate comprehensive test suites for any system component by systematically analyzing failure modes, state transitions, and integration points. This methodology is derived from the successful modal state management test suite that caught routing consistency bugs.
Analyze the system you're testing:
Ask these clarifying questions, one at a time:
Test Organization: Single file, modular by concern, or hybrid with base class?
Test Data Strategy: Factory functions, builder pattern, or fixture dataclasses?
Component Coverage: Which components/subsystems to prioritize?
Assertion Strategy: Direct assertions, snapshot-based, or custom semantic helpers?
Test Execution Level: Pure unit tests, integration via public APIs, or hybrid?
Advanced Testing: Include property-based tests now or later?
State Lifecycle Tests
Cross-Component Interaction Tests
Logic Consistency Tests
Property-Based Invariants (document for later)
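The property-based invariants deferred above can be documented now as a runnable sketch. This is a hypothetical example: `transition`, the component names, and the "at most one component active" invariant are illustrative assumptions, not part of the original suite. A library such as Hypothesis could replace the manual random sampling later.

```python
import random

# Hypothetical component set; real components come from your system.
COMPONENTS = ["character_creation", "level_up", "campaign_upgrade"]

def transition(state: dict, action: str) -> dict:
    """Toy transition: activating one component deactivates all others."""
    return {c: (c == action) for c in COMPONENTS}

def check_invariant_randomized(trials: int = 200) -> None:
    """Property check: after any transition, at most one component is active."""
    rng = random.Random(42)  # seeded for reproducible failures
    for _ in range(trials):
        state = {c: rng.choice([True, False]) for c in COMPONENTS}
        action = rng.choice(COMPONENTS)
        new_state = transition(state, action)
        assert sum(new_state.values()) <= 1

check_invariant_randomized()
```

The seeded RNG keeps failures reproducible, which is the main property-based-testing habit worth adopting before pulling in a dedicated library.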
File Structure Pattern:
tests/
├── test_<component>_base.py # Base class + fixtures + assertions
├── test_<component>_lifecycle.py # State transition tests
└── test_<component>_integration.py # Cross-component + consistency tests
Base Class Components:
```python
from dataclasses import dataclass
from typing import Callable
import unittest


@dataclass
class TestScenario:
    name: str
    initial_state: dict
    action: Callable | str
    expected_state: dict
    description: str = ""


class TestBase(unittest.TestCase):
    # Custom assertion helpers (semantic)
    def assert_no_component_active(self, state): ...
    def assert_only_component_active(self, state, name): ...
    def assert_stale_flags_cleared(self, state, component): ...
    def assert_logic_consistency(self, state, action): ...

    # Scenario execution
    def run_scenario(self, scenario: TestScenario): ...

    # Test fixtures
    def create_base_state(self, **kwargs): ...
```
Test Pattern:
```python
def test_<bug_or_behavior_description>(self):
    """
    BUG FIX TEST or BEHAVIOR TEST:
    Clear description of what this verifies and why it matters.
    """
    scenario = TestScenario(
        name="descriptive_name",
        initial_state=self.create_base_state(...),
        action="user_action",  # or a callable
        expected_state={...},
        description="Why this test matters",
    )
    self.run_scenario(scenario)
```
Tests must:
Run tests and verify:
```bash
pytest tests/test_<component>_*.py -v
```
1. Test Real Failure Modes
2. Use Semantic Assertions
   Prefer `assert_no_modal_active()` over `assert state["flag"] is None`.
3. Declarative Test Scenarios
4. Integration Over Unit
5. Design for Extensibility
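Principle 2 can be made concrete with a short sketch. `assert_no_modal_active` and the flag-naming convention here are hypothetical; the point is that the semantic helper states intent once and survives new modals being added, while direct flag assertions must be updated everywhere.

```python
def assert_no_modal_active(state: dict) -> None:
    """Semantic assertion: no *_in_progress flag is set, whatever the modal set is."""
    active = [k for k, v in state.items() if k.endswith("_in_progress") and v]
    assert not active, f"Expected no active modal, found: {active}"

state = {"screen": "game", "level_up_in_progress": False}

# Direct assertion: pins one implementation detail, silently misses new modals.
assert state.get("level_up_in_progress") in (None, False)

# Semantic assertion: checks the actual invariant across all current and future flags.
assert_no_modal_active(state)
```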
Context: Modal routing system with character creation, level-up, and campaign upgrade modals.
Bugs Found:
- `level_up_in_progress=False` blocking future level-ups
- `character_creation_in_progress` not cleared on level-up exit

Test Suite Created:
Files:
- `test_modal_base.py`: Base class, fixtures, assertions
- `test_modal_state_lifecycle.py`: 11 state transition tests
- `test_modal_integration.py`: 11 cross-modal and consistency tests

Documentation:
Maintenance:
State Transition Pattern:
```python
def test_state_clears_stale_flags():
    """When condition X triggers, stale flags Y should be removed."""
    # initial_state with stale flags
    # action that triggers new condition
    # verify flags removed (not just set False)
```
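The skeleton above, filled in with a hypothetical `start_character_creation` transition (the handler name and flags are assumptions drawn from the modal case study, not the real implementation):

```python
def start_character_creation(state: dict) -> dict:
    """Hypothetical transition: entering character creation deletes stale
    flags from other modals instead of merely setting them to False."""
    cleaned = {k: v for k, v in state.items() if not k.endswith("_in_progress")}
    cleaned["character_creation_in_progress"] = True
    return cleaned

def test_state_clears_stale_flags():
    """When character creation starts, a stale level-up flag is removed."""
    initial = {"screen": "game", "level_up_in_progress": False}  # stale flag
    result = start_character_creation(initial)
    assert "level_up_in_progress" not in result  # removed, not just False
    assert result["character_creation_in_progress"] is True

test_state_clears_stale_flags()
```

Note the assertion is `not in result`, not `is False`: the level-up bug in the case study came precisely from a flag left at `False` rather than deleted.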
Cross-Component Pattern:
```python
def test_component_exit_cleans_all_flags():
    """Exiting component A clears flags from ALL components."""
    # initial_state with mixed flags
    # exit action
    # verify ALL component flags cleared
```
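A filled-in version of the cross-component skeleton, assuming a hypothetical `exit_modal` handler and the three modal flags from the case study:

```python
MODAL_FLAGS = (
    "character_creation_in_progress",
    "level_up_in_progress",
    "campaign_upgrade_in_progress",
)

def exit_modal(state: dict) -> dict:
    """Hypothetical exit handler: clears flags from ALL modals, not just
    the one being exited, so no stale flag can block a later modal."""
    return {k: v for k, v in state.items() if k not in MODAL_FLAGS}

def test_component_exit_cleans_all_flags():
    initial = {
        "screen": "game",
        "level_up_in_progress": True,            # the modal being exited
        "character_creation_in_progress": True,  # stale flag from another modal
    }
    result = exit_modal(initial)
    for flag in MODAL_FLAGS:
        assert flag not in result
    assert result == {"screen": "game"}

test_component_exit_cleans_all_flags()
```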
Consistency Pattern:
```python
def test_decision_points_agree():
    """Decision point A and B must use identical logic."""
    # state that could be ambiguous
    # check both decision points
    # verify they agree on outcome
```
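The consistency skeleton filled in with two hypothetical decision points, a router and a renderer; the function names and states are illustrative. The test sweeps states that could plausibly be judged differently and asserts agreement:

```python
def should_show_level_up_router(state: dict) -> bool:
    """Decision point A: top-level routing logic."""
    return bool(state.get("level_up_in_progress"))

def should_show_level_up_renderer(state: dict) -> bool:
    """Decision point B: rendering logic. Must use identical criteria to A."""
    return bool(state.get("level_up_in_progress"))

def test_decision_points_agree():
    """Both decision points must agree, especially on ambiguous states."""
    ambiguous_states = [
        {"level_up_in_progress": True, "character_creation_in_progress": True},
        {"level_up_in_progress": False},
        {},  # no flags at all
    ]
    for state in ambiguous_states:
        assert should_show_level_up_router(state) == should_show_level_up_renderer(state)

test_decision_points_agree()
```

When the two decision points can share one function, that refactor removes this bug class outright; this test is the fallback when they cannot.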
When to use this skill:
When NOT to use:
Success metrics:
Related commands:
- /tdd for test-first development
- /review-enhanced to validate test coverage
- /fake3 to detect test quality issues
- /solid to ensure testable code design