Install: `npx claudepluginhub mbwsims/claude-universe --plugin universe`
Produce a structured test plan without writing any test code. Analyze the target code, decompose its input space, recommend test architecture, and identify mock boundaries. The output is a table that reveals gaps in the developer's mental model of what needs testing.
Useful for: scoping a test suite before any code is written, surfacing untested input categories, and agreeing on mock boundaries up front.
- If testkit_map is available, call it to understand which functions already have tests and which need plans. This avoids planning for code that's already well-tested.
- If relevant tests already exist, call testkit_analyze on those files to find shallow assertions, missing error coverage, and other gaps that the new plan should close.
- If testkit_map is unavailable, note: "Running without testkit-mcp — discovering test coverage manually. Install the testkit MCP server for automated coverage mapping." Then manually Glob for existing test files and read them to understand current coverage.
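The manual fallback can be sketched as a small heuristic. This is an illustrative sketch, not part of the skill: it assumes the common convention that a file named test_foo.py covers foo.py, which may not hold for every project layout.

```python
from pathlib import Path

def discover_test_coverage(root: str) -> dict[str, list[str]]:
    """Map each source module to the test files that appear to cover it.

    Heuristic (assumption): a file named test_foo.py covers foo.py.
    """
    root_path = Path(root)
    # Collect source modules, skipping the test files themselves.
    sources = {p.stem: p for p in root_path.rglob("*.py")
               if not p.stem.startswith("test_")}
    coverage: dict[str, list[str]] = {name: [] for name in sources}
    for test_file in root_path.rglob("test_*.py"):
        covered = test_file.stem.removeprefix("test_")
        if covered in coverage:
            coverage[covered].append(str(test_file))
    return coverage
```

Modules that come back with an empty list are the ones the plan should prioritize.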
Read the code to be tested. If a specific function was named, focus on that. If a file or module was named, analyze each exported function/class.
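Enumerating the exported targets of a module can be done mechanically. A minimal Python sketch using the standard inspect module (the module-membership check is a heuristic to skip re-exported imports; adapt as needed):

```python
import inspect

def exported_targets(module) -> list[str]:
    """List public functions and classes defined in `module` itself,
    skipping imports and underscore-prefixed names."""
    names = []
    for name, obj in inspect.getmembers(module):
        if name.startswith("_"):
            continue
        if not (inspect.isfunction(obj) or inspect.isclass(obj)):
            continue
        # Keep only objects actually defined in this module.
        if getattr(obj, "__module__", None) == module.__name__:
            names.append(name)
    return names
```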
For each function/class, document its contract: what it promises, its inputs and outputs, and how it fails.
Apply input space analysis (see skills/test/references/input-space-analysis.md
for the full methodology) to produce a table for each function:
### Input Space — {functionName}({params})
| Category | Input | Expected | Priority |
|----------|-------|----------|----------|
| Canonical | {typical valid input} | {expected output} | must |
| Empty | {empty/zero variant} | {error or default} | must |
| Boundary | {at threshold} | {edge behavior} | must |
| Null | {null/undefined/None} | {error} | must |
| Invalid | {wrong type/value} | {error} | should |
| Adversarial | {attack input} | {safe handling} | should |
| Combinatorial | {conflicting options} | {defined behavior} | nice |
Categories map to the reference type tables:
- Strings: empty, whitespace, unicode, special chars
- Numbers: zero, negative, NaN, float precision
- Arrays: empty, single, large, nulls
- Objects: empty, missing keys, extra keys, wrong types
- Dates: epoch, DST, leap year
- Async: resolve, reject, timeout, cancel
- Stateful: initial, mutated, concurrent
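To make the table concrete, here is a sketch that renders the rows for a hypothetical parse_port(s) function (the function, its inputs, and its expected behaviors are illustrative assumptions, not part of the skill):

```python
def render_input_space(signature: str, rows: list[tuple[str, str, str, str]]) -> str:
    """Render (category, input, expected, priority) rows as the plan's markdown table."""
    lines = [
        f"### Input Space — {signature}",
        "| Category | Input | Expected | Priority |",
        "|----------|-------|----------|----------|",
    ]
    lines += [f"| {c} | {i} | {e} | {p} |" for c, i, e, p in rows]
    return "\n".join(lines)

# Hypothetical target: parse_port(s) -> int, accepting "1".."65535".
rows = [
    ("Canonical",     "'8080'",     "8080",             "must"),
    ("Empty",         "''",         "ValueError",       "must"),
    ("Boundary",      "'65535'",    "65535",            "must"),
    ("Null",          "None",       "TypeError",        "must"),
    ("Invalid",       "'http'",     "ValueError",       "should"),
    ("Adversarial",   "'8080; rm'", "ValueError",       "should"),
    ("Combinatorial", "' 08080 '",  "defined behavior", "nice"),
]
table = render_input_space("parse_port(s)", rows)
```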
Mark each row must, should, or nice in the Priority column.
Based on the code archetype (see skills/test/references/test-architecture.md), recommend unit tests, integration tests, or both.
Explicitly state the mock boundary: which external dependencies to mock and which internal modules to test through.
Output format:
## Test Plan — {function/module name}
### Contract
{Summary of what the code promises}
### Input Space
{Table from step 3}
### Architecture
{Unit / Integration / Both — and why}
### Mock Boundary
- Mock: {list of external dependencies to mock}
- Real: {list of internal modules to test through}
### Summary
{N} tests planned: {n} must, {n} should, {n} nice-to-have
For a module with multiple functions, produce one plan per function, then a summary of total tests planned.
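The mock-boundary principle above can be sketched with Python's unittest.mock. All three functions here (fetch_json, format_user, user_label) are hypothetical stand-ins: the external fetcher is mocked, while the internal formatting helper is exercised for real.

```python
from unittest.mock import Mock

def fetch_json(url: str) -> dict:          # external boundary: network I/O
    raise NotImplementedError("real network call happens here")

def format_user(raw: dict) -> str:         # internal helper: test through it
    return f"{raw['name']} <{raw['email']}>"

def user_label(url: str, fetch=fetch_json) -> str:   # code under test
    return format_user(fetch(url))

# Mock: fetch_json (crosses the network boundary).
# Real: format_user (internal logic the tests should exercise).
fake_fetch = Mock(return_value={"name": "Ada", "email": "ada@example.com"})
label = user_label("https://api.example.com/users/1", fetch=fake_fetch)

assert label == "Ada <ada@example.com>"
fake_fetch.assert_called_once_with("https://api.example.com/users/1")
```

Mocking only at the external boundary keeps the test meaningful: a bug in format_user still fails the test, while the network stays out of it.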
The plan can be handed to /test and a complete set of tests written from it. A well-scoped plan keeps suites honest: a formatDate function needs 3 tests, not 15.

Related:
- /test — Hand the plan to /test to generate the actual test code
- /test-review — Use to grade existing tests against the plan
- references/plan-templates.md — Pre-filled input space templates by code archetype