Generates tests for files, functions, or modules that catch real bugs. Auto-detects framework via package.json or configs, analyzes code, writes test files autonomously.
Install:

```
npx claudepluginhub mbwsims/claude-universe --plugin universe
```
Generate tests that catch real bugs. Runs autonomously — detects the framework, analyzes the code, and writes the test file. No intermediate questions or planning steps.
Complements the superpowers TDD skill. If TDD discipline is active (red-green-refactor cycle), defer to superpowers for the PROCESS of when to write vs. run vs. refactor. This skill provides the CONTENT — what to test, how to assert, what edge cases to consider.
Run all phases in sequence without stopping for user input.
Do all of this automatically without asking the user:
testkit_map: If the testkit MCP server is available, call `testkit_map` first. It returns the project's test framework, all test files mapped to source files, untested source files ranked by criticality, and a coverage ratio. Use this data to skip manual discovery and focus on what matters.
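The description above implies a response shaped roughly like this hypothetical TypeScript interface (the actual testkit MCP schema may differ):

```ts
// Hypothetical shape of a testkit_map response, inferred from the prose
// above; the real MCP tool schema may differ.
interface TestkitMap {
  framework: string;                                  // e.g. "vitest" or "pytest"
  testFiles: Record<string, string>;                  // test file -> source file it covers
  untested: { file: string; criticality: number }[];  // ranked, most critical first
  coverageRatio: number;                              // covered source files / all source files
}
```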
If `testkit_map` is unavailable, perform manual discovery (a sketch follows this list):
- Detect the framework from package.json scripts, config files (vitest.config.*, jest.config.*, pytest.ini, Cargo.toml), or existing test files. Match the project's conventions (describe/it vs test, file naming, directory structure).
- If /test-plan output exists earlier in the conversation, use it as the blueprint and implement every "must" and "should" row.
- If no test framework exists at all, pick the standard one for the stack (vitest for TypeScript, pytest for Python, go test for Go) and set it up.
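A minimal sketch of that manual discovery, assuming a Node environment; `detectFramework` is an illustrative helper, not part of the skill:

```ts
// Illustrative manual-discovery sketch: check the config files named in the
// list above, then fall back to package.json dependencies.
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

function detectFramework(root = "."): string | null {
  const has = (f: string) => existsSync(join(root, f));
  if (has("vitest.config.ts") || has("vitest.config.js")) return "vitest";
  if (has("jest.config.js") || has("jest.config.ts")) return "jest";
  if (has("pytest.ini")) return "pytest";
  if (has("Cargo.toml")) return "cargo test";
  if (has("package.json")) {
    const pkg = JSON.parse(readFileSync(join(root, "package.json"), "utf8"));
    const deps = { ...pkg.dependencies, ...pkg.devDependencies };
    if (deps.vitest) return "vitest";
    if (deps.jest) return "jest";
  }
  return null; // no framework found: set up the standard one for the stack
}
```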
Scope guard — framework setup: If you need to set up a test framework from scratch, do only the minimum: install the package, create a minimal config file, verify it runs. Do NOT refactor the project's build system, add CI configuration, or configure coverage tools. Those are separate concerns. Write the tests and move on.
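For vitest, the allowed minimum would be a single config file like this sketch (the include glob is an assumption, not a convention the skill mandates):

```ts
// vitest.config.ts: the minimal from-scratch setup. No CI, no coverage tools.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    include: ["src/**/*.test.ts"], // assumed location of test files
  },
});
```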
Fallback — no test runner available: If the test runner cannot be executed (e.g., missing dependencies, Docker-only environment, or CI-only test setup), write the test file anyway and note at the end: "Tests written but not verified — run `{command}` to execute." Do not block on runner availability.
Read the target and extract what to test:
- Input categories by data type (see references/input-space-analysis.md).
- Unit vs. integration boundaries (see references/test-architecture.md).
- Domain-specific strategies (see references/domain-strategies.md for auth, pagination, file upload, webhooks, etc.).

Do NOT present the analysis to the user or ask for confirmation. This is internal reasoning that produces better tests. The user sees the output — the tests.
Write the complete test file following these rules:
- Assert VALUES, not existence.
  Bad: `expect(result).toBeDefined()`, `expect(result).toBeTruthy()`
  Good: `expect(result).toEqual({ id: '123', name: 'Alice', role: 'admin' })`
- Assert EFFECTS, not internals.
  Bad: `expect(mockService.save).toHaveBeenCalled()`
  Good: `expect(await db.users.findById('123')).toEqual(expectedUser)`
- Test names are specifications.
  Bad: `test('it works')`, `test('test createUser')`
  Good: `test('rejects empty email with ValidationError')`
- Error tests are first-class. For every success path, write at least one corresponding failure test. Assert the specific error type AND message.
- One behavior per test. Each test verifies one input-to-output mapping.
- Consult references/assertion-depth.md for deep assertion patterns.
Organize the test file as: setup → happy path → error paths → edge cases → cleanup.
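A minimal sketch of a test file in that shape, assuming vitest; `createUser`, `ValidationError`, and the `db` handle are hypothetical names used only for illustration:

```ts
import { beforeEach, describe, expect, test } from "vitest";
// Hypothetical module under test; these imports are illustrative.
import { createUser, ValidationError } from "./users";
import { db } from "./db";

describe("createUser", () => {
  // Setup: start each test from a clean store.
  beforeEach(async () => {
    await db.users.clear();
  });

  // Happy path: assert the full value and the effect on the system.
  test("creates a user with the given name and a default member role", async () => {
    const result = await createUser({ id: "123", name: "Alice", email: "alice@example.com" });
    expect(result).toEqual({ id: "123", name: "Alice", email: "alice@example.com", role: "member" });
    expect(await db.users.findById("123")).toEqual(result);
  });

  // Error path: assert the specific error type AND message.
  test("rejects empty email with ValidationError", async () => {
    const err = await createUser({ id: "124", name: "Bob", email: "" }).catch((e) => e);
    expect(err).toBeInstanceOf(ValidationError);
    expect(err.message).toBe("email must not be empty");
  });

  // Edge case: one behavior per test.
  test("trims surrounding whitespace from the name", async () => {
    const result = await createUser({ id: "125", name: "  Carol  ", email: "carol@example.com" });
    expect(result.name).toBe("Carol");
  });
});
```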
Write the test file to the appropriate location (match the project's convention for test
file placement — co-located, __tests__/, or test/ directory).
Then run the tests:

```bash
# Detect and run the appropriate test command
npm test -- --run {test-file}   # or vitest run, pytest, go test, etc.
```
If tests fail due to implementation issues (not test issues), note what needs fixing. If tests fail due to test issues, fix the tests.
Write the test file directly. After writing, provide a brief summary:
```
Wrote {n} tests to {test-file-path}
{n} happy path · {n} error paths · {n} edge cases
{pass/fail status if tests were run}
```
No input space tables, no planning artifacts, no intermediate steps shown to the user
unless they asked for them (use /test-plan for that).
- /test-plan — Plan what to test without writing code (the "thinking step" extracted)
- /test-review — Grade existing tests and find gaps
- references/input-space-analysis.md — Input categories by data type
- references/assertion-depth.md — Shallow-to-deep assertion upgrades
- references/test-architecture.md — Unit vs. integration decision framework
- references/domain-strategies.md — Domain-specific testing strategies