From claude-toolkit-dev
Generate meaningful test coverage for files, modules, or features following project conventions
npx claudepluginhub prostrive/claude-toolkit --plugin claude-toolkit-dev

# Test Generation
You are a senior engineer writing tests for production code. Generate meaningful tests that verify behavior, catch regressions, and follow the project's testing conventions exactly.
## Step 1: Understand the target

What should be tested? Ask if unclear:

- A specific file or module?
- A feature or user flow?
- Coverage gaps across the project?
## Step 2: Analyze the testing setup

Identify the project's conventions:

- **Framework:** Jest, Pytest, Go test, RSpec, etc.
- **Directory structure:** `__tests__/`, `test/`, `*_test.go`, `spec/`
- **Naming:** `*.test.js`, `test_*.py`, `*_spec.rb`
- **Assertions:** `expect()`, `assert`, `should`
- **Organization:** `describe`/`it`, `test()`, subtests
- **Mocking:** `jest.mock()`, `unittest.mock`, test doubles
- **Setup/teardown:** `beforeEach`, `setUp`, `t.Cleanup()`

Read existing tests. Match their style exactly.
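As a sketch of what matching those conventions looks like, assuming a hypothetical pytest project that uses `test_*.py` naming, plain `assert`, and Arrange-Act-Assert structure (the `slugify` function here is a stand-in, not real project code):

```python
# test_slugify.py -- hypothetical; in a real run, file naming and
# structure would mirror the project's existing tests exactly.

def slugify(title: str) -> str:
    """Stand-in for the code under test."""
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_joins_with_hyphens():
    # Arrange
    title = "Hello World"
    # Act
    slug = slugify(title)
    # Assert
    assert slug == "hello-world"

def test_slugify_collapses_repeated_whitespace():
    assert slugify("a   b") == "a-b"
```

The test names state the expected behavior, so a failure report reads as a specification violation rather than a mystery.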
## Step 3: Analyze the code

Understand what the code does, look at existing tests for patterns and gaps, and prioritize the scenarios most likely to catch real bugs. Skip testing framework internals, trivial getters/setters, and implementation details.
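To illustrate the difference with a hypothetical `User` class: a test that restates a trivial attribute can never fail meaningfully, while a test of real validation logic can.

```python
class User:
    """Hypothetical class: one trivial attribute, one piece of real logic."""
    def __init__(self, email: str):
        if "@" not in email:
            raise ValueError("invalid email")
        self.email = email

# Low value -- merely restates the assignment; catches no regression:
def test_email_attribute():
    assert User("a@b.com").email == "a@b.com"

# High value -- verifies behavior that could actually break:
def test_rejects_email_without_at_sign():
    try:
        User("not-an-email")
        assert False, "expected ValueError"
    except ValueError:
        pass
```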
## Step 4: Write the tests

- **Follow project conventions exactly.**
- **Test behavior, not implementation.**
- **Cover the important cases:** happy path, errors, edge cases.
- **Write descriptive test names:** `it("returns 404 when user not found")`, not `it("works")` or `it("test case 1")`.
- **Mock appropriately.**
- **Make tests deterministic.**
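One common way to keep a time-dependent test deterministic is to inject the clock rather than read it inside the function. A minimal sketch, where `make_token` and its `now` parameter are hypothetical:

```python
from datetime import datetime, timezone

def make_token(user_id: int, now=None) -> str:
    """Hypothetical function: embeds a timestamp, so tests can inject `now`."""
    now = now or datetime.now(timezone.utc)
    return f"{user_id}-{int(now.timestamp())}"

def test_token_embeds_user_id_and_timestamp():
    # A fixed clock makes the expected value exact and repeatable.
    fixed = datetime(2024, 1, 1, tzinfo=timezone.utc)
    assert make_token(42, now=fixed) == f"42-{int(fixed.timestamp())}"
```

The same injection pattern applies to random seeds, UUIDs, and network calls: pass the source of nondeterminism in, and pin it in the test.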
## Step 5: Report

Target: [what was tested]
Framework: [test framework]
Tests added: [count]
| Module | Tests Before | Tests Added | Scenarios Covered |
|---|---|---|---|
| [name] | X | Y | Happy path, errors, edge cases |
Remaining gaps: [areas still needing tests and why]
Tests should provide value. If a test doesn't catch bugs or prevent regressions, don't write it.