Generates test plans and implements tests with AAA pattern for TDD workflows or code coverage needs. Uses vitest mocks, verifies coverage, and reports results.
Generate comprehensive tests following TDD principles and project testing standards. Runs in isolated test-engineer context with pre-loaded testing conventions.
Announce at start: "I'm using the test-generation skill to create tests for [feature/component]."
Provide clear requirements to test-engineer:
```
/test-generation

Feature: JWT token validation middleware

Behaviors:
1. Valid token → allow request
2. Expired token → reject with 401
3. Invalid signature → reject with 401
4. Missing token → reject with 401

Coverage: All critical paths

Testing Standards (pre-loaded):
- Framework: vitest
- Mocks: vi.mock()
- Structure: AAA pattern
```
Test-engineer proposes a test plan:
```
## Test Plan

Behaviors:
1. Valid token
   - ✅ Positive: Allow request with valid JWT
   - ❌ Negative: Reject malformed JWT
2. Expired token
   - ✅ Positive: Non-expired token works
   - ❌ Negative: Expired token rejected with 401

Mocking Strategy:
- JWT verification: Mock with vi.mock()
- Request/Response: Use test doubles

Coverage Target: 95% line coverage, all critical paths
```
IMPORTANT: Review and approve before implementation proceeds.
Test-engineer implements tests following the AAA pattern (Arrange-Act-Assert):
```javascript
describe('JWT Middleware', () => {
  it('allows request with valid token', () => {
    // Arrange
    const req = mockRequest({ headers: { authorization: 'Bearer valid.jwt.token' } });

    // Act
    const result = jwtMiddleware(req);

    // Assert
    expect(result.authorized).toBe(true);
  });
});
```
Test-engineer runs the suite and verifies that all tests pass:

```bash
npm test -- jwt.middleware.test.ts
```
```yaml
status: success
tests_written: 8
coverage:
  lines: 96%
  branches: 93%
  functions: 100%
behaviors_tested:
  - name: "Valid token handling"
    positive_tests: 2
    negative_tests: 2
test_results:
  passed: 8
  failed: 0
deliverables:
  - "src/auth/jwt.middleware.test.ts"
```
For test-driven development, invoke BEFORE implementation:
```
/test-generation

Write tests for user registration endpoint (not yet implemented):

Expected Behavior:
- POST /api/register with valid data → 201 + user object
- POST /api/register with duplicate email → 409 error
- POST /api/register with invalid email → 400 error

Note: Implementation does not exist. Write tests that define expected behavior.
```
Tests will fail initially; use them as a spec to guide the implementation.
Common issues:
- Tests don't match project conventions: check them against `.opencode/context/core/standards/tests.md`.
- Missing edge cases: compare the plan's positive/negative cases against what was implemented.
- Flaky tests: look for unmocked time, network, or shared state between tests.
If you think any of these, STOP and re-read this skill:
| Excuse | Reality |
|---|---|
| "It's too simple to break" | Simple code breaks in simple ways. Tests document the contract, not just catch bugs. |
| "Negative tests are obvious failures, not worth writing" | Negative tests are where bugs hide. "Obviously fails" is not the same as "correctly fails". |
| "Mocking this dependency is too hard" | Hard-to-mock dependencies are a design smell. Mock them anyway and note the smell. |
| "Tests slow down delivery" | Tests without negative cases give false confidence. False confidence slows delivery more. |