Write and review tests following these non-negotiable principles:
- `using` keyword for automatic cleanup

Parse $ARGUMENTS to determine the mode:
**Review mode**: First arg is `review`
- `/test review src/__tests__/` - Review tests in directory
- `/test review src/utils.test.ts` - Review specific test file

**Convert mode**: First arg is `convert`
- `/test convert old.test.ts` - Convert nested tests to flat

**Write mode** (default): Path to source file
- `/test src/utils/parser.ts` - Write tests for file
- `/test` with no args - Ask what to test
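A minimal sketch of the dispatch logic, assuming `$ARGUMENTS` arrives as a single whitespace-separated string (the `parseMode` helper below is illustrative, not part of the command API):

```typescript
// Illustrative only: choose a mode from the first argument.
type Mode = 'review' | 'convert' | 'write'

function parseMode(args: string): { mode: Mode; target?: string } {
  const [first, ...rest] = args.trim().split(/\s+/).filter(Boolean)
  if (first === 'review') return { mode: 'review', target: rest.join(' ') }
  if (first === 'convert') return { mode: 'convert', target: rest.join(' ') }
  // Default: write mode; an undefined target means "ask what to test".
  return { mode: 'write', target: first }
}
```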
Before generating tests, detect the framework. Check `package.json` in the project root:
- `vitest` → Use Vitest patterns
- `bun` with a `"test"` script → Use Bun test patterns
- `jest` → Recommend Vitest migration, then use Vitest patterns

```typescript
// Vitest imports
import { describe, test, expect, vi, beforeAll, afterAll, afterEach } from 'vitest'

// Bun test imports
import { describe, test, expect, mock, beforeAll, afterAll, afterEach } from 'bun:test'
```
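A hedged sketch of the detection step, assuming a standard `package.json` layout (`detectFramework` is a hypothetical helper for illustration):

```typescript
// Illustrative only: inspect package.json to pick the test framework.
import { readFileSync } from 'node:fs'
import { join } from 'node:path'

type Framework = 'vitest' | 'bun' | 'jest' | 'unknown'

function detectFramework(projectRoot = '.'): Framework {
  const pkg = JSON.parse(readFileSync(join(projectRoot, 'package.json'), 'utf8'))
  const deps = { ...pkg.dependencies, ...pkg.devDependencies }
  if (deps.vitest) return 'vitest'
  if (/\bbun test\b/.test(pkg.scripts?.test ?? '')) return 'bun'
  if (deps.jest) return 'jest' // recommend migrating to Vitest first
  return 'unknown'
}
```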
When given a source file path:
Create the test file in `__tests__/`:

```typescript
import { describe, test, expect, vi } from 'vitest'
import { functionName } from './module'

// ============================================================
// Setup Functions
// ============================================================

function setup(overrides?: Partial<SetupOptions>) {
  const defaults = { /* sensible defaults */ }
  const options = { ...defaults, ...overrides }

  // Create mocks
  const mockDependency = vi.fn()

  // Create instance or prepare state
  const instance = new Thing(options)

  return {
    instance,
    mockDependency,
    // Include everything tests might need
  }
}

// Composed setup for common scenarios
function setupWithValidInput() {
  const utils = setup()
  utils.instance.configure({ valid: true })
  return utils
}

// ============================================================
// Tests
// ============================================================

test('returns expected value for valid input', () => {
  const { instance } = setup()
  const result = instance.process('valid')
  expect(result).toBe('expected')
})

test('throws error for invalid input', () => {
  const { instance } = setup()
  expect(() => instance.process('')).toThrow('Input required')
})

test('calls dependency with correct arguments', () => {
  const { instance, mockDependency } = setup()
  instance.doWork()
  expect(mockDependency).toHaveBeenCalledWith('expected-arg')
})
```
When tests need external resources (servers, databases, files), use disposable patterns:
```typescript
// ============================================================
// Disposable Fixtures
// ============================================================

function createTestServer() {
  const app = createApp()
  let server: Server | null = null
  let url = ''

  return {
    app,
    get url() { return url },
    async start() {
      server = app.listen(0)
      const address = server.address() as { port: number }
      url = `http://localhost:${address.port}`
    },
    async [Symbol.asyncDispose]() {
      if (server) {
        await new Promise<void>(resolve => server!.close(() => resolve()))
      }
    }
  }
}

function createTestDatabase() {
  const db = new TestDatabase()

  return {
    db,
    async [Symbol.asyncDispose]() {
      await db.close()
    }
  }
}

function createTempFile(content: string) {
  const path = `/tmp/test-${Date.now()}.txt`
  writeFileSync(path, content)

  return {
    path,
    [Symbol.dispose]() {
      unlinkSync(path)
    }
  }
}

// ============================================================
// Tests with Disposables
// ============================================================

test('fetches data from API', async () => {
  await using server = createTestServer()
  server.app.get('/data', () => ({ value: 42 }))
  await server.start()

  const response = await fetch(`${server.url}/data`)
  const data = await response.json()

  expect(data.value).toBe(42)
})

test('reads and processes file', () => {
  using file = createTempFile('test content')

  const result = processFile(file.path)

  expect(result).toContain('processed')
})
```
Use descriptive names that explain behavior:
```typescript
// GOOD - describes behavior
test('returns null when user not found', () => {})
test('throws ValidationError for empty email', () => {})
test('caches response for subsequent calls', () => {})

// BAD - describes implementation
test('calls findById', () => {})
test('checks email length', () => {})
test('uses Map for storage', () => {})
```
When first argument is `review`:

Find test files matching `*.test.ts`, `*.spec.ts`, `*.test.tsx`, or `*.spec.tsx`, then flag these anti-patterns:

```typescript
// FLAG THIS ❌
describe('User', () => {
  describe('when logged in', () => {
    describe('with admin role', () => { // Too deep!
      test('can delete', () => {})
    })
  })
})

// FIX ✅
test('logged-in admin user can delete', () => {
  const { user } = setupAdminUser()
  // ...
})
```
```typescript
// FLAG THIS ❌
let user: User
let service: UserService

beforeEach(() => {
  user = createUser() // Mutable shared state!
  service = new UserService()
})

// FIX ✅
function setup() {
  const user = createUser()
  const service = new UserService()
  return { user, service }
}

test('...', () => {
  const { user, service } = setup()
})
```
```typescript
// FLAG THIS ❌
test('starts server', async () => {
  const server = await startServer()
  // server never closed!
  expect(server.isRunning).toBe(true)
})

// FIX ✅
test('starts server', async () => {
  await using server = createTestServer()
  await server.start()
  expect(server.isRunning).toBe(true)
})
```
```typescript
// FLAG THIS ❌
const testCRUD = (entity: string) => {
  test(`creates ${entity}`, () => { /* ... */ })
  test(`reads ${entity}`, () => { /* ... */ })
  test(`updates ${entity}`, () => { /* ... */ })
  test(`deletes ${entity}`, () => { /* ... */ })
}
testCRUD('user')
testCRUD('post')

// FIX ✅
// Write explicit tests - duplication is fine
test('creates user', () => {
  const { userService } = setup()
  const user = userService.create({ name: 'Test' })
  expect(user.id).toBeDefined()
})
```
```typescript
// FLAG THIS ❌
const testData = { count: 0 }

test('increments count', () => {
  testData.count++
  expect(testData.count).toBe(1)
})

test('uses count', () => {
  expect(testData.count).toBe(0) // FAILS - state leaked!
})
```
Output a report in this format:

## Test Review: path/to/tests
### Summary
- Files analyzed: X
- Issues found: Y
- Severity: High/Medium/Low
### Issues
#### 1. Nested describes in `user.test.ts:15-45`
**Severity**: High
**Pattern**: 3 levels of nesting

```typescript
// Current (lines 15-45)
describe('User', () => {
  describe('authentication', () => {
    describe('with valid credentials', () => {
```

**Fix**: Flatten to single level with descriptive test names

```typescript
test('authenticates user with valid credentials', () => {
```

#### 2. Mutable shared state in `api.test.ts:8-12`
**Severity**: High
**Pattern**: Mutable shared state

```typescript
// Current
let client: ApiClient

beforeEach(() => {
  client = new ApiClient()
})
```

**Fix**: Use setup function

```typescript
function setup() {
  return { client: new ApiClient() }
}
```
---
## Convert Mode
When first argument is `convert`:
### Process
1. **Read the test file** completely
2. **Parse the structure**:
- Identify all describe blocks and their nesting
- Find all beforeEach/afterEach hooks
- Map variable declarations to their usage
3. **Transform**:
- Flatten nested describes
- Convert beforeEach to setup functions
- Add disposable patterns for resources
4. **Write the converted file** (or show diff)
### Transformation Rules
#### Rule 1: Flatten Describes
```typescript
// Before
describe('Calculator', () => {
  describe('add', () => {
    describe('with positive numbers', () => {
      test('returns sum', () => {})
    })
  })
})

// After
test('Calculator.add returns sum for positive numbers', () => {})
```

#### Rule 2: Convert beforeEach to Setup Functions

```typescript
// Before
describe('UserService', () => {
  let service: UserService
  let mockDb: MockDatabase

  beforeEach(() => {
    mockDb = new MockDatabase()
    service = new UserService(mockDb)
  })

  test('creates user', () => {
    service.create({ name: 'Test' })
    expect(mockDb.users).toHaveLength(1)
  })
})

// After
function setup() {
  const mockDb = new MockDatabase()
  const service = new UserService(mockDb)
  return { service, mockDb }
}

test('UserService creates user', () => {
  const { service, mockDb } = setup()
  service.create({ name: 'Test' })
  expect(mockDb.users).toHaveLength(1)
})
```
#### Rule 3: Add Disposables for Resources

```typescript
// Before
describe('API', () => {
  let server: Server

  beforeAll(async () => {
    server = await startServer()
  })

  afterAll(async () => {
    await server.close()
  })

  test('responds to GET', async () => {
    const res = await fetch(`${server.url}/health`)
    expect(res.ok).toBe(true)
  })
})

// After
function createTestServer() {
  const server = new Server()
  return {
    server,
    get url() { return server.url },
    async start() { await server.listen() },
    async [Symbol.asyncDispose]() { await server.close() }
  }
}

test('API responds to GET /health', async () => {
  await using server = createTestServer()
  await server.start()
  const res = await fetch(`${server.url}/health`)
  expect(res.ok).toBe(true)
})
```
These patterns are acceptable and should NOT be flagged:
```typescript
// Global mocking (console, timers, etc.)
beforeAll(() => {
  vi.spyOn(console, 'error').mockImplementation(() => {})
})

afterEach(() => {
  vi.mocked(console.error).mockClear()
})

afterAll(() => {
  vi.mocked(console.error).mockRestore()
})

// React Testing Library cleanup
afterEach(() => {
  cleanup()
})

// Shared expensive setup (when truly necessary)
let expensiveResource: Resource

beforeAll(async () => {
  expensiveResource = await createExpensiveResource()
})

afterAll(async () => {
  await expensiveResource.dispose()
})
```
$ARGUMENTS