Explain the unit test loop technique
Explains the unit test loop technique for iterative test coverage improvement.
/plugin marketplace add SomtoUgeh/somto-dev-toolkit
/plugin install somto-dev-toolkit@somto-dev-toolkit

Explain the following to the user:
It creates an iterative loop where Claude:

1. Runs the detected coverage command
2. Finds uncovered user-facing behavior
3. Writes ONE new test for it
4. Commits the test and logs progress
5. Repeats until the target, the max iterations, or the completion promise is reached
The key insight: ONE test per iteration forces focused, reviewable commits and prevents test spam.
From Matt Pocock's approach:
A great test covers behavior users depend on. It tests a feature that, if broken, would frustrate or block users. It validates real workflows - not implementation details.
Do NOT write tests just to increase coverage numbers. Use coverage as a guide to find UNTESTED USER-FACING BEHAVIOR.
If uncovered code isn't worth testing (boilerplate, unreachable branches, internal plumbing), use /* v8 ignore */ comments instead.
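For example, a defensive branch can be excluded with a provider hint instead of padded with a throwaway test. The function below is illustrative, and the exact hint syntax depends on your coverage provider (c8 reads /* c8 ignore next */, Istanbul reads /* istanbul ignore next */):

```ts
// Illustrative only: a defensive guard that is not user-facing behavior.
// Hint syntax varies by provider; the v8 form below mirrors c8's
// "ignore next <count>" convention.
/* v8 ignore next 3 */
export function assertNever(value: never): never {
  throw new Error(`Unreachable case: ${String(value)}`)
}
```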
/ut [OPTIONS] - Start a coverage improvement loop.
Options:
- --target N% - Target coverage percentage (exits when reached)
- --max-iterations N - Max iterations before auto-stop
- --test-command "cmd" - Override auto-detected coverage command
- --completion-promise "text" - Custom promise phrase (default: COVERAGE COMPLETE)

Examples:
/ut --target 80% --max-iterations 20
/ut --test-command "bun test:coverage"
/ut --completion-promise "ALL TESTS PASS" --max-iterations 10
/cancel-ut - Stop an active loop and remove the state file.
The loop automatically detects:
Coverage tools (in order):
- vitest.config.* → vitest run --coverage
- jest.config.* → jest --coverage
- c8 in package.json → npx c8 <pm> test
- nyc in package.json → npx nyc <pm> test
- coverage script → <pm> run coverage
- test:coverage script → <pm> run test:coverage

Package managers (by lockfile):
- pnpm-lock.yaml → pnpm
- bun.lockb → bun
- yarn.lock → yarn

Commit message format: test(<file>): <describe behavior>

Progress is appended to .claude/ut-progress.txt.
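The detection order can be pictured as a simple lookup, roughly like the sketch below. This is not the plugin's actual source; the specific config file names, the npm fallback, and the error message are illustrative.

```ts
import { existsSync, readFileSync } from 'node:fs'

// Rough sketch of the detection order described above (not the plugin's code).
function detectPackageManager(): string {
  if (existsSync('pnpm-lock.yaml')) return 'pnpm'
  if (existsSync('bun.lockb')) return 'bun'
  if (existsSync('yarn.lock')) return 'yarn'
  return 'npm' // assumed fallback when no lockfile matches
}

function detectCoverageCommand(): string {
  const pm = detectPackageManager()
  const pkg = JSON.parse(readFileSync('package.json', 'utf8'))
  const deps = { ...pkg.dependencies, ...pkg.devDependencies }
  const scripts = pkg.scripts ?? {}

  if (existsSync('vitest.config.ts') || existsSync('vitest.config.js')) return 'vitest run --coverage'
  if (existsSync('jest.config.ts') || existsSync('jest.config.js')) return 'jest --coverage'
  if (deps.c8) return `npx c8 ${pm} test`
  if (deps.nyc) return `npx nyc ${pm} test`
  if (scripts.coverage) return `${pm} run coverage`
  if (scripts['test:coverage']) return `${pm} run test:coverage`
  throw new Error('No coverage setup detected; pass --test-command instead')
}
```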
The loop stops when:

- <promise>YOUR_PROMISE</promise> is output (default: COVERAGE COMPLETE)
- --max-iterations is reached
- /cancel-ut is run

Files:

- .claude/ut-loop.local.md - State file (iteration, config, prompt)
- .claude/ut-progress.txt - Progress log (JSONL format)

"The more your tests resemble the way your software is used, the more confidence they can give you."
Accessible to Everyone (prefer these):
| Query | Use Case |
|---|---|
| getByRole | Top preference - use name option: getByRole('button', {name: /submit/i}) |
| getByLabelText | Form fields - emulates how users find inputs |
| getByPlaceholderText | Only if no label exists |
| getByText | Non-interactive elements (div, span, p) |
| getByDisplayValue | Filled-in form values |
Semantic Queries (less reliable):
| Query | Use Case |
|---|---|
| getByAltText | img, area, input with alt |
| getByTitle | Not consistently read by screenreaders |
Test IDs (last resort):
| Query | Use Case |
|---|---|
| getByTestId | Only when role/text doesn't work |
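For instance, a button is usually reachable by its role and accessible name, so a test id is rarely needed. The markup below is illustrative:

```tsx
import { render, screen } from '@testing-library/react'

test('queries the save button by role, not test id', () => {
  // Illustrative markup: the element has both an accessible name and a test id
  render(<button data-testid="save-btn">Save changes</button>)

  // Preferred: matches how users and assistive technology find it
  screen.getByRole('button', { name: /save changes/i })

  // Last resort: only when no role, text, or label is available
  screen.getByTestId('save-btn')
})
```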
| Type | No Match | 1 Match | >1 Matches | Async |
|---|---|---|---|---|
| getBy | throw | return | throw | No |
| queryBy | null | return | throw | No |
| findBy | throw | return | throw | Yes |
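In practice the difference shows up when asserting absence or awaiting async UI. A minimal sketch, with an illustrative SaveStatus component standing in for real async behavior:

```tsx
import { useEffect, useState } from 'react'
import { render, screen } from '@testing-library/react'
import '@testing-library/jest-dom'

// Minimal illustrative component: shows a status message after an async tick.
function SaveStatus() {
  const [saved, setSaved] = useState(false)
  useEffect(() => {
    const id = setTimeout(() => setSaved(true), 10)
    return () => clearTimeout(id)
  }, [])
  return saved ? <div role="alert">Saved</div> : null
}

test('asserts absence with queryBy and async appearance with findBy', async () => {
  render(<SaveStatus />)

  // queryBy returns null instead of throwing, so it fits absence assertions
  expect(screen.queryByText(/error/i)).not.toBeInTheDocument()

  // findBy returns a Promise and retries, so it fits elements that appear later
  expect(await screen.findByRole('alert')).toHaveTextContent(/saved/i)
})
```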
Rules:
- getBy/getAllBy - default choice
- queryBy/queryAllBy - only for asserting absence
- findBy/findAllBy - async elements (returns Promise)

Setup pattern (recommended):
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'

test('submits form', async () => {
  const user = userEvent.setup()
  render(<Form />)
  await user.type(screen.getByLabelText(/email/i), 'test@example.com')
  await user.click(screen.getByRole('button', {name: /submit/i}))
  expect(await screen.findByText(/success/i)).toBeInTheDocument()
})
Available methods:
- user.click(element) - click
- user.dblClick(element) - double click
- user.type(element, text) - type into input (clicks first)
- user.keyboard('{Enter}') - press keys
- user.clear(element) - clear input
- user.selectOptions(select, ['value']) - select dropdown
- user.upload(input, file) - file upload (combined example after the keyboard snippet below)
- user.tab() - tab navigation
- user.hover(element) / user.unhover(element)

Keyboard special keys:
await user.keyboard('{Enter}') // press Enter
await user.keyboard('{Escape}') // press Escape
await user.keyboard('{Shift>}A{/Shift}') // Shift+A
await user.keyboard('[ShiftLeft>]') // hold Shift
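A combined sketch of a few of these methods; the form markup and field names are assumed, not from the plugin docs:

```tsx
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import '@testing-library/jest-dom'

test('selects a country and uploads a file', async () => {
  const user = userEvent.setup()
  // Illustrative markup; labels follow the getByLabelText guidance above
  render(
    <form>
      <label htmlFor="country">Country</label>
      <select id="country">
        <option value="ng">Nigeria</option>
        <option value="gb">United Kingdom</option>
      </select>
      <label htmlFor="cv">CV</label>
      <input id="cv" type="file" />
    </form>
  )

  await user.selectOptions(screen.getByLabelText(/country/i), 'ng')

  const file = new File(['hello'], 'cv.pdf', { type: 'application/pdf' })
  await user.upload(screen.getByLabelText(/cv/i), file)

  expect(screen.getByLabelText(/country/i)).toHaveValue('ng')
  expect((screen.getByLabelText(/cv/i) as HTMLInputElement).files?.[0]).toBe(file)
})
```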
Use screen object:
// Preferred
render(<Component />)
expect(screen.getByText(/hello/i)).toBeInTheDocument()
// Avoid (except for asFragment)
const { getByText } = render(<Component />)
Use jest-dom matchers:
// Preferred
expect(button).toBeDisabled()
expect(element).toHaveClass('active')
expect(input).toHaveValue('text')
// Avoid
expect(button.disabled).toBe(true)
Async testing:
// Use findBy for elements that appear async
const item = await screen.findByText(/loaded/i)
// Use waitFor for complex conditions
await waitFor(() =>
  expect(screen.getByText(/success/i)).toBeInTheDocument()
)
Avoid manual act():
- render and user events are already wrapped in act() automatically
- wrap jest.advanceTimersByTime() in act() when using fake timers

Resolving "not wrapped in act" warnings (in order):
1. findBy query
2. waitFor
3. act() (last resort)
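When fake timers drive state updates, wrapping the timer advance in act() is the legitimate case from the list above. A minimal sketch with an illustrative Toast component:

```tsx
import { useEffect, useState } from 'react'
import { act, render, screen } from '@testing-library/react'
import '@testing-library/jest-dom'

// Illustrative component: hides its message after 3 seconds.
function Toast({ message }: { message: string }) {
  const [visible, setVisible] = useState(true)
  useEffect(() => {
    const id = setTimeout(() => setVisible(false), 3000)
    return () => clearTimeout(id)
  }, [])
  return visible ? <div role="status">{message}</div> : null
}

jest.useFakeTimers()

test('toast disappears after its timeout', () => {
  render(<Toast message="Saved" />)
  expect(screen.getByText(/saved/i)).toBeInTheDocument()

  // The timer callback triggers a state update, so advance timers inside act()
  act(() => {
    jest.advanceTimersByTime(3000)
  })

  expect(screen.queryByText(/saved/i)).not.toBeInTheDocument()
})
```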
Callback props:

const onClick = jest.fn()
const user = userEvent.setup()
render(<Button onClick={onClick} />)
await user.click(screen.getByRole('button'))
expect(onClick).toHaveBeenCalledTimes(1)
Child components:
jest.mock('./ChildComponent', () => () => <div>Mocked</div>)
External APIs (MSW):
import { rest } from 'msw'
import { setupServer } from 'msw/node'

// MSW v1-style handler (req, res, ctx)
const server = setupServer(
  rest.get('/api/data', (req, res, ctx) => res(ctx.json({ data: 'mocked' })))
)

beforeAll(() => server.listen())
afterEach(() => server.resetHandlers())
afterAll(() => server.close())
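A per-test override keeps the happy-path handlers above as the default while exercising failures. DataView and its error copy are hypothetical; the server comes from the setup above:

```tsx
import { rest } from 'msw'
import { render, screen } from '@testing-library/react'
import '@testing-library/jest-dom'

// Sketch only: DataView is a hypothetical component that fetches /api/data
// and renders "Something went wrong" on failure (MSW v1 handler API, as above).
test('shows an error message when the API fails', async () => {
  server.use(
    rest.get('/api/data', (req, res, ctx) => res(ctx.status(500)))
  )

  render(<DataView />)

  expect(await screen.findByText(/something went wrong/i)).toBeInTheDocument()
})
```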
- describe() only for grouping tests with shared setup
- test() outside describe(), it() inside
- assert classes with toHaveClass() rather than reading className directly
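A sketch of that convention; the test names and setup are placeholders:

```ts
// Top-level tests use test(); grouped tests with shared setup use describe() + it()
test('renders the empty state', () => {
  // ...
})

describe('with a logged-in user', () => {
  beforeEach(() => {
    // shared setup, e.g. seeding auth state
  })

  it('shows the account menu', () => {
    // ...
  })
})
```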