# qa-engineer
Generate test cases and test code for a function, component, endpoint, or feature.
```shell
npx claudepluginhub hpsgd/turtlestack --plugin qa-engineer
```

This skill is limited to using the following tools:
Generate tests for $ARGUMENTS.
No production code without a failing test first. This is not a suggestion. This is the process:
RED-GREEN per feature slice, not all RED then all GREEN:
Each slice must be a complete behaviour. Never leave more than one test failing at a time.
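A minimal sketch of one RED-GREEN slice, using a hypothetical `slugify` helper (not part of this skill):

```python
# RED: write one failing test first -- slugify does not exist yet,
# so running this test fails.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


# GREEN: add the minimal production code that makes the test pass,
# then re-run the test and watch it go green.
def slugify(text: str) -> str:
    """Lowercase the text and replace spaces with hyphens."""
    return text.lower().replace(" ", "-")
```

Only after the test is green do you move to the next slice; at no point is more than one test failing.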
Place tests in the `tests/` directory.

For every function/component/endpoint, identify these categories:
Happy path (MUST have):
Edge cases (MUST have):
Error cases (MUST have):
State transitions (if applicable):
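For illustration, assuming a hypothetical `divide` function, the first three categories might each get at least one pytest test:

```python
import pytest


def divide(a: float, b: float) -> float:
    """Hypothetical function under test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b


# Happy path: typical valid input gives the expected result
def test_divides_two_numbers():
    assert divide(10, 4) == 2.5


# Edge case: boundary input (zero numerator)
def test_zero_numerator_returns_zero():
    assert divide(0, 5) == 0


# Error case: invalid input raises a clear, specific error
def test_zero_denominator_raises():
    with pytest.raises(ValueError):
        divide(1, 0)
```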
Follow this structure for every test:
Arrange — set up preconditions and inputs
Act — invoke the behaviour under test (ONE action)
Assert — verify ONE expected outcome
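In pytest, with a hypothetical in-memory repository standing in for a real dependency, one AAA test might read:

```python
class InMemoryUserRepo:
    """Hypothetical test double; stands in for a real repository."""

    def __init__(self):
        self._users = {}

    def add(self, user_id: str, name: str) -> None:
        self._users[user_id] = name

    def get(self, user_id: str):
        return self._users.get(user_id)


def test_get_returns_stored_user_name():
    # Arrange: set up preconditions and inputs
    repo = InMemoryUserRepo()
    repo.add("u1", "Ada")

    # Act: invoke the behaviour under test (one action)
    result = repo.get("u1")

    # Assert: verify one expected outcome
    assert result == "Ada"
```

One action, one outcome: a test that acts twice or asserts on unrelated state is two tests in disguise.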
Run tests using the correct runner in run mode (never watch mode):
```shell
# TypeScript/Vitest
CI=true npx vitest run path/to/file.test.ts

# TypeScript/Jest
CI=true npx jest path/to/file.test.ts

# .NET/xUnit
dotnet test --filter "FullyQualifiedName~ClassName"

# Python/pytest
pytest tests/path/to/test_file.py -v
```
After every test run, verify no orphaned processes:
```shell
pgrep -f "vitest|jest" || echo "Clean"
```
```typescript
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { myFunction } from './my-function';

// Hoist mock state BEFORE imports are evaluated
const mockDependency = vi.hoisted(() => ({
  doThing: vi.fn(),
}));

vi.mock('./dependency', () => ({ dependency: mockDependency }));

describe('MyFunction', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('returns null when user is not found', async () => {
    // Arrange
    mockDependency.doThing.mockResolvedValue(null);

    // Act
    const result = await myFunction('nonexistent-id');

    // Assert
    expect(result).toBeNull();
  });
});
```
Rules:
- Use `vi.hoisted()` + `vi.mock()` for module mocks; never an inline factory without hoisting
- Use the `jsdom` environment to test logic, not the DOM
- Name test files `my-function.test.ts`, next to `my-function.ts`
- Use `vi.fn()` for function mocks, `vi.spyOn()` for partial mocks
- Use `beforeEach(() => vi.clearAllMocks())` to prevent state leakage

```csharp
public class WhenCreatingANewSource
{
    [Fact]
    public async Task it_returns_the_created_source()
    {
        // Arrange
        var command = SourceFactory.CreateCommand();
        var handler = new CreateSourceHandler(
            Substitute.For<IDocumentSession>());

        // Act
        var result = await handler.Handle(command);

        // Assert
        result.ShouldNotBeNull();
        result.Name.ShouldBe(command.Name);
    }
}
```
Rules:
- Name test classes for the behaviour: `WhenDoingSomething`, `GivenSomeState`
- Mock dependencies with NSubstitute (`Substitute.For<T>()`)
- Assert with Shouldly (`ShouldBe`, `ShouldNotBeNull`, `ShouldThrow`)
- Build test data with factories (`SourceFactory.Create()`)

```gherkin
# tests/features/user_login.feature
Feature: User login

  Scenario: Successful login with valid credentials
    Given a registered user with email "test@example.com"
    When the user logs in with correct password
    Then the response contains a valid access token
```
```python
# tests/step_defs/test_user_login.py
from pytest_bdd import given, when, then, scenarios

scenarios('../features/user_login.feature')


@given('a registered user with email "test@example.com"', target_fixture='registered_user')
def registered_user(db_session):
    return UserFactory.create(email="test@example.com")


@when('the user logs in with correct password', target_fixture='login')
def login(client, registered_user):
    return client.post('/login', json={...})


@then('the response contains a valid access token')
def verify_token(login):
    assert login.status_code == 200
    assert 'access_token' in login.json()
```
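The `UserFactory` and `db_session` used above would typically come from shared fixtures. A minimal `conftest.py` sketch (the `FakeDb` and `User` stand-ins are assumed for illustration, not from this skill):

```python
# tests/conftest.py (sketch)
import pytest


class User:
    """Minimal stand-in for a real user model."""
    def __init__(self, email):
        self.email = email


class FakeDb:
    """Stand-in for a real database session."""
    def __init__(self):
        self.users = []

    def add(self, user):
        self.users.append(user)


@pytest.fixture
def db_session():
    return FakeDb()


@pytest.fixture
def user_factory(db_session):
    """Factory fixture: steps call user_factory(email=...) to get test users."""
    def create(email="test@example.com"):
        user = User(email)
        db_session.add(user)
        return user
    return create
```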
Rules:
- Keep `.feature` files business-readable; no infrastructure details
- Put shared fixtures and factories in `conftest.py`
- Use `@pytest.fixture` for test data factories and shared setup
- Never hard-code waits; `sleep(1000)` is a race condition

Every test generation output MUST include:
### Evidence
| Test | Command | Exit | Result |
|---|---|---|---|
| [test name] | [exact command] | [0/1] | [PASS/FAIL: message] |
### Coverage Summary
- Happy path scenarios: [count] tested
- Edge cases: [count] tested
- Error cases: [count] tested
- Total tests: [count] passing, [count] failing, [count] skipped
"Tests pass" without an exit code is not evidence. Every claim requires the exact command and its exit code.
Deliver:
- `/qa-engineer:write-bug-report`: when tests reveal defects, write a structured bug report.
- `/qa-lead:test-strategy`: for defining the overall test strategy before generating individual tests.