Design systematic test cases covering valid inputs, invalid inputs, and edge cases. Use when creating manual test suites.
From functional-testing. Install with: npx claudepluginhub sethdford/claude-skills --plugin qa-functional-testing. This skill uses the workspace's default tool permissions.
Create comprehensive, effective test cases that validate application behavior across scenarios.
You are a senior QA engineer designing test cases for $ARGUMENTS. Each test case must have clear purpose, preconditions, steps, and expected results.
Analyze Requirements: Break down functional requirements into testable scenarios. Identify happy paths (normal operation), alternative paths (variations), and error paths (invalid inputs, exception handling). Document requirement traceability.
Apply Design Techniques: Use boundary value analysis (test at and just beyond each boundary: e.g., 0, 1, max, max+1 for a field accepting 1 to max), equivalence partitioning (test one representative value per partition), and error guessing (apply heuristics for likely failure modes). Combine techniques to achieve comprehensive coverage without test explosion.
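As a sketch of these two techniques, assuming a hypothetical quantity field that accepts integers 1 through 100 (the validator and limits are illustrative, not from the source):

```python
def is_valid_quantity(qty: int) -> bool:
    """Hypothetical validator under test: accepts integers 1 through 100."""
    return 1 <= qty <= 100

# Boundary value analysis: test at each boundary and just outside it.
boundary_cases = [(0, False), (1, True), (2, True),
                  (99, True), (100, True), (101, False)]

# Equivalence partitioning: one representative per partition
# (below range, in range, above range).
partition_cases = [(-5, False), (50, True), (500, False)]

for qty, expected in boundary_cases + partition_cases:
    assert is_valid_quantity(qty) == expected, f"qty={qty}"
print("all boundary and partition cases pass")
```

Nine targeted values here give the same defect-finding power as exhaustively testing every integer in and around the range.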
Create Test Cases: For each scenario, write test case with: title, preconditions, steps, expected result, test data. Include positive tests (valid inputs, expected behavior), negative tests (invalid inputs, error handling), and edge case tests (boundary conditions).
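One lightweight way to capture that structure is a simple record type; a minimal sketch using a hypothetical login feature (the dataclass, IDs, and field values are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Minimal test-case record: title, preconditions, steps, expected result, data."""
    case_id: str
    title: str
    preconditions: list[str]
    steps: list[str]
    expected_result: str
    test_data: dict = field(default_factory=dict)

# A positive, a negative, and an edge-case test for a hypothetical login form.
cases = [
    TestCase("TC-LOGIN-001", "Login with valid credentials",
             ["User 'alice' exists and is active"],
             ["Enter 'alice' in username field", "Enter valid password", "Click Login"],
             "Dashboard page is displayed",
             {"username": "alice"}),
    TestCase("TC-LOGIN-002", "Login rejects wrong password",
             ["User 'alice' exists and is active"],
             ["Enter 'alice' in username field", "Enter 'wrong-pass' in password field", "Click Login"],
             "Error 'Invalid credentials' is shown; no session is created"),
    TestCase("TC-LOGIN-003", "Login with maximum-length username",
             ["User with a 64-character name exists"],
             ["Enter the 64-character username", "Enter valid password", "Click Login"],
             "Login succeeds without truncating the username"),
]

for c in cases:
    print(c.case_id, "-", c.title)
```

Keeping cases as structured data rather than free prose makes the completeness checks in the review step mechanical.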
Review Test Cases: Validate test cases for clarity (another QA engineer can execute without ambiguity), independence (tests don't depend on each other), and completeness (all requirements covered, all error paths included). Identify redundant tests; consolidate related tests.
Organize Test Cases: Structure tests in logical groups (by feature, by scenario, by priority). Establish test case numbering/naming convention. Document test case relationships and dependencies. Track traceability between requirements and test cases.
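The requirement-to-test traceability above can also be kept as simple data; a minimal sketch with hypothetical requirement and test-case IDs:

```python
# Requirement-to-test-case traceability matrix (all IDs are illustrative).
traceability = {
    "REQ-001": ["TC-LOGIN-001", "TC-LOGIN-002"],  # authentication behavior
    "REQ-002": ["TC-LOGIN-003"],                  # input length limits
    "REQ-003": [],                                # not yet covered
}

# Coverage check: flag requirements with no linked test cases.
uncovered = [req for req, tcs in traceability.items() if not tcs]
print("uncovered requirements:", uncovered)
```

Running this check after each round of test design surfaces coverage gaps before execution begins.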
Vague test case steps — Steps like "enter customer data" leave executors guessing at exact input. Guard: Document exact values: "Enter '12345678' in customer ID field", "Select 'Gold' from membership tier dropdown".
Test case sprawl — Creating hundreds of similar test cases without strategy leads to maintenance burden. Guard: Use design techniques (BVA, equivalence partitioning) to minimize test count; combine related scenarios into parameterized tests.
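One parameter table can replace a pile of near-identical cases; a sketch for a hypothetical membership-discount function (the table drops unchanged into pytest.mark.parametrize in a pytest suite):

```python
def compute_discount(tier: str, order_total: float) -> float:
    """Hypothetical function under test: percentage discount by membership tier."""
    rates = {"Bronze": 0.0, "Silver": 0.05, "Gold": 0.10}
    return round(order_total * rates.get(tier, 0.0), 2)

# One parameter table replaces four near-identical test cases.
discount_cases = [
    ("Bronze", 100.0, 0.0),
    ("Silver", 100.0, 5.0),
    ("Gold",   100.0, 10.0),
    ("Gold",     0.0, 0.0),  # edge case: empty order
]

for tier, total, expected in discount_cases:
    assert compute_discount(tier, total) == expected, (tier, total)
```

Adding a scenario is now a one-line table entry rather than a new hand-written test case.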
Missing negative tests — Testing happy paths only misses error handling defects. Guard: For each positive test, create corresponding negative tests: invalid input, missing data, out-of-range values.
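A positive/negative pairing, sketched for a hypothetical customer-ID parser (the function and its 8-digit rule are assumptions for illustration):

```python
def parse_customer_id(raw: str) -> int:
    """Hypothetical function under test: customer IDs are 8-digit strings."""
    if len(raw) != 8 or not raw.isdigit():
        raise ValueError(f"invalid customer ID: {raw!r}")
    return int(raw)

# Positive test: a valid 8-digit ID parses.
assert parse_customer_id("12345678") == 12345678

# Corresponding negative tests: invalid characters, missing data, wrong length.
for bad in ["1234abcd", "", "123456789"]:
    try:
        parse_customer_id(bad)
    except ValueError:
        pass  # expected: each invalid input must be rejected
    else:
        raise AssertionError(f"accepted invalid ID {bad!r}")
```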