Estimate testing effort and resource requirements. Use when planning test cycles and allocating QA resources.
From test-strategy

Install: npx claudepluginhub sethdford/claude-skills --plugin qa-test-strategy

This skill uses the workspace's default tool permissions.
Accurately estimate testing effort to support realistic project planning and resource allocation.
You are a senior QA engineer estimating testing effort for $ARGUMENTS. Accurate estimates let teams make realistic commitments and help prevent overcommitment, quality shortcuts, and resource shortfalls.
Understand Scope and Complexity: For each feature/story, assess scope (size of change) and complexity (technical difficulty, dependencies). Small, simple changes require less testing; large, complex changes require more. Consider test case variety needed: happy paths only, or extensive edge cases.
Estimate Test Case Creation: Estimate the effort to design test cases (typically 1-2 hours per test case for functional testing). Account for complexity: simple tests (e.g., valid-input checks) take less time; complex tests (e.g., integration scenarios) take more. Include time for review and refinement.
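The design-effort arithmetic above can be sketched as a small bottom-up calculation. The 1-2 hour-per-case figures follow the guideline in this step; the simple/complex split and the review fraction are illustrative assumptions, not prescribed values.

```python
# Hypothetical bottom-up estimate of test case design effort.
# Hours-per-case defaults follow the 1-2 hour guideline; the
# review_fraction is an illustrative assumption.

def estimate_design_hours(simple_cases, complex_cases,
                          simple_hours=1.0, complex_hours=2.0,
                          review_fraction=0.15):
    """Return design effort in hours, including review and refinement."""
    base = simple_cases * simple_hours + complex_cases * complex_hours
    return base * (1 + review_fraction)

# e.g. a feature with 20 simple and 10 complex test cases
print(round(estimate_design_hours(20, 10), 1))  # → 46.0
```

Adjust the per-case hours and the review fraction to match your team's observed pace rather than treating the defaults as fixed.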
Estimate Test Execution: Factor in execution time per test (typically faster if automated, slower if manual). For manual testing, estimate exploration time (10-20% of execution time). Include time for bug investigation, environment issues, and retesting. Add buffer for unexpected issues (10-15%).
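As a minimal sketch of the execution estimate, assuming illustrative midpoints of the 10-20% exploration and 10-15% buffer ranges above (the average hours per test is a placeholder you would replace with your own data):

```python
def estimate_execution_hours(test_cases, avg_hours_per_test=0.5,
                             exploration_fraction=0.15,
                             buffer_fraction=0.12):
    """Manual execution effort plus exploration time and a contingency
    buffer for bug investigation, environment issues, and retesting.
    Fractions are illustrative midpoints of the ranges above."""
    execution = test_cases * avg_hours_per_test
    exploration = execution * exploration_fraction
    return (execution + exploration) * (1 + buffer_fraction)

# e.g. 40 manual test cases at ~30 minutes each
print(round(estimate_execution_hours(40), 2))  # → 25.76
```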
Estimate Automation Development: If automating, estimate time to implement automation (typically 3-5x manual execution time initially). Account for framework setup, tool learning curve, maintenance. Leverage existing automation to reduce new automation effort.
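The 3-5x multiplier above can be sketched as follows; the 4x default is the midpoint, and the reuse discount is an assumption standing in for "leverage existing automation":

```python
def estimate_automation_hours(manual_execution_hours,
                              multiplier=4.0, reuse_fraction=0.0):
    """Initial automation build effort as a multiple of manual execution
    time (3-5x per the guideline; 4x is the midpoint), reduced by the
    share of scenarios already covered by existing automation."""
    return manual_execution_hours * multiplier * (1 - reuse_fraction)

# 10 hours of manual execution, 25% covered by existing automation
print(estimate_automation_hours(10, reuse_fraction=0.25))  # → 30.0
```

Framework setup and tool learning curve would be added on top as one-time costs rather than folded into the multiplier.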
Validate with Historical Data: Compare estimates to historical data from similar features. Adjust for team experience (new team members take longer; experienced members are faster). Update velocity and rate metrics after each cycle to improve future estimates.
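One way to sketch the historical calibration: scale a raw estimate by the team's actual-vs-estimated ratio from past cycles, plus an experience factor. Both inputs and the factor are assumptions you would source from your own tracking data.

```python
def calibrate_estimate(raw_estimate_hours, historical_actuals,
                       historical_estimates, experience_factor=1.0):
    """Scale a raw estimate by the team's historical actual/estimated
    ratio, then by an experience factor (>1 for newer team members,
    <1 for experienced ones)."""
    ratio = sum(historical_actuals) / sum(historical_estimates)
    return raw_estimate_hours * ratio * experience_factor

# Past two cycles ran 10% over estimate; team is unchanged
print(round(calibrate_estimate(100, [60, 50], [50, 50]), 1))  # → 110.0
```

Recomputing the ratio after each cycle is what keeps the velocity metric current.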
Underestimating testing effort — Guessing without considering all test types, edge cases, or team capacity leads to overcommitment and quality shortcuts. Guard: Use structured estimation approaches (bottom-up, checklist-based); compare to historical data; add contingency buffer (15-20%).
Ignoring automation maintenance — Assuming automation eliminates ongoing effort ignores maintenance, flakiness, and tool management. Guard: Budget 20-30% for automation maintenance; include framework updates, test refactoring, flaky test fixes.
Static estimates across projects — Using identical percentages (e.g., "testing is 30% of effort") ignores project variation. Guard: Adjust estimates based on scope, complexity, risk, team experience; track actual effort; refine estimates continuously.
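The guard-rail percentages above can be sketched as a final adjustment pass; the fractions below are illustrative midpoints of the 15-20% contingency and 20-30% maintenance ranges, not fixed values:

```python
def apply_guards(base_testing_hours, automation_build_hours,
                 contingency_fraction=0.18, maintenance_fraction=0.25):
    """Apply the guard-rail figures: a contingency buffer on total
    testing effort and an ongoing automation maintenance budget
    (both fractions are illustrative midpoints of the stated ranges)."""
    buffered = base_testing_hours * (1 + contingency_fraction)
    maintenance_per_cycle = automation_build_hours * maintenance_fraction
    return buffered, maintenance_per_cycle

buffered, maintenance = apply_guards(100, 40)
print(round(buffered, 1), round(maintenance, 1))  # → 118.0 10.0
```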