QA E2E Validation Workflow
Quick Start
Task Progress:
- [ ] Step 1: Read the issue and relevant specs
- [ ] Step 2: Produce a validation plan
- [ ] Step 3: Execute all test cases
- [ ] Step 4: Produce the validation report
- [ ] Step 5: File follow-up issues
Step 1: Read Inputs
- Parse the issue body: validation scope, test matrix, environments, preconditions, pass/fail criteria, evidence requirements.
- Read project user flows documentation for expected behavior.
- Read project quality documentation for DoD, testing pyramid, performance budgets.
- Read project permissions/privacy and security threat model for security test cases.
- Confirm the correct version is deployed to the test environment (see the version-check sketch after this list).
- For external library docs and current best practices, follow the project's tooling hierarchy.
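One way to confirm the deployed version, sketched below under the assumption that the environment exposes a version endpoint; the `/api/version` path and the `TEST_ENV_URL`/`EXPECTED_SHA` variables are illustrative, not part of this project.

```ts
// check-deployed-version.ts — minimal sketch; /api/version, TEST_ENV_URL, and
// EXPECTED_SHA are assumptions, substitute the project's equivalents.
const baseUrl = process.env.TEST_ENV_URL ?? "http://localhost:3000";
const expected = process.env.EXPECTED_SHA; // commit/build under validation

async function main(): Promise<void> {
  const res = await fetch(`${baseUrl}/api/version`);
  if (!res.ok) throw new Error(`Version endpoint returned ${res.status}`);
  const { commit } = (await res.json()) as { commit: string };
  if (expected && commit !== expected) {
    throw new Error(`Deployed commit ${commit} does not match expected ${expected}`);
  }
  console.log(`Deployed commit confirmed: ${commit}`);
}

main().catch((err) => { console.error(err); process.exit(1); });
```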
Step 2: Validation Plan
Before executing, output:
- Scope: feature/release being validated
- Environment: where tests will run
- Version: build/commit being tested
- Preconditions verified: checklist
- Test execution order: sequence with dependencies
- Estimated duration: time estimate
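A minimal sketch of one way to capture these plan fields before execution; the type, field names, and example values are illustrative assumptions, not a required schema.

```ts
// validation-plan.ts — illustrative shape for the plan fields listed above.
interface ValidationPlan {
  scope: string;                   // feature/release being validated
  environment: string;             // where tests will run
  version: string;                 // build/commit being tested
  preconditionsVerified: string[]; // checklist items confirmed before execution
  executionOrder: string[];        // test case IDs in dependency order
  estimatedDuration: string;       // e.g. "2h"
}

const plan: ValidationPlan = {
  scope: "Checkout flow (example)",
  environment: "staging",
  version: "abc1234",
  preconditionsVerified: ["test data seeded", "feature flag enabled"],
  executionOrder: ["TC-01", "TC-02", "TC-05"],
  estimatedDuration: "2h",
};
```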
Step 3: Execute Tests
3a. Automated Test Execution
Run the project's automated test suites (unit, integration, E2E) and record results.
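A minimal sketch of running the suites in sequence and recording each outcome, assuming npm scripts named `test:unit`, `test:integration`, and `test:e2e` exist; adjust to the project's actual commands.

```ts
// run-suites.ts — run each suite and record PASS/FAIL from its exit code.
import { spawnSync } from "node:child_process";

const suites = ["test:unit", "test:integration", "test:e2e"];
const results: Record<string, "PASS" | "FAIL"> = {};

for (const suite of suites) {
  const run = spawnSync("npm", ["run", suite], { stdio: "inherit" });
  results[suite] = run.status === 0 ? "PASS" : "FAIL";
}

console.table(results);
```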
3b. Browser-Based Validation
For each user-facing test case in the matrix (one possible flow is sketched after this list):
- Confirm the dev server is running by checking the expected port. If not running, start it in the background.
- Navigate to the page or surface under test using the browser automation MCP.
- Execute the test steps exactly as described — click, type, navigate, trigger state changes.
- Observe the actual result and compare to the expected result.
- Capture a screenshot as evidence for each test case result.
- Check the browser console for errors or warnings after each test case.
- Mark as PASS, FAIL, or BLOCKED (with reason and screenshot).
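A sketch of one such test case, using Playwright directly as a stand-in for the browser automation MCP; the port, URLs, selectors, and test case ID are assumptions drawn from a hypothetical matrix, not from this project.

```ts
// browser-validation.ts — sketch of a single browser-based test case.
import { spawn } from "node:child_process";
import { chromium } from "playwright";

const PORT = 3000;                          // assumed dev server port
const BASE_URL = `http://localhost:${PORT}`;

async function ensureDevServer(): Promise<void> {
  try {
    await fetch(BASE_URL);                  // server already responding
  } catch {
    // Not running: start it in the background and give it time to boot.
    spawn("npm", ["run", "dev"], { detached: true, stdio: "ignore" }).unref();
    await new Promise((r) => setTimeout(r, 10_000));
  }
}

async function runTestCase(): Promise<void> {
  await ensureDevServer();
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Collect console errors/warnings emitted during the test case.
  const consoleIssues: string[] = [];
  page.on("console", (msg) => {
    if (msg.type() === "error" || msg.type() === "warning") {
      consoleIssues.push(`${msg.type()}: ${msg.text()}`);
    }
  });

  // Execute the steps exactly as written in the test matrix (illustrative).
  await page.goto(`${BASE_URL}/login`);
  await page.fill("#email", "qa@example.com");
  await page.fill("#password", "not-a-real-password");
  await page.click("button[type=submit]");

  // Compare actual vs. expected result, then capture evidence.
  const actual = await page.textContent("h1");
  const pass = actual?.trim() === "Dashboard";
  await page.screenshot({ path: "evidence/TC-01.png", fullPage: true });

  console.log(`TC-01: ${pass ? "PASS" : "FAIL"}`);
  if (consoleIssues.length) console.log("Console issues:", consoleIssues);

  await browser.close();
}

runTestCase().catch((err) => { console.error(err); process.exit(1); });
```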
For non-UI test cases (API, data integrity, background jobs), verify through non-browser means such as direct API calls, database queries, or job/queue inspection.
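For example, a minimal API-level check might look like the sketch below; the endpoint and expected response shape are hypothetical.

```ts
// api-check.ts — non-UI verification sketch; /api/orders and the `orders`
// field are assumptions, substitute the contract defined in the specs.
const baseUrl = process.env.TEST_ENV_URL ?? "http://localhost:3000";

async function verifyOrdersEndpoint(): Promise<void> {
  const res = await fetch(`${baseUrl}/api/orders?limit=1`);
  if (res.status !== 200) throw new Error(`Expected 200, got ${res.status}`);
  const body = (await res.json()) as { orders?: unknown[] };
  if (!Array.isArray(body.orders)) {
    throw new Error("Response is missing the expected `orders` array");
  }
  console.log("API check: PASS");
}

verifyOrdersEndpoint().catch((err) => {
  console.error("API check: FAIL", err);
  process.exit(1);
});
```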
Do NOT fix bugs during validation. Document and file issues.
Step 4: Validation Report
Produce a structured report with:
- Summary: total/passed/failed/blocked counts, overall result
- Results table: test case, priority, result, evidence, notes
- Regression results: checks confirming flows outside the change scope still behave as expected
- Security validation: invariant checks
- Performance validation: metric vs budget vs actual
- Issues found: severity, description, issue link
- Recommendation: SHIP or HOLD with reasons
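An illustrative shape for the report, plus a helper that derives the summary counts; the names and the SHIP/HOLD rule shown here are assumptions, not a prescribed format.

```ts
// validation-report.ts — illustrative report structure and summary derivation.
type Result = "PASS" | "FAIL" | "BLOCKED";

interface TestCaseResult {
  id: string;
  priority: "P0" | "P1" | "P2";
  result: Result;
  evidence: string;   // path or link to screenshot/log
  notes?: string;
}

interface ValidationReport {
  summary: { total: number; passed: number; failed: number; blocked: number; overall: "SHIP" | "HOLD" };
  results: TestCaseResult[];
  issues: { severity: string; description: string; link: string }[];
}

// Derive the summary counts and recommendation from the per-case results.
function summarize(results: TestCaseResult[]): ValidationReport["summary"] {
  const count = (r: Result) => results.filter((x) => x.result === r).length;
  const failed = count("FAIL");
  const blocked = count("BLOCKED");
  return {
    total: results.length,
    passed: count("PASS"),
    failed,
    blocked,
    overall: failed === 0 && blocked === 0 ? "SHIP" : "HOLD",
  };
}
```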
Step 5: Follow-Up
- File new issues for bugs discovered during validation.
- If validation fails, state what must be fixed before re-validation.
- Post the report as a comment on the issue/work item or linked PR/MR (check the platform in .agents/hatch.json).
Error Handling
- Test environment unavailable or misconfigured: Document which tests could not be executed, note the environment gap, and recommend a fix. Do not mark untested scenarios as passing.
- Validation discovers a blocking defect: File an issue immediately, mark the validation as HOLD, and include the defect details in the validation report with reproduction steps.
- Flaky test results (pass on retry): Run the test 3 times. If it passes inconsistently, mark it as flaky in the report, file a tracking issue, and exclude it from the pass/fail determination.
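A minimal sketch of the three-run flakiness check described above; `runTestCase` is a placeholder for however a single case is actually executed.

```ts
// flaky-check.ts — classify a test case based on three consecutive runs.
async function classify(runTestCase: () => Promise<boolean>): Promise<"PASS" | "FAIL" | "FLAKY"> {
  const outcomes: boolean[] = [];
  for (let i = 0; i < 3; i++) outcomes.push(await runTestCase());

  if (outcomes.every(Boolean)) return "PASS";
  if (outcomes.every((o) => !o)) return "FAIL";
  // Mixed outcomes: mark as flaky in the report, file a tracking issue, and
  // exclude the case from the pass/fail determination.
  return "FLAKY";
}
```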
Definition of Done