From prism-devtools
Use to create QA task documents. Generates test specifications and validation requirements for QA team.
`npx claudepluginhub resolve-io/.prism`

This skill uses the workspace's default tool permissions.
Generate a test specification document for the QA agent to implement, based on a validated customer issue.
Create a comprehensive test specification that describes WHAT to test, not HOW to implement it. The QA agent will use this specification to write actual test code.
```yaml
inputs:
  - Issue validation report from Playwright
  - Screenshots and error evidence
  - Documented reproduction steps
  - Root cause investigation findings

test_specification:
  ticket_id: "ISSUE-{number}"
  priority: "P0|P1|P2|P3"

  test_objective: |
    Ensure that {specific functionality} works correctly
    when {user action} under {conditions}.

  test_scope:
    in_scope:
      - "{Feature/component to test}"
      - "{User workflows affected}"
      - "{Error conditions to verify}"
    out_of_scope:
      - "{What NOT to test}"
      - "{Unrelated features}"

  test_scenarios:
    - scenario_1:
        name: "Happy path - {description}"
        given: "{Initial state}"
        when: "{User action}"
        then: "{Expected result}"
    - scenario_2:
        name: "Error case - {description}"
        given: "{Initial state with issue}"
        when: "{Action that triggers issue}"
        then: "{Issue should not occur}"
    - scenario_3:
        name: "Edge case - {description}"
        given: "{Boundary condition}"
        when: "{Edge case action}"
        then: "{Graceful handling}"

  test_requirements:
    test_type: "E2E Integration"
    environment:
      - Test database with sample data
      - Authentication enabled
      - External services mocked/containerized
    test_data:
      - User accounts with various roles
      - Sample transactions/records
      - Edge case data sets
    assertions_needed:
      - Response status codes
      - Data integrity checks
      - UI state validations
      - Performance thresholds

  coverage_goals:
    - All reported issue scenarios
    - Related functionality
    - Regression prevention
```
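A filled-in specification can be sanity-checked before handoff to QA. A minimal sketch in Python, assuming the spec has been loaded into a plain dict and each scenario flattened to a `name`/`given`/`when`/`then` mapping (the `validate_spec` helper and the sample values are hypothetical, not part of the skill):

```python
import re

REQUIRED_SCENARIO_KEYS = {"name", "given", "when", "then"}

def validate_spec(spec: dict) -> list[str]:
    """Return a list of problems found in a filled-in test_specification dict."""
    problems = []
    # ticket_id must match the ISSUE-{number} pattern from the template.
    if not re.fullmatch(r"ISSUE-\d+", spec.get("ticket_id", "")):
        problems.append("ticket_id must look like ISSUE-123")
    # priority is one of the four levels listed in the template.
    if spec.get("priority") not in {"P0", "P1", "P2", "P3"}:
        problems.append("priority must be P0, P1, P2, or P3")
    # Every scenario needs a complete given/when/then triple.
    for i, scenario in enumerate(spec.get("test_scenarios", []), start=1):
        missing = REQUIRED_SCENARIO_KEYS - scenario.keys()
        if missing:
            problems.append(f"scenario {i} missing: {sorted(missing)}")
    if not spec.get("test_scenarios"):
        problems.append("at least one test scenario is required")
    return problems

spec = {
    "ticket_id": "ISSUE-482",
    "priority": "P1",
    "test_scenarios": [
        {"name": "Happy path", "given": "logged-in user",
         "when": "submits form", "then": "record saved"},
    ],
}
print(validate_spec(spec))  # an empty list means the spec is well-formed
```

Running this kind of check before creating the QA task catches incomplete templates (missing `then` clauses, unfilled ticket numbers) early, when they are cheap to fix.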
# QA Task: Test Implementation for Issue #{ticket}
## Issue Summary
**Customer Report:** {brief description}
**Validation Status:** Reproduced via Playwright
**Root Cause:** {identified cause from investigation}
## Test Implementation Request
### Priority: {P0|P1|P2|P3}
### Due Date: {sprint deadline}
### Test Scenarios to Implement
1. **Regression Test for Original Issue**
   - Must reproduce the exact customer scenario
   - Should FAIL with current code
   - Should PASS after Dev fix applied
2. **Related Test Coverage**
   - Test variations of the issue
   - Test error boundaries
   - Test data edge cases
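The "fails before the fix, passes after" contract can be encoded directly in the test runner so the suite stays green until the fix lands. The project's actual suite is C#/xUnit, but the same idea in Python's stdlib `unittest` looks like this (`checkout_total` is a hypothetical stand-in for the buggy code path, not from the source):

```python
import unittest

def checkout_total(items):
    # Hypothetical buggy production code for an ISSUE-482 report that a
    # discount is applied twice. Dev removes the extra factor in the fix.
    total = sum(items)
    return total * 0.9 * 0.9  # bug: 10% discount applied twice

class Issue482RegressionTest(unittest.TestCase):
    @unittest.expectedFailure  # drop this decorator once the fix is merged
    def test_discount_applied_once(self):
        self.assertEqual(checkout_total([100]), 90.0)

# Run the case; the known failure is recorded but does not break the build.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(Issue482RegressionTest)
)
print(len(result.expectedFailures))
```

In xUnit the closest analogues are `[Fact(Skip = "ISSUE-482")]` or a pytest-style strict `xfail` in other stacks; whichever marker is used, the regression test exists from day one and flips to a hard assertion as soon as the fix is applied.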
### Test Location Recommendation
- Suggested file: `Integration.Tests/CustomerIssues/Issue{number}Tests.cs`
- Test collection: `{appropriate collection}`
- Use existing fixtures where possible
### Evidence to Include in Tests
- Capture screenshots at failure points
- Log console errors
- Record response times
- Validate data state
### Acceptance Criteria
- [ ] Test reproduces customer issue (fails before fix)
- [ ] Test passes after Dev implements fix
- [ ] No flaky test behaviors
- [ ] Clear test documentation
- [ ] Follows project test conventions
## Handoff Notes
- Validation evidence attached: {screenshots/logs}
- Related issues: {linked tickets}
- Contact Support team for clarification if needed
```yaml
qa_task_metadata:
  assigned_to: "QA Agent"
  created_by: "Support Agent"
  created_date: "{current_date}"

  dependencies:
    - "Dev task for fix implementation"
    - "Environment setup if needed"

  deliverables:
    - "Failing E2E integration test"
    - "Test documentation"
    - "Coverage report"

  success_metrics:
    - "Test catches the reported issue"
    - "Test is maintainable and clear"
    - "No false positives"
```