From prism-devtools
Use to create failing tests before implementing features (TDD approach). Ensures a test-first development workflow.
<!-- Powered by PRISM™ Core -->
Document a reproducible failing test case that demonstrates the customer issue, providing Dev and QA teams with exact steps and assertions to verify the bug and confirm the fix.
Support agents validate issues but CANNOT write test code. This task instead produces a detailed test specification, with exact reproduction steps, that Dev and QA teams can act on directly.
### 1. Validation Data

```yaml
validation_data:
  issue_id: "{ticket_number}"
  reproduction_evidence:
    - Playwright validation results
    - Screenshots showing the issue
    - Console errors captured
    - Network failures logged
    - Exact user steps taken
  environment:
    - Browser and version
    - User role/permissions
    - Required data state
    - Configuration settings
```
### 2. Test Setup

```yaml
test_setup:
  data_requirements:
    - User account type needed
    - Specific data records required
    - System configuration state
    - External service dependencies
  initial_state:
    - URL to start from
    - Login credentials needed
    - Features enabled/disabled
    - Database seed data
```
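The initial state above can often be encoded directly in test configuration rather than repeated in every test. A minimal sketch, assuming Playwright is the automation tool; the `{specific URL}` placeholder and the `auth/{role}.json` session path are illustrative, not part of the original spec:

```javascript
// Sketch (assumes @playwright/test): encode the initial state as project
// configuration — the start URL plus a pre-captured login session, so every
// test begins in the required authenticated state.
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  use: {
    baseURL: '{specific URL}',        // URL to start from
    storageState: 'auth/{role}.json', // login session, captured once and reused
  },
});
```

Seed data and feature flags usually cannot live in this file; they belong in a global setup step or fixture.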
### 3. Reproduction Steps

```yaml
reproduction_steps:
  - step: 1
    action: "Navigate to {specific URL}"
    expected: "Page loads successfully"
    actual: "Page loads successfully"
  - step: 2
    action: "Click on {specific element}"
    element_selector: "#button-id or .class-name"
    expected: "Modal opens"
    actual: "Modal opens"
  - step: 3
    action: "Enter '{value}' in {field}"
    element_selector: "input[name='fieldname']"
    expected: "Field accepts input"
    actual: "Field accepts input"
  - step: 4
    action: "Submit form"
    element_selector: "button[type='submit']"
    expected: "Success message appears"
    actual: "ERROR: {exact error message}"

failure_point:
  step_number: 4
  expected_behavior: "Form submits and shows success"
  actual_behavior: "Form fails with error: {message}"
  error_details:
    - Console error text
    - Network response code
    - Stack trace if available
```
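Because each step records both `expected` and `actual`, the failure point can be derived mechanically: it is the first step whose actual outcome diverges from the expected one. A minimal sketch, with the step data mirroring the YAML above (placeholders retained as-is):

```javascript
// Step data mirroring the reproduction_steps YAML above.
const reproductionSteps = [
  { step: 1, action: "Navigate to {specific URL}",  expected: "Page loads successfully", actual: "Page loads successfully" },
  { step: 2, action: "Click on {specific element}", expected: "Modal opens",             actual: "Modal opens" },
  { step: 3, action: "Enter '{value}' in {field}",  expected: "Field accepts input",     actual: "Field accepts input" },
  { step: 4, action: "Submit form",                 expected: "Success message appears", actual: "ERROR: {exact error message}" },
];

// The failure point is the first step whose actual outcome
// diverges from the expected outcome; null if the flow succeeds.
function findFailurePoint(steps) {
  return steps.find((s) => s.actual !== s.expected) ?? null;
}

const failure = findFailurePoint(reproductionSteps);
console.log(failure.step);   // 4 — the step where the flow breaks
console.log(failure.actual); // the exact observed error text
```

This keeps the `failure_point` block consistent with the step list by construction instead of maintaining it by hand.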
### 4. Test Assertions

```yaml
test_assertions:
  should_pass_but_fails:
    - description: "Form submission should succeed"
      assertion: "expect(successMessage).toBeVisible()"
      current_result: "FAILS - Error message appears instead"
    - description: "API should return 200"
      assertion: "expect(response.status).toBe(200)"
      current_result: "FAILS - Returns 500"
    - description: "Data should be saved"
      assertion: "expect(database.record).toExist()"
      current_result: "FAILS - No record created"

regression_prevention:
  - "Test must fail before the fix is applied"
  - "Test must pass after the fix is applied"
  - "Test remains in the suite permanently as a regression guard"
```
### 5. Test Document

Generate `failing-test-{issue_id}.md`:
# Failing Test: {Issue Title}
## Issue ID: {ISSUE-XXX}
## Priority: {P0|P1|P2|P3}
## Status: Reproduces Consistently
## Test Objective
Verify that {specific functionality} works correctly when {conditions}.
Currently FAILS due to {brief description of bug}.
## Preconditions
- Environment: {staging|production}
- User: {role/permissions}
- Data: {required records}
- Configuration: {settings}
## Steps to Reproduce
1. {Detailed step with selectors}
2. {Detailed step with selectors}
3. {Detailed step with selectors}
4. **FAILURE POINT:** {Where it breaks}
## Expected vs Actual
- **Expected:** {What should happen}
- **Actual:** {What actually happens}
- **Error:** {Exact error message/behavior}
## Test Assertions
```javascript
// Pseudocode for QA implementation
test('Customer Issue {ID}: {Title}', async () => {
  // Setup
  {setup_steps}

  // Action
  {action_steps}

  // This assertion currently FAILS; after the fix it should PASS
  expect({element}).{assertion};
});
```
### 6. Handoff Package
```yaml
deliverables:
  for_dev_team:
    - Exact reproduction steps
    - Failing assertions to verify
    - Environment requirements
    - Expected behavior after fix
  for_qa_team:
    - Test specification document
    - Selectors and assertions
    - Test data requirements
    - Regression suite placement

tracking:
  - Link test to issue ticket
  - Add to test coverage matrix
  - Update test documentation
```
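For the tracking step, linking the test to its ticket is easiest when the deliverable names are derived from the issue record itself rather than typed by hand. A sketch with an assumed issue shape (the `id`, `title`, and `priority` fields are illustrative):

```javascript
// Assumed issue record shape, for illustration only.
const issue = { id: "ISSUE-123", title: "Form submission fails", priority: "P1" };

// Derive the spec filename used by the "Generate failing-test-{issue_id}.md" step.
function specFilename(issue) {
  return `failing-test-${issue.id.toLowerCase()}.md`;
}

// Derive the test title used in the QA pseudocode template.
function testTitle(issue) {
  return `Customer Issue ${issue.id}: ${issue.title}`;
}

console.log(specFilename(issue)); // "failing-test-issue-123.md"
console.log(testTitle(issue));    // "Customer Issue ISSUE-123: Form submission fails"
```

Deriving both names from the same record keeps the test, the spec document, and the ticket trivially cross-referenced.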