Systematic exploratory QA testing of web applications — find bugs, capture evidence, and generate structured reports
npx claudepluginhub rnben/hermes-skills --plugin qa-testing-skills

This skill uses the workspace's default tool permissions.
This skill guides you through systematic exploratory QA testing of web applications using the browser toolset. You will navigate the application, interact with elements, capture evidence of issues, and produce a structured bug report.
Available tools: browser_navigate, browser_snapshot, browser_click, browser_type, browser_vision, browser_console, browser_scroll, browser_back, browser_press.

The user provides an output directory for evidence and the report (default: ./dogfood-output).

Follow this 5-phase systematic workflow:
{output_dir}/
├── screenshots/ # Evidence screenshots
└── report.md # Final report (generated in Phase 5)
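The layout above can be created up front before testing begins; a minimal sketch in Python, assuming the skill's default ./dogfood-output directory:

```python
# Create the evidence directory layout before Phase 1 starts.
import os

output_dir = "./dogfood-output"  # the skill's default output directory
os.makedirs(os.path.join(output_dir, "screenshots"), exist_ok=True)
# report.md is generated later, in Phase 5, inside output_dir.
```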
For each page or feature in your plan:
Navigate to the page:
browser_navigate(url="https://example.com/page")
Take a snapshot to understand the DOM structure:
browser_snapshot()
Check the console for JavaScript errors:
browser_console(clear=true)
Do this after every navigation and after every significant interaction. Silent JS errors are high-value findings.
Take an annotated screenshot to visually assess the page and identify interactive elements:
browser_vision(question="Describe the page layout, identify any visual issues, broken elements, or accessibility concerns", annotate=true)
The annotate=true flag overlays numbered [N] labels on interactive elements. Each [N] maps to ref @eN for subsequent browser commands.
Test interactive elements systematically:
- Click elements: browser_click(ref="@eN")
- Type into inputs: browser_type(ref="@eN", text="test input")
- Keyboard navigation: browser_press(key="Tab"), browser_press(key="Enter")
- Scroll to reveal more content: browser_scroll(direction="down")

After each interaction, check for:
- New console errors: browser_console()
- Visual changes: browser_vision(question="What changed after the interaction?")

For every issue found:
Take a screenshot showing the issue:
browser_vision(question="Capture and describe the issue visible on this page", annotate=false)
Save the screenshot_path from the response — you will reference it in the report.
Record the details:
Classify the issue using the issue taxonomy (see references/issue-taxonomy.md):
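One way to keep recorded details consistent is a small record per issue. The field names below are illustrative assumptions, not the official schema — the authoritative classification lives in references/issue-taxonomy.md:

```python
# Illustrative issue record; field names are assumptions, not the
# official schema from references/issue-taxonomy.md.
issue = {
    "title": "Login button throws TypeError on click",
    "category": "javascript-error",  # hypothetical taxonomy bucket
    "severity": "high",
    "page_url": "https://example.com/page",
    "steps_to_reproduce": [
        "browser_navigate(url='https://example.com/page')",
        "browser_click(ref='@e3')",
    ],
    "console_output": "TypeError: Cannot read properties of undefined",
    # screenshot_path comes from the browser_vision response.
    "screenshot_path": "screenshots/login-error.png",
}

# Evidence paths should point inside the screenshots/ directory.
assert issue["screenshot_path"].startswith("screenshots/")
```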
Generate the final report using the template at templates/dogfood-report-template.md.
The report must include every recorded issue with its classification and evidence screenshots (use MEDIA:<screenshot_path> for inline images).

Save the report to {output_dir}/report.md.
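As a sketch of how recorded issues might be stitched into the final report — the render_report helper and its issue fields are illustrative, while MEDIA:<screenshot_path> is the inline-image convention described above:

```python
# Sketch: render collected issue records into report.md text.
# The issue fields are illustrative; MEDIA:<screenshot_path> is the
# inline-image convention from this skill's report format.
def render_report(issues):
    lines = ["# QA Dogfood Report", ""]
    for i, issue in enumerate(issues, start=1):
        lines.append(f"## Issue {i}: {issue['title']}")
        lines.append(f"Severity: {issue['severity']}")
        lines.append(f"MEDIA:{issue['screenshot_path']}")  # inline evidence
        lines.append("")
    return "\n".join(lines)

report = render_report([{
    "title": "Console error on load",
    "severity": "medium",
    "screenshot_path": "screenshots/console-error.png",
}])
```

The resulting string would then be written to {output_dir}/report.md in Phase 5.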
| Tool | Purpose |
|---|---|
| browser_navigate | Go to a URL |
| browser_snapshot | Get DOM text snapshot (accessibility tree) |
| browser_click | Click an element by ref (@eN) or text |
| browser_type | Type into an input field |
| browser_scroll | Scroll up/down on the page |
| browser_back | Go back in browser history |
| browser_press | Press a keyboard key |
| browser_vision | Screenshot + AI analysis; use annotate=true for element labels |
| browser_console | Get JS console output and errors |
Tips:
- Run browser_console() after navigating and after significant interactions. Silent JS errors are among the most valuable findings.
- Use annotate=true with browser_vision when you need to reason about interactive element positions or when the snapshot refs are unclear.
- Reference screenshots in the report as MEDIA:<screenshot_path> so the user can see the evidence inline.