Verifies branch changes via autoresearch loop: Codex CLI code review against main, visual QA web testing on target URLs. Iterates autonomously to fix issues until resolved.
Verify implementation changes using the autoresearch loop pattern: review → fix → verify → keep/discard → repeat.
Raw arguments: $ARGUMENTS
Infer from the arguments:
If TARGET_URLS are provided, use them directly as the QA test targets; if no URLs are provided, the QA web test is skipped.
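As a rough illustration, URL inference from the raw arguments could be sketched like this; the function name and regex are assumptions for the sketch, not something the skill defines:

```python
import re

def infer_target_urls(raw_arguments: str) -> list[str]:
    """Extract http(s) URLs from the raw argument string.

    An empty result signals that the QA web test should be skipped.
    """
    return re.findall(r"https?://[^\s,]+", raw_arguments)
```

For example, `infer_target_urls("check https://localhost:3000/dashboard")` yields a one-element list, while a URL-free argument string yields `[]`.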
Invoke Skill("autoresearch") with the verification process as context — let autoresearch decide the goal, metric, guard, and loop configuration:
Verify implementation changes on the current branch.
Each iteration MUST run these skills in order:
1. Skill("codex-review", "review the uncommitted changes against main branch") — use OpenAI Codex CLI (always run)
2. Skill("qa-web-test", "{target_urls}") — visual QA web testing (only if dev server/URLs found). Pass ALL target URLs so each affected page is tested.
Fix all issues found, then loop until the verification passes clean.
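The iteration above can be sketched roughly as follows; `run_codex_review`, `run_qa_web_test`, `apply_fixes`, and the `max_iterations` cap are hypothetical stand-ins, since the real goal, guard, and loop configuration are decided by autoresearch:

```python
def verification_loop(run_codex_review, run_qa_web_test, apply_fixes,
                      target_urls, max_iterations=5):
    """Iterate review -> fix -> verify until no issues remain."""
    for iteration in range(1, max_iterations + 1):
        issues = list(run_codex_review())        # step 1: always run
        if target_urls:                          # step 2: only with URLs
            issues.extend(run_qa_web_test(target_urls))
        if not issues:
            # clean on the first pass means "passed"; clean after
            # at least one fix cycle means "fixed"
            return {"status": "passed" if iteration == 1 else "fixed",
                    "iterations": iteration}
        apply_fixes(issues)
    raise RuntimeError("unresolved issues after max iterations")
```

The keep/discard decision and the stopping guard live in autoresearch itself; the cap here only prevents a runaway loop in the sketch.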
Output a JSON summary:
{
  "status": "passed | fixed | skipped",
  "summary": "<build a report using the Summary Template below, then write a short funny poem reflecting what was reviewed and what was fixed — return the poem here as the summary>",
  "screenshots": ["path/to/screenshot1.png", "path/to/screenshot2.png"]
}
The screenshots array must contain absolute paths to all screenshots captured during QA web testing. If no screenshots were taken, return an empty array [].
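A minimal validator for this output contract might look like the following sketch; the function name and the exact checks are illustrative, not part of the skill:

```python
import os

ALLOWED_STATUSES = {"passed", "fixed", "skipped"}

def validate_summary(payload: dict) -> list[str]:
    """Return a list of problems with the output JSON; empty means valid."""
    problems = []
    if payload.get("status") not in ALLOWED_STATUSES:
        problems.append("status must be one of: passed, fixed, skipped")
    summary = payload.get("summary")
    if not isinstance(summary, str) or not summary.strip():
        problems.append("summary must be a non-empty string")
    shots = payload.get("screenshots")
    if not isinstance(shots, list):
        problems.append("screenshots must be a list (use [] when none)")
    else:
        problems += [f"not an absolute path: {p}" for p in shots
                     if not os.path.isabs(p)]
    return problems
```

A payload with a relative screenshot path, an empty summary, or a status outside the three allowed values would fail each corresponding check.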
Use this structure for the summary field. Omit sections that don't apply.
## Verification Report
**Status:** {✅ passed | 🔧 fixed | ⏭️ skipped} | **Iterations:** {n}
### Code Review
| Area | Finding | Severity | Resolution |
|------|---------|----------|------------|
| {area} | {description} | Critical/Warning/Info | Fixed/Acknowledged/N/A |
### Visual QA
| Viewport | Page/Component | Result | Notes |
|----------|---------------|--------|-------|
| {width}px | {target} | Pass/Fail/Fixed | {details} |
### Fixes Applied
- **{file_path}**: {what changed} — {why}
### Screenshots
- `{path}` — {viewport}px {page/component}
Guidelines:
- Be specific — reference file paths, line numbers, and component names.
- Group findings by area (accessibility, layout, logic, types).
- Show before/after for non-trivial fixes.