Guide users step-by-step through manually testing whatever is currently being worked on. Use when asked to "test this", "verify it works", "let's test", "manual testing", "QA this", "check if it works", or after implementing a feature that needs verification before proceeding.
```bash
npx claudepluginhub petekp/agent-skills --plugin literate-guide
```

This skill uses the workspace's default tool permissions.
Verify current work through automated testing first, falling back to user verification only when necessary.
Automate everything possible. Only ask the user to manually verify what Claude cannot verify through tools.
Examine recent work to identify what needs testing.
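Version control is usually the quickest source of truth here; a minimal sketch, assuming the work lives in a git repository:

```bash
# Uncommitted changes: which files are in play?
git status --short
git diff --stat

# If the work was just committed, what did it touch?
git show --stat HEAD
```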
For each thing to verify, determine if Claude can test it automatically:
Claude CAN verify (do these automatically):
- Builds: `npm run build`, `cargo build`, `go build`, etc.
- Test suites: `npm test`, `pytest`, `cargo test`, etc.
- Lint and type checks: `eslint`, `tsc --noEmit`, `mypy`, etc.
- HTTP endpoints: `curl`, `httpie`, or scripted requests
- File existence and contents: `ls`, `stat`, `test -f`
- Ports and processes: `lsof`, `netstat`, `curl localhost`

Claude CANNOT verify (ask user):

- Visual appearance, styling, and layout (see the button check example below)
- Anything else that requires a human to see or interact with the result
Run all automatable checks first. Be thorough:
```bash
# Example: Testing a web feature
npm run build                    # Compiles?
npm run lint                     # No lint errors?
npm test                         # Tests pass?
npm run dev &                    # Server starts?
sleep 3
curl localhost:3000/api/endpoint # API responds correctly?
```
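The file and port checks from the list above follow the same pattern; a minimal sketch, where `dist/main.js` and port 3000 are illustrative placeholders:

```bash
# Did the build emit the expected artifact?
test -f dist/main.js && echo "artifact present" || echo "artifact missing"

# Is the dev server actually listening?
lsof -i :3000

# Does it respond? (-s silent, -f fail on HTTP errors)
curl -sf localhost:3000 > /dev/null && echo "server up" || echo "server down"
```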
Report results as you go. If an automated check fails, stop and fix it before asking the user to verify anything.
For steps Claude cannot automate, present them sequentially with selectable outcomes:
```
Step N of M: [Brief description]

**Action:** [Specific instruction - what to do]
**Expected:** [What should happen if working correctly]
```
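For instance, a filled-in step for the button check used in the example below (the specifics are illustrative):

```
Step 1 of 2: Check the submit button styling

**Action:** Open the page in your browser and locate the submit button.
**Expected:** A blue button with proper spacing and readable text.
```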
Then use AskUserQuestion with predicted outcomes:
Example:
```json
{
  "questions": [{
    "question": "How does the button look?",
    "header": "Visual check",
    "options": [
      {"label": "Looks correct", "description": "Blue button, proper spacing, readable text"},
      {"label": "Wrong color/style", "description": "Button exists but styling is off"},
      {"label": "Layout broken", "description": "Elements overlapping or misaligned"},
      {"label": "Not visible", "description": "Button missing or hidden"}
    ],
    "multiSelect": false
  }]
}
```
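Each option describes a concrete, observable state rather than a bare yes/no, so the user can match what they see without interpretation, and a failure answer already narrows down where to look next.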
**Automated test fails:** Stop and fix before proceeding.

**User reports an issue:** Note it, then ask whether they want to investigate now or continue testing.
After all steps complete, summarize the results: what passed automatically, what the user confirmed, and any issues they reported.
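One possible shape for that summary (the contents are illustrative):

```
Testing complete: 5 of 6 checks passed

Automated: build, lint, tests, API response (all passed)
Manual:    button styling passed; layout broken (user reported, noted for follow-up)
```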