From developer-workflow
Verifies implemented features against specs or test plans via manual QA on live apps. Accepts Figma mockups, PRDs, acceptance criteria; generates test plans from specs if none provided.
`npx claudepluginhub kirich1409/krozov-ai-tools --plugin developer-workflow`

This skill uses the workspace's default tool permissions.
Verify that a running application matches its specification. This skill bridges implementation and review — it takes a spec source and/or a test plan, ensures the app is running, launches QA against it, and produces a verification result.
At least one of the two inputs below is required. Both together give the best results, but either one alone is enough to proceed.
The specification defines what "correct" looks like. Accept any combination of:
Read all provided spec sources. If neither a spec nor a test plan is provided, ask the user for at least one before proceeding.
The test plan defines what to check. Three modes:
- **Test plan only (no spec)** — the test plan is the single source of truth. Execute it as-is. The verification result is based entirely on whether the test cases pass or fail.
- **Test plan + spec** — accept the plan as-is, but cross-reference it against the spec. If the plan has obvious gaps (the spec mentions flows the plan doesn't cover), flag them: "The spec mentions X but the test plan doesn't cover it — should I add test cases for that?" Let the user decide.
- **Spec only (no test plan)** — generate a test plan from the spec:
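The cross-reference step in the test plan + spec mode amounts to a coverage diff. A minimal sketch of that idea — the flow names and data shapes here are illustrative assumptions, not a defined interface:

```python
# Illustrative sketch of the "test plan + spec" cross-reference:
# flag spec flows that no test case in the plan covers.

def uncovered_flows(spec_flows, plan_cases):
    """Return spec flows that no test case in the plan exercises."""
    covered = {case["flow"] for case in plan_cases}
    return [flow for flow in spec_flows if flow not in covered]

# Assumed example data: three flows in the spec, two covered by the plan.
spec_flows = ["login", "checkout", "password-reset"]
plan_cases = [
    {"flow": "login", "name": "valid credentials"},
    {"flow": "checkout", "name": "guest checkout"},
]

for flow in uncovered_flows(spec_flows, plan_cases):
    print(f"The spec mentions {flow!r} but the test plan doesn't cover it")
```

Anything this returns becomes a question for the user rather than a silent addition to the plan.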
Before launching QA, verify the app is accessible. The approach depends on what's being tested:
- **Mobile app** — confirm a device or emulator is available (`list_devices` via the mobile MCP) and install the build if needed (`installDebug`, Xcode build, etc.).
- **Web or backend app** — start the dev server if it isn't already running (`npm start`, `npm run dev`, `./gradlew bootRun`, etc.).

If the user says the app is already running or provides a URL / device target, skip the launch step and proceed directly.
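For a web target, the accessibility check reduces to a reachability probe before QA starts. A sketch, assuming a local dev-server URL (mobile targets would go through the MCP's device listing instead):

```python
# Hypothetical pre-QA reachability probe for a web target.
# The URL and timeout below are assumptions, not fixed values.
import urllib.error
import urllib.request

def app_reachable(url, timeout=5):
    """Return True if the app answers any HTTP response at `url`."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, DNS failure, timeout, ...

if __name__ == "__main__":
    url = "http://localhost:3000"  # assumed dev-server address
    if not app_reachable(url, timeout=2):
        print(f"{url} is not reachable -- launch the app before spawning QA")
```

An HTTP error status still counts as reachable here: a 404 from a running server is a QA finding, not a launch failure.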
Spawn the manual-tester agent with all gathered context. The agent prompt must include:
Example agent prompt structure:
You are testing a feature against its specification.
## Spec
[Paste or reference the spec source here]
## Test Plan
[Paste the test cases here]
## Target
[Device/URL/connection details]
## Scope
Run Smoke + Feature tiers. Report all bugs with severity and evidence.
Deliver a Test Execution Summary with a ship/no-ship recommendation when done.
Let the manual-tester agent handle the full QA cycle: environment setup, test execution, bug reporting, and summary generation. Do not interfere with its process unless it asks a question or reports a P0 blocker.
When the manual-tester agent completes, process its output into a verification result.
The result is one of three states:
| State | Meaning | Condition |
|---|---|---|
| VERIFIED | Feature matches spec | All test cases passed, no P0/P1 bugs |
| FAILED | Feature does not match spec | Any P0 or P1 bug, or critical test cases failed |
| PARTIAL | Feature partially matches spec | Only P2/P3 bugs found, or non-critical test cases failed |
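The state table above can be expressed as a small decision function. A sketch — the input flags for critical vs. non-critical test failures are assumed names:

```python
# Sketch of the verification-state decision table.
# bug_severities: e.g. ["P0", "P2"]; the two booleans flag failed test cases.

def verification_state(bug_severities, failed_critical, failed_noncritical):
    """Return VERIFIED / FAILED / PARTIAL per the state table."""
    if any(s in ("P0", "P1") for s in bug_severities) or failed_critical:
        return "FAILED"
    if bug_severities or failed_noncritical:  # only P2/P3 bugs remain here
        return "PARTIAL"
    return "VERIFIED"

print(verification_state([], False, False))      # VERIFIED
print(verification_state(["P2"], False, False))  # PARTIAL
print(verification_state(["P0"], False, False))  # FAILED
```

Note the ordering: any P0/P1 or critical failure short-circuits to FAILED before the PARTIAL check runs.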
Present a structured report:
## Feature Verification
**Status: [VERIFIED / FAILED / PARTIAL]**
**Spec source:** [what was used]
**Test plan:** [user-provided / generated from spec]
### Summary
[1-3 sentences on the overall state]
### Test Results
- Total: [n] | Passed: [n] | Failed: [n] | Blocked: [n]
### Bugs Found
[List bugs by severity — P0 first, then P1, P2, P3]
[Each with a one-line summary and link to full bug report]
### Recommendation
[Ship / Do not ship / Ship with known issues — and why]
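Filling that template from the agent's output might look like the following sketch; the field names in the results dict are illustrative assumptions, not a defined interface:

```python
# Hypothetical rendering of the verification report from a results dict.

def render_report(r):
    lines = [
        "## Feature Verification",
        f"**Status: {r['status']}**",
        f"**Spec source:** {r['spec_source']}",
        f"**Test plan:** {r['test_plan']}",
        "### Summary",
        r["summary"],
        "### Test Results",
        f"- Total: {r['total']} | Passed: {r['passed']} | "
        f"Failed: {r['failed']} | Blocked: {r['blocked']}",
        "### Bugs Found",
        # P0 sorts before P1, P2, P3 lexicographically
        *[f"- [{b['sev']}] {b['summary']}"
          for b in sorted(r["bugs"], key=lambda b: b["sev"])],
        "### Recommendation",
        r["recommendation"],
    ]
    return "\n".join(lines)

report = render_report({
    "status": "PARTIAL",
    "spec_source": "PRD v2",
    "test_plan": "generated from spec",
    "summary": "Core flows pass; two P2 layout bugs remain.",
    "total": 12, "passed": 10, "failed": 2, "blocked": 0,
    "bugs": [{"sev": "P2", "summary": "Footer misaligned on mobile"}],
    "recommendation": "Ship with known issues -- P2 bugs only.",
})
print(report)
```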
Based on the verification state, guide the user on next steps:
When the user fixes bugs and wants to re-test: