Comprehensive verification with parallel test agents. Use when verifying implementations or validating changes.
Verifies code implementations using parallel specialized agents to produce nuanced grades and improvement suggestions.
/plugin marketplace add yonatangross/orchestkit
/plugin install orkl@orchestkit

This skill is limited to using the following tools:

Supporting files:
- assets/quality-policy.yaml
- assets/verification-report.md
- checklists/verification-checklist.md
- references/alternative-comparison.md
- references/grading-rubric.md
- references/orchestration-mode.md
- references/policy-as-code.md
- references/quality-model.md
- references/report-template.md
- references/verification-checklist.md
- references/verification-phases.md
- rules/_sections.md
- rules/evidence-collection.md
- rules/scoring-rubric.md
- test-cases.json

Comprehensive verification using parallel specialized agents with nuanced grading (0-10 scale) and improvement suggestions.
/ork:verify authentication flow
/ork:verify user profile feature
/ork:verify --scope=backend database migrations
Opus 4.6: Agents use native adaptive thinking (no MCP sequential-thinking needed). Extended 128K output supports comprehensive verification reports.
BEFORE creating tasks, clarify verification scope:
AskUserQuestion(
    questions=[{
        "question": "What scope for this verification?",
        "header": "Scope",
        "options": [
            {"label": "Full verification (Recommended)", "description": "All tests + security + code quality + grades"},
            {"label": "Tests only", "description": "Run unit + integration + e2e tests"},
            {"label": "Security audit", "description": "Focus on security vulnerabilities"},
            {"label": "Code quality", "description": "Lint, types, complexity analysis"},
            {"label": "Quick check", "description": "Just run tests, skip detailed analysis"}
        ],
        "multiSelect": False
    }]
)
Based on answer, adjust workflow:
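The chosen scope can drive which verification phases actually run. A minimal sketch of that mapping, where the phase subsets are assumptions (the canonical mapping, if any, lives in the skill's references), labels match the options above:

```python
# Hypothetical mapping from the selected scope option to the phases to run.
# Phase subsets here are assumptions, not the skill's canonical configuration.
SCOPE_PHASES = {
    "Full verification (Recommended)": [
        "quality", "security", "coverage", "api", "ui",
        "grading", "suggestions", "report",
    ],
    "Tests only": ["coverage"],
    "Security audit": ["security"],
    "Code quality": ["quality"],
    "Quick check": ["coverage"],
}

def phases_for(scope: str) -> list[str]:
    """Return the phases for a chosen scope, defaulting to full verification."""
    return SCOPE_PHASES.get(scope, SCOPE_PHASES["Full verification (Recommended)"])
```

A narrower scope then simply creates fewer subtasks in the workflow below.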
See Orchestration Mode for env var check logic, Agent Teams vs Task Tool comparison, and mode selection rules.
Choose Agent Teams (mesh -- verifiers share findings) or Task tool (star -- all report to lead) based on the orchestration mode reference.
# Create main verification task
TaskCreate(
subject="Verify [feature-name] implementation",
description="Comprehensive verification with nuanced grading",
activeForm="Verifying [feature-name] implementation"
)
# Create subtasks for the 8-phase process
phases = [
    "Run code quality checks", "Execute security audit",
    "Verify test coverage", "Validate API", "Check UI/UX",
    "Calculate grades", "Generate suggestions", "Compile report",
]
active_forms = [
    "Running code quality checks", "Executing security audit",
    "Verifying test coverage", "Validating API", "Checking UI/UX",
    "Calculating grades", "Generating suggestions", "Compiling report",
]
for phase, active in zip(phases, active_forms):
    TaskCreate(subject=phase, activeForm=active)
See Verification Phases for complete phase details, agent spawn definitions, Agent Teams alternative, and team teardown.
| Phase | Activities | Output |
|---|---|---|
| 1. Context Gathering | Git diff, commit history | Changes summary |
| 2. Parallel Agent Dispatch | 6 agents evaluate | 0-10 scores |
| 3. Test Execution | Backend + frontend tests | Coverage data |
| 4. Nuanced Grading | Composite score calculation | Grade (A-F) |
| 5. Improvement Suggestions | Effort vs impact analysis | Prioritized list |
| 6. Alternative Comparison | Compare approaches (optional) | Recommendation |
| 7. Metrics Tracking | Trend analysis | Historical data |
| 8. Report Compilation | Evidence artifacts | Final report |
| Agent | Focus | Output |
|---|---|---|
| code-quality-reviewer | Lint, types, patterns | Quality 0-10 |
| security-auditor | OWASP, secrets, CVEs | Security 0-10 |
| test-generator | Coverage, test quality | Coverage 0-10 |
| backend-system-architect | API design, async | API 0-10 |
| frontend-ui-developer | React 19, Zod, a11y | UI 0-10 |
| python-performance-engineer | Latency, resources, scaling | Performance 0-10 |
Launch ALL agents in ONE message with run_in_background=True and max_turns=25.
See Scoring Rubric for composite formula, grade thresholds, verdict criteria, and blocking rules. See Quality Model for dimension weights. See Grading Rubric for per-agent scoring criteria.
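To make the composite-plus-grade step concrete, here is a minimal sketch. The dimension weights and grade thresholds below are assumptions for illustration; the authoritative values are in the Scoring Rubric and Quality Model references:

```python
# Assumed weights over the six agent dimensions (must sum to 1.0).
# The real weights live in the Quality Model reference.
WEIGHTS = {
    "quality": 0.20, "security": 0.25, "coverage": 0.20,
    "api": 0.15, "ui": 0.10, "performance": 0.10,
}

def composite(scores: dict[str, float]) -> float:
    """Weighted average of the six 0-10 dimension scores."""
    return round(sum(WEIGHTS[d] * scores[d] for d in WEIGHTS), 2)

def letter(score: float) -> str:
    """Map a 0-10 composite to a letter grade (assumed thresholds)."""
    for cutoff, grade in [(9.0, "A"), (8.0, "B"), (7.0, "C"), (6.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, uniform scores of 8.0 across all six dimensions yield a composite of 8.0 and a B grade under these assumed thresholds.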
See Evidence Collection for git commands, test execution patterns, metrics tracking, and post-verification feedback.
See Policy-as-Code for configuration.
Define verification rules in .claude/policies/verification-policy.json:
{
"thresholds": {
"composite_minimum": 6.0,
"security_minimum": 7.0,
"coverage_minimum": 70
},
"blocking_rules": [
{"dimension": "security", "below": 5.0, "action": "block"}
]
}
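A sketch of how such a policy file could be enforced. The field names mirror the JSON above; the evaluation logic itself is an assumption, not the skill's actual implementation:

```python
def evaluate(policy: dict, scores: dict[str, float], coverage_pct: float) -> list[str]:
    """Check scores against the policy; return violations (empty = gate passes)."""
    t = policy["thresholds"]
    violations = []
    comp = sum(scores.values()) / len(scores)  # simple mean for illustration
    if comp < t["composite_minimum"]:
        violations.append(f"composite {comp:.1f} < {t['composite_minimum']}")
    if scores.get("security", 10.0) < t["security_minimum"]:
        violations.append("security below minimum")
    if coverage_pct < t["coverage_minimum"]:
        violations.append("coverage below minimum")
    # Blocking rules hard-fail regardless of the composite.
    for rule in policy.get("blocking_rules", []):
        if scores.get(rule["dimension"], 10.0) < rule["below"]:
            violations.append(f"BLOCKED: {rule['dimension']} < {rule['below']}")
    return violations
```

With the example policy above, a security score of 4.0 would trip both the security minimum and the blocking rule, even if the composite clears 6.0.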
See Report Template for full format. Summary:
# Feature Verification Report
**Composite Score: [N.N]/10** (Grade: [LETTER])
## Verdict
**[READY FOR MERGE | IMPROVEMENTS RECOMMENDED | BLOCKED]**
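The verdict line can be derived mechanically from the composite and any blocking violations; a hypothetical mapping (the actual criteria are defined in the Scoring Rubric reference):

```python
def verdict(composite: float, blocked: bool) -> str:
    """Map grading results to a report verdict. Thresholds are assumptions."""
    if blocked:
        return "BLOCKED"
    return "READY FOR MERGE" if composite >= 8.0 else "IMPROVEMENTS RECOMMENDED"
```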
Related commands:
- ork:implement - Full implementation with verification
- ork:review-pr - PR-specific verification
- run-tests - Detailed test execution
- ork:quality-gates - Quality gate patterns

Version: 3.1.0 (February 2026)