verify

Install

Install the plugin:

$ npx claudepluginhub yonatangross/orchestkit --plugin ork


Description

Comprehensive verification with parallel test agents. Use when verifying implementations or validating changes.

Tool Access

This skill is limited to using the following tools:

AskUserQuestion, Bash, Read, Write, Edit, Grep, Glob, Task, TaskCreate, TaskUpdate, TaskList, TaskOutput, TaskStop, mcp__memory__search_nodes, mcp__agentation__agentation_get_all_pending, mcp__agentation__agentation_acknowledge, mcp__agentation__agentation_resolve, mcp__agentation__agentation_watch_annotations, ToolSearch, CronCreate, CronDelete
Supporting Assets
assets/gallery-template.html
assets/quality-policy.yaml
assets/verification-report.md
checklists/verification-checklist.md
references/alternative-comparison.md
references/grading-rubric.md
references/orchestration-mode.md
references/policy-as-code.md
references/quality-model.md
references/report-template.md
references/verification-checklist.md
references/verification-phases.md
references/visual-capture.md
rules/_sections.md
rules/evidence-collection.md
rules/scoring-rubric.md
test-cases.json
Skill Content

Verify Feature

Comprehensive verification using parallel specialized agents with nuanced grading (0-10 scale) and improvement suggestions.

Quick Start

/ork:verify authentication flow
/ork:verify --model=opus user profile feature
/ork:verify --scope=backend database migrations

Argument Resolution

SCOPE = "$ARGUMENTS"       # Full argument string, e.g., "authentication flow"
SCOPE_TOKEN = "$ARGUMENTS[0]"  # First token for flag detection (e.g., "--scope=backend")
# $ARGUMENTS[0], $ARGUMENTS[1] etc. for indexed access (CC 2.1.59)

# Model override detection (CC 2.1.72)
MODEL_OVERRIDE = None
for token in "$ARGUMENTS".split():
    if token.startswith("--model="):
        MODEL_OVERRIDE = token.split("=", 1)[1]  # "opus", "sonnet", "haiku"
        SCOPE = SCOPE.replace(token, "").strip()

Pass MODEL_OVERRIDE to all Agent() calls via model=MODEL_OVERRIDE when set. Accepts symbolic names (opus, sonnet, haiku) or full IDs (claude-opus-4-6) per CC 2.1.74.
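As a concrete sketch, the flag-stripping logic above could be written in plain Python, with `args` standing in for the resolved `$ARGUMENTS` string (`parse_arguments` is a hypothetical helper name):

```python
def parse_arguments(args: str):
    """Split a raw argument string into (scope, model_override).

    Mirrors the pseudocode above: any --model=<name> token is
    removed from the scope and returned separately.
    """
    model_override = None
    scope_tokens = []
    for token in args.split():
        if token.startswith("--model="):
            model_override = token.split("=", 1)[1]
        else:
            scope_tokens.append(token)
    return " ".join(scope_tokens), model_override

# parse_arguments("--model=opus user profile feature")
# → ("user profile feature", "opus")
```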

Opus 4.6: Agents use native adaptive thinking (no MCP sequential-thinking needed). Extended 128K output supports comprehensive verification reports.


STEP 0: Effort-Aware Verification Scaling (CC 2.1.76)

Scale verification depth based on /effort level:

| Effort Level | Phases Run | Agents | Output |
|---|---|---|---|
| low | Run tests only → pass/fail | 0 agents | Quick check |
| medium | Tests + code quality + security | 3 agents | Score + top issues |
| high (default) | All 8 phases + visual capture | 6-7 agents | Full report + grades |

Override: Explicit user selection (e.g., "Full verification") overrides /effort downscaling.

STEP 0a: Verify User Intent with AskUserQuestion

BEFORE creating tasks, clarify verification scope:

AskUserQuestion(
  questions=[{
    "question": "What scope for this verification?",
    "header": "Scope",
    "options": [
      {"label": "Full verification (Recommended)", "description": "All tests + security + code quality + visual + grades", "markdown": "```\nFull Verification (10 phases)\n─────────────────────────────\n  7 parallel agents:\n  ┌────────────┐ ┌────────────┐\n  │ Code       │ │ Security   │\n  │ Quality    │ │ Auditor    │\n  ├────────────┤ ├────────────┤\n  │ Test       │ │ Backend    │\n  │ Generator  │ │ Architect  │\n  ├────────────┤ ├────────────┤\n  │ Frontend   │ │ Performance│\n  │ Developer  │ │ Engineer   │\n  ├────────────┤ └────────────┘\n  │ Visual     │\n  │ Capture    │ → gallery.html\n  └────────────┘\n         ▼\n    Composite Score (0-10)\n    8 dimensions + Grade\n    + Visual Gallery\n```"},
      {"label": "Tests only", "description": "Run unit + integration + e2e tests", "markdown": "```\nTests Only\n──────────\n  npm test ──▶ Results\n  ┌─────────────────────┐\n  │ Unit tests     ✓/✗  │\n  │ Integration    ✓/✗  │\n  │ E2E            ✓/✗  │\n  │ Coverage       NN%  │\n  └─────────────────────┘\n  Skip: security, quality, UI\n  Output: Pass/fail + coverage\n```"},
      {"label": "Security audit", "description": "Focus on security vulnerabilities", "markdown": "```\nSecurity Audit\n──────────────\n  security-auditor agent:\n  ┌─────────────────────────┐\n  │ OWASP Top 10       ✓/✗ │\n  │ Dependency CVEs    ✓/✗ │\n  │ Secrets scan       ✓/✗ │\n  │ Auth flow review   ✓/✗ │\n  │ Input validation   ✓/✗ │\n  └─────────────────────────┘\n  Output: Security score 0-10\n          + vulnerability list\n```"},
      {"label": "Code quality", "description": "Lint, types, complexity analysis", "markdown": "```\nCode Quality\n────────────\n  code-quality-reviewer agent:\n  ┌─────────────────────────┐\n  │ Lint errors         N   │\n  │ Type coverage       NN% │\n  │ Cyclomatic complex  N.N │\n  │ Dead code           N   │\n  │ Pattern violations  N   │\n  └─────────────────────────┘\n  Output: Quality score 0-10\n          + refactor suggestions\n```"},
      {"label": "Quick check", "description": "Just run tests, skip detailed analysis", "markdown": "```\nQuick Check (~1 min)\n────────────────────\n  Run tests ──▶ Pass/Fail\n\n  Output:\n  ├── Test results\n  ├── Build status\n  └── Lint status\n  No agents, no grading,\n  no report generation\n```"}
    ],
    "multiSelect": true
  }]
)

Based on answer, adjust workflow:

  • Full verification: All 10 phases (8 + 2.5 + 8.5), 7 parallel agents including visual capture
  • Tests only: Skip phases 2 (security), 5 (UI/UX analysis)
  • Security audit: Focus on security-auditor agent
  • Code quality: Focus on code-quality-reviewer agent
  • Quick check: Run tests only, skip grading and suggestions
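The answer-to-workflow mapping above can be sketched as a lookup table. Phase identifiers follow the 8-phase workflow (2.5 and 8.5 are the visual-capture and agentation sub-phases); the exact phase subsets for the two focused audits are illustrative:

```python
# All phases in order; 2.5 and 8.5 are sub-phases.
ALL_PHASES = [1, 2, 2.5, 3, 4, 5, 6, 7, 8, 8.5]

SCOPE_PHASES = {
    "Full verification": ALL_PHASES,
    "Tests only": [p for p in ALL_PHASES if p not in (2, 5)],
    "Security audit": [1, 2, 3, 8],   # focus: security-auditor (illustrative subset)
    "Code quality": [1, 2, 3, 8],     # focus: code-quality-reviewer (illustrative subset)
    "Quick check": [3],               # tests only, no grading
}

def phases_for(choices):
    """Union of phases across a multiSelect answer."""
    selected = set()
    for choice in choices:
        selected.update(SCOPE_PHASES[choice])
    return sorted(selected)
```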

STEP 0b: Select Orchestration Mode

Load details: Read("${CLAUDE_SKILL_DIR}/references/orchestration-mode.md") for env var check logic, Agent Teams vs Task Tool comparison, and mode selection rules.

Choose Agent Teams (mesh -- verifiers share findings) or Task tool (star -- all report to lead) based on the orchestration mode reference.


MCP Probe + Resume

ToolSearch(query="select:mcp__memory__search_nodes")
Write(".claude/chain/capabilities.json", { memory, timestamp })

Read(".claude/chain/state.json")  # resume if exists

Handoff File

After verification completes, write results:

Write(".claude/chain/verify-results.json", JSON.stringify({
  "phase": "verify", "skill": "verify",
  "timestamp": now(), "status": "completed",
  "outputs": {
    "tests_passed": N, "tests_failed": N,
    "coverage": "87%", "security_scan": "clean"
  }
}))

Regression Monitor (CC 2.1.71)

Optionally schedule post-verification monitoring:

# Guard: Skip cron in headless/CI (CLAUDE_CODE_DISABLE_CRON)
# if env CLAUDE_CODE_DISABLE_CRON is set, run a single check instead
CronCreate(
  schedule="0 8 * * *",
  prompt="Daily regression check: npm test.
    If 7 consecutive passes → CronDelete.
    If failures → alert with details."
)
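The headless guard described in the comment might look like the following sketch. `create_cron` and `run_once` are injected stand-ins for the actual `CronCreate` tool call and a one-shot `npm test` run, so nothing here claims the real tool signature:

```python
import os

def schedule_regression_monitor(create_cron, run_once):
    """Schedule the daily check unless cron is disabled for this session."""
    if os.environ.get("CLAUDE_CODE_DISABLE_CRON"):
        # Headless/CI: run a single regression check instead of scheduling.
        return run_once()
    return create_cron(schedule="0 8 * * *")
```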

Task Management (CC 2.1.16)

# Create main verification task
TaskCreate(
  subject="Verify [feature-name] implementation",
  description="Comprehensive verification with nuanced grading",
  activeForm="Verifying [feature-name] implementation"
)

# Create subtasks for 8-phase process
phases = [("Run code quality checks", "Running code quality checks"),
          ("Execute security audit", "Executing security audit"),
          ("Verify test coverage", "Verifying test coverage"),
          ("Validate API", "Validating API"),
          ("Check UI/UX", "Checking UI/UX"),
          ("Calculate grades", "Calculating grades"),
          ("Generate suggestions", "Generating suggestions"),
          ("Compile report", "Compiling report")]
for subject, active_form in phases:
    TaskCreate(subject=subject, activeForm=active_form)

8-Phase Workflow

Load details: Read("${CLAUDE_SKILL_DIR}/references/verification-phases.md") for complete phase details, agent spawn definitions, Agent Teams alternative, and team teardown.

| Phase | Activities | Output |
|---|---|---|
| 1. Context Gathering | Git diff, commit history | Changes summary |
| 2. Parallel Agent Dispatch | 6 agents evaluate | 0-10 scores |
| 2.5 Visual Capture | Screenshot routes, AI vision eval | Gallery + visual score |
| 3. Test Execution | Backend + frontend tests | Coverage data |
| 4. Nuanced Grading | Composite score calculation | Grade (A-F) |
| 5. Improvement Suggestions | Effort vs impact analysis | Prioritized list |
| 6. Alternative Comparison | Compare approaches (optional) | Recommendation |
| 7. Metrics Tracking | Trend analysis | Historical data |
| 8. Report Compilation | Evidence artifacts + gallery.html | Final report |
| 8.5 Agentation Loop | User annotates, ui-feedback fixes | Before/after diffs |

Phase 2 Agents (Quick Reference)

| Agent | Focus | Output |
|---|---|---|
| code-quality-reviewer | Lint, types, patterns | Quality 0-10 |
| security-auditor | OWASP, secrets, CVEs | Security 0-10 |
| test-generator | Coverage, test quality | Coverage 0-10 |
| backend-system-architect | API design, async | API 0-10 |
| frontend-ui-developer | React 19, Zod, a11y | UI 0-10 |
| python-performance-engineer | Latency, resources, scaling | Performance 0-10 |

Launch ALL agents in ONE message with run_in_background=True and max_turns=25.

Progressive Output (CC 2.1.76)

Output each agent's score as soon as it completes — don't wait for all 6-7 agents:

Security:     8.2/10 — No critical vulnerabilities found
Code Quality: 7.5/10 — 3 complexity hotspots identified
[...remaining agents still running...]

This gives users real-time visibility into multi-agent verification. If any dimension scores below the security_minimum threshold (default 5.0), flag it as a blocker immediately — the user can terminate early without waiting for remaining agents.
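A minimal sketch of the per-agent streaming format, including the early-blocker flag (the 5.0 default comes from the policy section; the formatting helper name is illustrative):

```python
SECURITY_MINIMUM = 5.0  # default blocker threshold (see policy-as-code)

def report_score(dimension: str, score: float, note: str) -> str:
    """Format one agent's result the moment it lands; flag blockers."""
    line = f"{dimension + ':':<14}{score:.1f}/10 — {note}"
    if score < SECURITY_MINIMUM:
        line += "  ⚠ BLOCKER — below minimum, safe to terminate early"
    return line
```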

Phase 2.5: Visual Capture (NEW — runs in parallel with Phase 2)

Load details: Read("${CLAUDE_SKILL_DIR}/references/visual-capture.md") for auto-detection, route discovery, screenshot capture, and AI vision evaluation.

Summary: Auto-detects project framework, starts dev server, discovers routes, uses agent-browser to screenshot each route, evaluates with Claude vision, generates self-contained gallery.html with base64-embedded images.

Output: verification-output/{timestamp}/gallery.html — open in browser to see all screenshots with AI evaluations, scores, and annotation diffs.

Graceful degradation: If no frontend detected or server won't start, skips visual capture with a warning — never blocks verification.

Phase 8.5: Agentation Visual Feedback (opt-in)

Load details: Read("${CLAUDE_SKILL_DIR}/references/visual-capture.md") (Phase 8.5 section) for agentation loop workflow.

Trigger: Only when agentation MCP is configured. Offers user the choice to annotate the live UI. ui-feedback agent processes annotations, re-screenshots show before/after.


Grading & Scoring

Load on demand:

  • Read("${CLAUDE_PLUGIN_ROOT}/skills/quality-gates/references/unified-scoring-framework.md"): dimensions, weights, grade thresholds, and improvement prioritization
  • Read("${CLAUDE_SKILL_DIR}/references/quality-model.md"): verify-specific extensions (Visual dimension)
  • Read("${CLAUDE_SKILL_DIR}/references/grading-rubric.md"): per-agent scoring criteria
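The composite-score-to-grade step can be sketched as a weighted mean over dimension scores. The weights and letter thresholds below are placeholders; the real values live in unified-scoring-framework.md:

```python
# Illustrative weights — real values live in unified-scoring-framework.md.
WEIGHTS = {
    "security": 0.20, "quality": 0.15, "coverage": 0.15,
    "api": 0.10, "ui": 0.10, "performance": 0.10,
    "visual": 0.10, "docs": 0.10,
}

# Illustrative grade thresholds, descending.
GRADES = [(9.0, "A"), (8.0, "B"), (7.0, "C"), (6.0, "D")]

def composite(scores: dict) -> float:
    """Weighted mean over the dimensions actually present."""
    total_w = sum(WEIGHTS[d] for d in scores)
    return sum(WEIGHTS[d] * s for d, s in scores.items()) / total_w

def grade(score: float) -> str:
    for threshold, letter in GRADES:
        if score >= threshold:
            return letter
    return "F"
```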


Evidence & Test Execution

Load details: Read("${CLAUDE_SKILL_DIR}/rules/evidence-collection.md") for git commands, test execution patterns, metrics tracking, and post-verification feedback.


Policy-as-Code

Load details: Read("${CLAUDE_SKILL_DIR}/references/policy-as-code.md") for configuration.

Define verification rules in .claude/policies/verification-policy.json:

{
  "thresholds": {
    "composite_minimum": 6.0,
    "security_minimum": 7.0,
    "coverage_minimum": 70
  },
  "blocking_rules": [
    {"dimension": "security", "below": 5.0, "action": "block"}
  ]
}
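A sketch of how these rules might be evaluated against agent scores. Field names are taken from the example policy above; `check_policy` is a hypothetical helper and the unweighted composite here is a simplification:

```python
def check_policy(policy: dict, scores: dict, coverage: float) -> list:
    """Return a list of violation strings; empty means the gate passes."""
    t = policy["thresholds"]
    violations = []
    comp = sum(scores.values()) / len(scores)  # simplified: unweighted mean
    if comp < t["composite_minimum"]:
        violations.append(f"composite {comp:.1f} < {t['composite_minimum']}")
    if scores.get("security", 10) < t["security_minimum"]:
        violations.append("security below minimum")
    if coverage < t["coverage_minimum"]:
        violations.append(f"coverage {coverage}% < {t['coverage_minimum']}%")
    for rule in policy.get("blocking_rules", []):
        if scores.get(rule["dimension"], 10) < rule["below"]:
            violations.append(f"BLOCKED: {rule['dimension']} < {rule['below']}")
    return violations
```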

Report Format

Load details: Read("${CLAUDE_SKILL_DIR}/references/report-template.md") for full format. Summary:

# Feature Verification Report

**Composite Score: [N.N]/10** (Grade: [LETTER])

## Verdict
**[READY FOR MERGE | IMPROVEMENTS RECOMMENDED | BLOCKED]**

References

Load on demand with Read("${CLAUDE_SKILL_DIR}/references/<file>"):

| File | Content |
|---|---|
| verification-phases.md | 8-phase workflow, agent spawn definitions, Agent Teams mode |
| visual-capture.md | Phase 2.5 + 8.5: screenshot capture, AI vision, gallery generation, agentation loop |
| quality-model.md | Scoring dimensions and weights (8 unified) |
| grading-rubric.md | Per-agent scoring criteria |
| report-template.md | Full report format with visual evidence section |
| alternative-comparison.md | Approach comparison template |
| orchestration-mode.md | Agent Teams vs Task Tool |
| policy-as-code.md | Verification policy configuration |
| verification-checklist.md | Pre-flight checklist |

Rules

Load on demand with Read("${CLAUDE_SKILL_DIR}/rules/<file>"):

| File | Content |
|---|---|
| scoring-rubric.md | Composite scoring, grades, verdicts |
| evidence-collection.md | Evidence gathering and test patterns |

Related Skills

  • ork:implement - Full implementation with verification
  • ork:review-pr - PR-specific verification
  • testing-unit / testing-integration / testing-e2e - Test execution patterns
  • ork:quality-gates - Quality gate patterns
  • browser-tools - Browser automation for visual capture

Version: 4.2.0 (March 2026) — Added progressive output for incremental agent scores
