Run comprehensive verification with multiple agents (reviewer, tester, UX, coherence)
```bash
npx claudepluginhub niekcandaele/claude-helpers
```

Run comprehensive verification before considering changes complete. This command launches multiple specialized agents in parallel to review code quality, run tests, and validate user experience.
Optional arguments: $ARGUMENTS
Scope Control:
- `--scope=staged`: Verify only staged changes
- `--scope=unstaged`: Verify only unstaged modified files
- `--scope=branch`: Verify all changes in current branch vs base
- `--scope=all`: Verify entire codebase (comprehensive audit)
- `--files="file1,file2"`: Verify specific files only
- `--module=path`: Verify specific module/directory

Other Options:
- `--skip-ux`: Skip UX review for pure backend changes
- `--skip-security`: Skip security review (not recommended)

Before verification, determine what files/changes to focus on:
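A minimal sketch of parsing these flags out of `$ARGUMENTS` (a hypothetical helper; the variable names are illustrative, not part of the command spec):

```shell
# Hypothetical flag parser for the optional $ARGUMENTS string.
parse_verify_args() {
  SCOPE="" FILES="" MODULE="" SKIP_UX=0 SKIP_SECURITY=0
  for arg in "$@"; do
    case $arg in
      --scope=*)       SCOPE=${arg#--scope=} ;;
      --files=*)       FILES=${arg#--files=} ;;
      --module=*)      MODULE=${arg#--module=} ;;
      --skip-ux)       SKIP_UX=1 ;;
      --skip-security) SKIP_SECURITY=1 ;;
    esac
  done
}

# Example: parse_verify_args $ARGUMENTS
parse_verify_args --scope=staged --skip-ux
```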
1. Parse User-Specified Scope (if provided):
Check `$ARGUMENTS` for `--scope=`, `--files=`, or `--module=` flags.

2. Auto-Detect Scope (default behavior):
Priority order: staged changes first, then unstaged modified files, then all uncommitted changes vs HEAD.
Git Commands for Scope Detection:
```bash
# Detect staged files
STAGED=$(git diff --cached --name-only)

# Detect unstaged modified files
UNSTAGED=$(git diff --name-only)

# Get all uncommitted changes
ALL_CHANGES=$(git diff HEAD --name-only)

# Get branch changes (for --scope=branch)
BASE_BRANCH=$(git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's@^refs/remotes/origin/@@')
BASE_BRANCH=${BASE_BRANCH:-main}   # fall back to "main" if origin/HEAD is unset
MERGE_BASE=$(git merge-base HEAD origin/$BASE_BRANCH 2>/dev/null || git merge-base HEAD $BASE_BRANCH 2>/dev/null || echo "HEAD")
BRANCH_FILES=$(git diff --name-only $MERGE_BASE..HEAD)

# Get changed line ranges per file (for detailed scope)
git diff --unified=0 HEAD | grep -E '^\+\+\+ b/|^@@' > /tmp/changes.txt
# Parse to extract: file.ts (lines 45-67, 89-102)
```
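The "parse to extract" step is left as a comment above; a minimal sketch with awk, assuming `git diff --unified=0` hunk headers as input (the heredoc below is stand-in sample data, not real repository output):

```shell
# Hypothetical helper: turn unified-diff headers into the
# "file (lines a-b, c-d)" scope format used below.
parse_scope() {
  awk '
    /^[+][+][+] b\// { if (file != "") print file " (lines" ranges ")"; file = substr($0, 7); ranges = "" }
    /^@@/ {
      # Hunk header looks like: @@ -44,0 +45,23 @@  ->  new-side start,count
      split($3, plus, ",")
      start = substr(plus[1], 2)               # strip the leading "+"
      count = (plus[2] == "" ? 1 : plus[2])    # a missing count defaults to 1
      end = start + count - 1
      ranges = ranges (ranges == "" ? " " : ", ") start "-" end
    }
    END { if (file != "") print file " (lines" ranges ")" }
  '
}

parse_scope <<'EOF'
+++ b/src/auth/login.ts
@@ -44,0 +45,23 @@
@@ -88,0 +89,14 @@
+++ b/src/auth/middleware.ts
@@ -11,0 +12,23 @@
EOF
# Prints:
#   src/auth/login.ts (lines 45-67, 89-102)
#   src/auth/middleware.ts (lines 12-34)
```

Each `@@ -a,b +c,d @@` header contributes the new-side range `c` through `c+d-1`.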
3. Build Scope Context:
Create a list of files in scope with status:
- file.ts (modified, lines 45-67, 89-102)
- new-file.ts (added, entire file)
- old-file.ts (deleted)

4. Format Scope for Agents:
Pass scope to each agent as:
VERIFICATION SCOPE:
Files in scope:
- src/auth/login.ts (modified, lines 45-67, 89-102)
- src/auth/middleware.ts (modified, lines 12-34)
- tests/auth/login.test.ts (added, entire file)
CRITICAL SCOPE CONSTRAINTS:
- ONLY flag issues in code that was ADDED or MODIFIED in these files/lines
- DO NOT flag issues in surrounding context or old code unless it blocks the new changes
- DO NOT flag issues in other files not listed above
- Focus exclusively on the quality of the NEW or CHANGED code
Exception: You MAY flag issues in old code IF:
1. The new changes directly interact with or depend on that old code
2. The old code issue is causing the new code to be incorrect
3. The old code issue creates a blocker for the new functionality
Git commands to see your scoped changes:
git diff HEAD -- <scoped-files>
git diff --cached -- <scoped-files>
Launch the following agents:
- cata-reviewer: Code review for design adherence, over-engineering, AI slop
- cata-tester: Execute test suite and report failures
- cata-ux-reviewer: Test user-facing changes (unless --skip-ux or clearly backend-only)
- cata-coherence: Check if changes fit in the codebase (reinvented wheels, pattern violations, stale docs/AI tooling)
- cata-architect: Architectural health analysis (module boundaries, dependency direction, abstraction gaps, god objects)
- cata-security: Security vulnerability detection (unless --skip-security)
- cata-coderabbit: Mandatory CodeRabbit CLI automated analysis (cannot be skipped)

This is the most important principle of this command.
The AI has a tendency to soften or hide issues. This is UNACCEPTABLE. The report must be brutally honest.
All agents use a numeric severity scale instead of categorical labels:
| Range | Impact | Examples |
|---|---|---|
| 9-10 | Critical | Data loss, security vulnerability, cannot function |
| 7-8 | High | Major functionality broken, significant problems |
| 5-6 | Moderate | Clear issues, workarounds exist |
| 3-4 | Low | Minor issues, slight inconvenience |
| 1-2 | Trivial | Polish, cosmetic, optional improvements |
Important: Severity reflects "how big is this issue?" - NOT "must you fix it?" The human decides what to act on.
Launch all agents in a single message using the Task tool. CRITICAL: Include scope information in each agent prompt.
Template for Agent Prompts:
Each agent prompt MUST include the scope context at the beginning:
VERIFICATION SCOPE:
[Insert determined scope here - list of files with line ranges]
CRITICAL SCOPE CONSTRAINTS:
- ONLY flag issues in code that was ADDED or MODIFIED in the scoped files/lines
- DO NOT flag issues in surrounding context or old code unless it blocks the new changes
- DO NOT flag issues in other files not listed in scope
- Focus exclusively on the quality of the NEW or CHANGED code
Exception: You MAY flag issues in old code IF:
1. The new changes directly interact with or depend on that old code
2. The old code issue is causing the new code to be incorrect
3. The old code issue creates a blocker for the new functionality
[Agent-specific instructions below...]
Agent Invocation Examples:
# Agent 1: Code Review
Task tool with:
- subagent_type: "cata-reviewer"
- description: "Review code changes in scope"
- prompt: "VERIFICATION SCOPE:
Files in scope:
[Insert scope list here - e.g.:]
- src/auth/login.ts (modified, lines 45-67, 89-102)
- src/auth/middleware.ts (modified, lines 12-34)
- tests/auth/login.test.ts (added, entire file)
CRITICAL SCOPE CONSTRAINTS:
- ONLY review code in the files and line ranges listed above
- Flag issues ONLY in newly added or modified code
- Ignore issues in old code unless they block the new changes
- Do not review files outside this scope
- Focus on the quality and correctness of THIS change set
When checking design adherence, cross-cutting completeness, etc:
- Verify that changes in scope are complete (e.g., if route added, check if tests exist)
- But do NOT audit the entire codebase for unrelated issues
Exception: Flag old code issues IF they directly impact the new changes.
Use git diff to see the actual changes:
git diff HEAD -- [scoped files]
Review for: design adherence, over-engineering, AI slop, structural completeness.
OUTPUT FORMAT: For each issue found, provide:
- Title (short description)
- Severity (1-10, where 1=trivial, 10=critical)
- Location (file:line)
- Description (what the issue is and why it matters)"
# Agent 2: Test Execution
Task tool with:
- subagent_type: "cata-tester"
- description: "Run test suite"
- prompt: "VERIFICATION SCOPE AWARENESS:
The current change set modified these files:
[Insert scope list here]
Run the full test suite for this repository.
First discover the test framework:
- Check package.json for test scripts
- Check for pytest, cargo test, go test, etc.
Execute the full test suite.
Report exact pass/fail counts.
If tests cannot run, report what prevented execution.
When reporting failures, for EACH failure provide:
- Title (short description of what failed)
- Severity (1-10, where 1=trivial, 10=critical)
- Location (test file:line)
- Description (error message, expected vs actual)
- Scope annotation: IN-SCOPE or OUT-OF-SCOPE relative to changed files"
# Agent 3: UX Review (unless skipped)
Task tool with:
- subagent_type: "cata-ux-reviewer"
- description: "Review user-facing changes in scope"
- prompt: "VERIFICATION SCOPE:
Files in scope:
[Insert scope list here]
UX REVIEW CONSTRAINTS:
- ONLY test user-facing changes in the scoped files
- Do not audit the entire UI/CLI for issues
- Focus on the UX of what changed in this scope
- Ignore UX issues in unchanged parts of the application
Review user experience for the scoped changes.
Test any UI, CLI output, error messages, or API responses that were modified.
Report usability issues and friction points in THE SCOPED CHANGES ONLY.
OUTPUT FORMAT: For each issue found, provide:
- Title (short description)
- Severity (1-10, where 1=trivial, 10=critical)
- Location (page/component/command)
- Description (user impact and what you observed)"
# Agent 4: Coherence Check
Task tool with:
- subagent_type: "cata-coherence"
- description: "Check if scoped changes fit in codebase"
- prompt: "VERIFICATION SCOPE:
Files in scope:
[Insert scope list here]
COHERENCE CONSTRAINTS:
- Check if THESE specific changes follow codebase patterns
- Look for reinvented wheels in THIS change set
- Verify THIS change doesn't violate existing patterns
- Check documentation that relates to THESE changed files
- Do not audit the entire codebase for pattern violations
- Focus on: 'Do these new changes fit well?'
Research existing patterns, utilities, and conventions relevant to the scoped changes.
Look for reinvented wheels - utilities that already exist that these changes duplicate.
Check for pattern violations - different approaches than rest of codebase.
Verify AI tooling (.claude/) matches actual behavior IF the scoped changes touch AI tooling.
Check if documentation reflects the scoped code changes.
OUTPUT FORMAT: For each issue found, provide:
- Title (short description)
- Severity (1-10, where 1=trivial, 10=critical)
- Location (file:line)
- Description (what the issue is and existing pattern to follow)"
# Agent 5: Architecture Review
Task tool with:
- subagent_type: "cata-architect"
- description: "Analyze architectural health of scoped changes"
- prompt: "VERIFICATION SCOPE:
Files in scope:
[Insert scope list here]
ARCHITECTURE REVIEW CONSTRAINTS:
- Analyze the ARCHITECTURAL IMPACT of these specific changes
- Check if changes degrade structural health (module boundaries, dependency direction, layering)
- Look for abstraction opportunities these changes reveal (3+ duplications)
- Check for god object growth in changed files
- Do NOT audit the entire codebase for pre-existing architectural debt
- Focus on: 'Do these changes maintain healthy architecture?'
First research the project's architecture:
- Discover the module/directory structure and layering
- Understand dependency direction patterns
- Check file sizes of changed files
- Look for existing abstractions the changes might duplicate
Then analyze the scoped changes for:
- Module boundary violations
- Dependency direction violations
- God object growth
- Missing abstractions (logic in 3+ places)
- Separation of concerns issues
- Circular dependencies
- API surface bloat
- Tight coupling
OUTPUT FORMAT: For each issue found, provide:
- Title (short description)
- Severity (1-10, where 1=trivial, 10=critical)
- Location (file:line)
- Category (Abstraction Opportunity / Module Boundary / Dependency Direction / God Object / Separation of Concerns / Circular Dependency / API Surface / Coupling)
- Description (what the structural issue is, evidence, and trajectory impact)"
# Agent 6: Security Review (unless --skip-security)
Task tool with:
- subagent_type: "cata-security"
- description: "Security vulnerability detection in scope"
- prompt: "VERIFICATION SCOPE:
Files in scope:
[Insert scope list here]
SECURITY REVIEW CONSTRAINTS:
- ONLY flag security issues in code that was ADDED or MODIFIED
- First research how security is done in THIS codebase (auth, tenant isolation, validation)
- Flag deviations from established security patterns
- Do not audit the entire codebase for security issues
- Focus on: 'Does this new code introduce security vulnerabilities?'
Research existing security patterns:
- How authentication/authorization works
- How tenant isolation is implemented
- What input validation patterns exist
- What sanitization utilities are available
Then analyze the scoped changes for:
- Injection vulnerabilities (SQL, command, XSS)
- Authentication/authorization issues
- Multi-tenant data isolation violations
- Data exposure (secrets, sensitive data in logs/responses)
- Web security issues (cookies, CORS, CSRF)
- Cryptography issues
OUTPUT FORMAT: For each issue found, provide:
- Title (short description)
- Severity (1-10, where 1=trivial, 10=critical - multi-tenant leaks are always 9-10)
- Location (file:line)
- Category (Injection/Auth/Multi-Tenant/Data Exposure/Web Security/Crypto/Config)
- Description (what the vulnerability is and attack vector)"
# Agent 7: CodeRabbit Analysis (MANDATORY — cannot be skipped)
Task tool with:
- subagent_type: "cata-coderabbit"
- description: "Run CodeRabbit CLI automated analysis"
- prompt: "VERIFICATION SCOPE:
Files in scope:
[Insert scope list here]
CRITICAL SCOPE CONSTRAINTS:
- Run CodeRabbit CLI analysis on the scoped changes
- Report all findings with severity ratings
- If CodeRabbit is not installed or authenticated, report as SEV10
Run CodeRabbit CLI review on the changes in scope.
Check installation and authentication first — if either fails, report SEV10 immediately.
Use full output mode (NO --prompt-only flag).
Wait for completion — do NOT run in background or abort early.
OUTPUT FORMAT: For each issue found, provide:
- Title (short description)
- Severity (1-10, where 1=trivial, 10=critical)
- Location (file:line)
- Category (Bug Risk / Security / Performance / Code Quality / Style)
- Description (full CodeRabbit reasoning with context)"
Important: Replace [Insert scope list here] with the actual scope determined in step 1.
After the initial 7 agents complete, check if cata-tester OR cata-ux-reviewer reported failures. If either failed:
# Agent 8: Debug Analysis (conditional)
Task tool with:
- subagent_type: "cata-debugger"
- description: "Analyze test/UX failures in scope"
- prompt: "VERIFICATION SCOPE CONTEXT:
Files changed in this scope:
[Insert scope list here]
DEBUGGING SCOPE:
- Focus on failures that could be caused by these recent changes
- Investigate the interaction between new code and existing code
- If failures are unrelated to the scope, note that explicitly
Analyze the root cause of the failures reported by verification agents.
Review the test failures and/or UX issues found.
Use git, logs, and available tools to investigate.
Provide detailed diagnostic report without fixing anything.
Focus on identifying WHY the failures occurred, especially in relation to the scoped changes.
OUTPUT FORMAT: For each root cause identified, provide:
- Title (short description of root cause)
- Severity (1-10, where 1=trivial, 10=critical)
- Location (file:line or area)
- Description (detailed diagnosis and evidence)"
Only launch this agent if there are actual failures to analyze. Skip if all tests passed and UX review found no high-severity issues.
After the initial 7 agents complete, launch cata-exerciser to actually run and test the application.
This step verifies that the feature works when you actually use it, not just when automated tests run.
Automated tests pass, but the app can still be broken:
The exerciser catches these by actually running the app.
ALWAYS. No skip flag. Non-negotiable.
If the exerciser cannot complete (no app, can't start, etc.), that is reported as a severity 9-10 issue - not silently skipped.
Even for pure libraries or config-only changes, the exerciser should attempt to run and report what it finds.
# Agent: Manual Exercise
Task tool with:
- subagent_type: "cata-exerciser"
- description: "Exercise feature end-to-end"
- prompt: "VERIFICATION SCOPE:
Files in scope:
[Insert scope list here]
Exercise the application end-to-end:
1. Start the application (docker compose, npm run dev, etc.)
2. Navigate to the feature affected by these changes
3. Exercise the feature as a user would
4. Report whether it works
If you hit a barrier (can't start, need credentials, unclear what to test):
- Return BLOCKED status with specific reason
- I will ask the user for help if needed
OUTPUT FORMAT: For each issue found, provide:
- Title (short description)
- Severity (1-10, where 1=trivial, 10=critical)
- Location (where in the app)
- Description (what failed and what you observed)"
If cata-exerciser returns BLOCKED with reason LOGIN_REQUIRED or UNCLEAR_FEATURE:
Use AskUserQuestion to get help from the user:
Re-launch cata-exerciser with the user's response added to the prompt
If user can't help or second attempt also fails: Final status is BLOCKED
If cata-exerciser cannot complete, report it factually with severity:
Manual verification barriers are reported as high-severity issues in the Issues Found table.
Only skip cata-ux-reviewer when ALL of these are true:
When in doubt, RUN THE UX REVIEW. It's better to review unnecessarily than to miss issues.
Discover and run tests based on what exists in the repository:
```bash
# JavaScript/TypeScript: check package.json for a test script
cat package.json | jq -r '.scripts.test // empty'
# Run: npm test, yarn test, pnpm test

# Python
pytest --version && pytest
# Or: python -m pytest
# Or: python -m unittest discover

# Rust
cargo test

# Go
go test ./...
```
Look for Makefile targets, CI configuration, or README instructions.
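A hypothetical discovery helper following the checks above (the file-based heuristics are illustrative, not exhaustive; a real run should prefer what CI or the README actually uses):

```shell
# Hypothetical test-framework detector. Echoes the command it would run;
# a real version would execute it and capture pass/fail counts.
detect_test_command() {
  dir=${1:-.}
  if [ -f "$dir/package.json" ] && grep -q '"test"' "$dir/package.json"; then
    echo "npm test"
  elif [ -f "$dir/pytest.ini" ] || [ -f "$dir/pyproject.toml" ]; then
    echo "python -m pytest"   # assumes pytest; fall back to unittest if absent
  elif [ -f "$dir/Cargo.toml" ]; then
    echo "cargo test"
  elif [ -f "$dir/go.mod" ]; then
    echo "go test ./..."
  elif [ -f "$dir/Makefile" ] && grep -q '^test:' "$dir/Makefile"; then
    echo "make test"
  else
    echo "no test framework detected" >&2
    return 1
  fi
}
```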
After all agents complete, generate this unified report:
# Verification Report
## Scope
**Mode:** [staged / unstaged / branch / all / files / module]
**Files Verified:**
- src/auth/login.ts (modified, lines 45-67, 89-102)
- src/auth/middleware.ts (modified, lines 12-34)
- tests/auth/login.test.ts (added, entire file)
**Files Excluded:** All other files in codebase (not in scope for this verification)
---
## Agent Results Summary
| Agent | Status | Notes |
|-------|--------|-------|
| cata-tester | X passed, Y failed | [brief note if any] |
| cata-reviewer | Completed | Found N items |
| cata-ux-reviewer | Completed / Skipped | Found N items / [reason] |
| cata-coherence | Completed | Found N items |
| cata-architect | Completed | Found N items |
| cata-security | Completed / Skipped | Found N items / [reason] |
| cata-coderabbit | Completed / FAILED | Found N items / NOT INSTALLED (SEV10) |
| cata-exerciser | PASSED / FAILED / BLOCKED | [reason if blocked] |
| cata-debugger | Ran / N/A | [if applicable] |
---
## Issues Found
[Deduplicated issues from all agents, sorted by severity descending]
Issues are assigned **VI-{n}** IDs (Verification Issue) for easy reference during discussion.
| ID | Sev | Title | Sources | Description |
|----|-----|-------|---------|-------------|
| VI-1 | 9 | [Short title] | tester, reviewer | [Combined description from all agents that flagged this] |
| VI-2 | 7 | [Short title] | reviewer, coherence | [Description with context] |
| VI-3 | 5 | [Short title] | ux | [Description] |
| VI-4 | 3 | [Short title] | coherence | [Description] |
*Severity: 9-10 Critical | 7-8 High | 5-6 Moderate | 3-4 Low | 1-2 Trivial*
*Sources column shows which agents flagged the issue. Multiple sources = higher confidence the issue is real.*
**Total: N issues from M agent findings (deduplicated)**
---
Proceeding to interactive triage for all N issues...
When combining agent findings into the final report:
1. Collect all findings from each agent in structured format (title, severity, location, description)
2. Identify duplicates - same underlying issue flagged by multiple agents
3. Merge duplicates into a single issue
4. Assign sequential IDs (VI-1, VI-2, VI-3...) to deduplicated issues
5. Sort by severity descending (most severe first)
6. Show deduplication stats - "N issues from M agent findings"
Example Deduplication:
Agent findings:
- cata-reviewer: "Missing error handling in auth.ts" (severity: 6)
- cata-tester: "Test fails - unhandled exception in auth flow" (severity: 8)
- cata-ux-reviewer: "User sees cryptic error on login failure" (severity: 7)
Merged into:
| VI-1 | 8 | Unhandled auth error | reviewer, tester, ux | Missing error handling causes test failure and cryptic user-facing error |
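As a sketch, that merge-and-sort could be implemented like this (hypothetical: findings arrive as `key|severity|agent` lines where duplicates share a key; the input format is illustrative):

```shell
# Hypothetical dedup sketch: keep the highest severity per key, merge the
# source-agent list, and sort the result by severity descending.
dedup_findings() {
  awk -F'|' '
    {
      if ($2 > sev[$1]) sev[$1] = $2
      src[$1] = (src[$1] == "" ? $3 : src[$1] ", " $3)
    }
    END { for (k in sev) print sev[k] "|" k "|" src[k] }
  ' | sort -t'|' -rn -k1,1
}

dedup_findings <<'EOF'
unhandled-auth-error|6|reviewer
unhandled-auth-error|8|tester
unhandled-auth-error|7|ux
sql-injection-risk|8|security
EOF
```

Here three agent findings collapse into one `unhandled-auth-error` issue at severity 8 with sources `reviewer, tester, ux`.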
After presenting the report, present ALL issues to the user in batches of up to 4 using AskUserQuestion. This replaces the old "STOP and wait" behavior with an interactive decision flow.
Batch up to 4 issues into each AskUserQuestion call to minimize waiting between decisions (4 is the maximum supported by the AskUserQuestion tool). If there are more than 4 issues, make multiple calls with 4 issues each. For example, if 10 issues exist, present 3 batches: 4, 4, and 2.
After the user responds to one batch, the next batch is presented automatically until all issues have been triaged.
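The batching arithmetic above can be sketched in one line (IDs are illustrative):

```shell
# Split triage IDs into groups of at most 4, the AskUserQuestion limit.
# With 10 issues this yields batches of 4, 4, and 2.
printf '%s\n' VI-1 VI-2 VI-3 VI-4 VI-5 VI-6 VI-7 VI-8 VI-9 VI-10 | xargs -n 4
# Prints:
#   VI-1 VI-2 VI-3 VI-4
#   VI-5 VI-6 VI-7 VI-8
#   VI-9 VI-10
```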
This is a critical requirement. DO NOT BE LAZY.
You MUST present EVERY SINGLE ISSUE to the user for triage, regardless of:
Violations:
Correct behavior:
The user decides what to skip, not you.
For each issue, before presenting it to the user:
Good fix proposals:
- "Wrap fetchUser() in try-catch in auth.ts:45, return 401 with clear error message"
- "Add WHERE tenant_id = $1 to query in db.ts:89 to enforce tenant isolation"
- "Replace innerHTML with textContent in render.ts:23 to prevent XSS"

Bad fix proposals (too generic):
Batch up to 4 issues per call. For each batch, use this format:
AskUserQuestion:
questions:
- header: "VI-1"
question: "{Title} — {Description with context}. Found at {file:line} by: {source agents} (severity {N})"
multiSelect: false
options:
- label: "{Fix option 1 — short name}"
description: "{Specific action with exact file and line reference}"
- label: "{Fix option 2 — short name}"
description: "{Alternative specific action with exact file and line reference}"
- label: "Explain"
description: "Get the full picture before deciding"
- label: "Skip"
description: "Accept this issue — will not fix in this change set"
# Repeat same pattern for VI-2, VI-3, VI-4 (up to 4 questions per batch)
If there are more than 4 issues, make another AskUserQuestion call for the next batch.
AskUserQuestion:
questions:
- header: "VI-1"
question: "Unhandled auth error — Missing error handling in fetchUser() causes unhandled exception and cryptic user-facing error. Found at src/auth/login.ts:45 by: reviewer, tester, ux (severity 9)"
multiSelect: false
options:
- label: "Add try-catch"
description: "Wrap fetchUser() call in try-catch at auth.ts:45, return 401 with clear error message"
- label: "Add validation"
description: "Add input validation before fetchUser() call to reject malformed credentials early"
- label: "Explain"
description: "Get the full picture before deciding"
- label: "Skip"
description: "Accept this issue — will not fix in this change set"
- header: "VI-2"
question: "SQL injection risk — User input passed directly to query without sanitization. Found at src/db/users.ts:89 by: security (severity 8)"
multiSelect: false
options:
- label: "Use parameterized query"
description: "Replace string concatenation with parameterized query at db/users.ts:89"
- label: "Explain"
description: "Get the full picture before deciding"
- label: "Skip"
description: "Accept this issue — will not fix in this change set"
- header: "VI-3"
question: "Missing test coverage — New auth flow has no unit tests. Found at src/auth/ by: reviewer (severity 5)"
multiSelect: false
options:
- label: "Add unit tests"
description: "Create tests/auth/login.test.ts with tests for success and error cases"
- label: "Explain"
description: "Get the full picture before deciding"
- label: "Skip"
description: "Accept this issue — will not fix in this change set"
- header: "VI-4"
question: "Inconsistent naming — Variable uses camelCase but codebase convention is snake_case. Found at src/utils/helpers.ts:23 by: coherence (severity 2)"
multiSelect: false
options:
- label: "Rename to snake_case"
description: "Rename userName to user_name at helpers.ts:23"
- label: "Explain"
description: "Get the full picture before deciding"
- label: "Skip"
description: "Accept this issue — will not fix in this change set"
When the user selects "Explain" for an issue, they're saying: "I don't have enough context to make this call — give me the full picture."
What to do:
AskUserQuestion (on its own, not batched with other issues) with a much richer question field that covers:
Tone: Like a thorough code review comment that gives you everything you need to evaluate the issue. Technical depth and jargon are fine — just don't be terse or assume the reader has the relevant code open in front of them.
After all issues have been triaged (or triage ended early):
Show a table summarizing all decisions:
## Triage Decisions
| ID | Sev | Title | Decision |
|----|-----|-------|----------|
| VI-1 | 9 | Unhandled auth error | FIX: Add try-catch |
| VI-2 | 7 | SQL injection risk | SKIP |
| VI-3 | 5 | Missing test coverage | FIX: Add unit test |
| VI-4 | 2 | Minor typo in comment | SKIP |
**Fixes to apply: 2 | Skipped: 2**
After showing the triage summary, if there are fixes to apply:
- EnterPlanMode to transition into planning. This restricts you to read-only operations and writing the plan file — which is exactly what you need to research and plan without accidentally modifying code.
- ExitPlanMode to present the plan for user approval. The user can approve, request changes, or cancel.

IMPORTANT — Do NOT get confused by plan mode. Plan mode is intentional here. You are deliberately entering plan mode to write a careful implementation plan. This is NOT an error or unexpected state. The verify command explicitly tells you to do this. Read files, analyze, write the plan, and call ExitPlanMode. That's the workflow.
After plan approval, execute fixes according to the approved plan:
For each fix in the planned order:
If a fix fails or produces invalid code:
After all fixes are applied:
## Fixes Applied
| File | Change |
|------|--------|
| src/auth/login.ts | Added try-catch around fetchUser() at line 45 |
| tests/auth/login.test.ts | Added unit test for auth error handling |
**Suggest:** Re-run `/verify` to confirm fixes are clean.
After showing the completion summary, STOP.
Do not re-run `/verify` automatically (this avoids infinite loops).

Determine verification scope:
Check `$ARGUMENTS` for scope flags (`--scope=`, `--files=`, `--module=`).

Check for changes:

```bash
git status
git diff --stat [scope-specific args]
```
Determine if UX review needed:
Check `$ARGUMENTS` for the `--skip-ux` flag.

Format scope for agents:
Launch agents in parallel:
Collect results:
Launch exerciser:
Launch debugger if failures:
Generate unified report:
Present report:
Interactive issue triage (if issues exist):
Plan approved fixes:
- EnterPlanMode to transition into planning
- ExitPlanMode to present the plan for user approval

Execute planned fixes:
Suggest re-running `/verify` to confirm fixes are clean.

After completing the triage + fix execution flow (or presenting the report if no issues), you MUST STOP.
If issues exist:
- EnterPlanMode to research and plan all approved fixes
- ExitPlanMode to present plan for user approval

If no issues exist:
DO NOT:
- Re-run `/verify` automatically after applying fixes (avoids infinite loops)

DO:
- Suggest re-running `/verify` to confirm fixes are clean

`/verify`: Comprehensive verification with parallel test agents. Use when verifying implementations or validating changes.