From nickcrew-claude-ctx-plugin
User-triggered multi-agent code review that spawns 3-5 parallel specialist sub-agents to analyze diffs across perspectives like correctness and security, verifies citations against source files, and synthesizes a prioritized review artifact for PRs, multi-commits, or security-sensitive code.
Install:

npx claudepluginhub nickcrew/claude-cortex

This skill uses the workspace's default tool permissions.
Team-based code review that spawns 3-5 parallel specialist sub-agents to examine a diff through different perspectives (correctness, security, performance, architecture, etc.), mechanically verifies every finding against the actual source files, and synthesizes a single prioritized review artifact.
This skill is user-triggered. It is not part of routine agent workflows; the cost and authority profile warrants explicit user invocation. Use when:

- The diff spans a branch range (e.g., `main..feature-branch`) rather than atomic commits
- `specialist-review` flagged quality concerns that need deeper scrutiny

For atomic-commit review, use the baseline `specialist-review.sh` from the agent-loops skill instead; it's 4-6x cheaper and sufficient for small diffs.
This skill requires the Claude Code team API (`TeamCreate`, `Agent`, `TaskCreate`, `SendMessage`). It cannot be invoked from Codex or Gemini sessions, or from external shells. The parallel sub-agent orchestration and cross-agent coordination depend on primitives only Claude Code provides.
Approximately $2-3 per review vs ~$0.50 for single-turn specialist-review.
The premium buys multi-perspective coverage, grounded findings (sub-agents read
source files, not just diffs), and mechanical citation verification.
Follow these phases exactly. You are the team lead — do not delegate orchestration.
Generate the diff you want reviewed, then select perspectives:
# Generate the diff
git diff main..HEAD -- src/ > /tmp/review-diff.patch
# Select perspectives (3-5 from the catalog based on file types and content signals)
python3 skills/multi-specialist-review/scripts/triage_perspectives.py /tmp/review-diff.patch
This outputs JSON with `perspectives`, `display_names`, `focus_areas`, and `changed_files`. Save the output; you'll use it to parameterize each specialist.
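For illustration, a minimal sketch of consuming the triage output. The field names come from the description above; the exact nesting shown here is an assumption, not the script's documented contract:

```python
import json

# Hypothetical triage output. Field names (perspectives, display_names,
# focus_areas, changed_files) are from the doc; the nesting is assumed.
raw = """
{
  "perspectives": ["correctness", "security"],
  "display_names": {"correctness": "Correctness", "security": "Security"},
  "focus_areas": {"correctness": "Logic errors and edge cases",
                  "security": "Injection, authz, secret handling"},
  "changed_files": ["src/api/auth.py", "src/db/session.py"]
}
"""
triage = json.loads(raw)

# One specialist per selected perspective, parameterized from the triage data.
for p in triage["perspectives"]:
    print(f"{triage['display_names'][p]}: {triage['focus_areas'][p]}")
```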
Create the team:
TeamCreate(team_name="review-YYYYMMDD-HHMMSS")
Create one task per perspective:
TaskCreate(subject="Review: Correctness", description="Review changed files through Correctness lens")
Spawn all specialists in a single message for parallel execution. For each perspective from the triage output, use the specialist prompt template:
Agent(
name="specialist-{perspective}",
subagent_type="code-reviewer",
model="sonnet",
team_name="review-YYYYMMDD-HHMMSS",
prompt=<specialist-prompt-template.md with {{PERSPECTIVE}}, {{FOCUS_AREAS}},
{{CHANGED_FILES}}, {{DIFF_CONTENT}} filled in>
)
The template is at skills/multi-specialist-review/references/specialist-prompt-template.md.
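A minimal sketch of the placeholder substitution, assuming plain `{{NAME}}` markers with no escaping. The template text below is a stand-in, not the contents of the real file:

```python
# Stand-in template; the real one lives at
# skills/multi-specialist-review/references/specialist-prompt-template.md.
template = (
    "You are the {{PERSPECTIVE}} reviewer.\n"
    "Focus areas: {{FOCUS_AREAS}}\n"
    "Changed files:\n{{CHANGED_FILES}}\n"
    "Diff:\n{{DIFF_CONTENT}}\n"
)

def fill(template: str, values: dict) -> str:
    # Simple literal replacement; assumes placeholder names never overlap.
    for key, val in values.items():
        template = template.replace("{{" + key + "}}", val)
    return template

prompt = fill(template, {
    "PERSPECTIVE": "Security",
    "FOCUS_AREAS": "Injection, authz, secret handling",
    "CHANGED_FILES": "- src/api/auth.py",
    "DIFF_CONTENT": "<unified diff goes here>",
})
print(prompt.splitlines()[0])
```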
Each specialist reads the changed source files, reviews them through its assigned lens, and writes its findings to `.agents/reviews/specialist-{perspective}-{timestamp}.json`. Assign each task to its corresponding specialist via `TaskUpdate`.
Wait for all specialists to report completion (automatic via team notifications).
Run the citation verifier on all specialist output files:
python3 skills/multi-specialist-review/scripts/verify_citations.py \
.agents/reviews/specialist-*.json > .agents/reviews/verified-findings.json
This mechanically validates every finding against the actual source files. Findings that fail verification are stripped, with the reason documented.
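A sketch of what a mechanical citation check might look like. The finding fields (`file`, `line`, `snippet`) and the pass/fail rule are assumptions for illustration, not the actual contract of `verify_citations.py`:

```python
from pathlib import Path
import tempfile

def verify_finding(finding: dict) -> bool:
    """A finding passes only if the cited file exists, the cited line is in
    range, and the quoted snippet appears on that line (semantics assumed)."""
    path = Path(finding["file"])
    if not path.is_file():
        return False
    lines = path.read_text().splitlines()
    if not 1 <= finding["line"] <= len(lines):
        return False
    return finding["snippet"] in lines[finding["line"] - 1]

# Demo against a throwaway source file.
src = Path(tempfile.mkdtemp()) / "auth.py"
src.write_text("def login(user):\n    return token_for(user)\n")
good = {"file": str(src), "line": 2, "snippet": "token_for(user)"}
bad = {"file": str(src), "line": 99, "snippet": "does_not_exist"}
print(verify_finding(good), verify_finding(bad))
```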
Read the verified findings JSON and `skills/multi-specialist-review/references/synthesis-prompt.md`. Follow the synthesis instructions, then write the final review to `.agents/reviews/review-{timestamp}.md`.
# Validate the final output (contract validator lives in agent-loops as shared infra)
python3 skills/agent-loops/scripts/validate-review-contract.py code-review \
.agents/reviews/review-{timestamp}.md
Then shut down all teammates and delete the team:
SendMessage(to="*", message={type: "shutdown_request"})
# Wait for shutdown confirmations, then:
TeamDelete()
That file is the review artifact; return its path to the user.
Located at skills/multi-specialist-review/references/specialist-prompt-template.md.
A single parameterized template used for all perspectives. Fill these placeholders
before passing to each specialist:
| Placeholder | Source |
|---|---|
| `{{PERSPECTIVE}}` | Display name from triage output (e.g., "Security") |
| `{{FOCUS_AREAS}}` | Focus description from triage output |
| `{{CHANGED_FILES}}` | Bullet list of changed file paths |
| `{{DIFF_CONTENT}}` | The unified diff |
Scripts:
- `scripts/triage_perspectives.py`: Deterministic perspective selection (Phase 0)
- `scripts/verify_citations.py`: Mechanical citation verification (Phase 2)

References:
- `references/specialist-prompt-template.md`: Parameterized prompt for each specialist
- `references/synthesis-prompt.md`: Instructions for Phase 3 synthesis

External dependency:
- `skills/agent-loops/scripts/validate-review-contract.py`: Shared review contract validator used by both this skill and the single-turn `specialist-review.sh` flow. It lives in agent-loops because it's baseline review infrastructure used across multiple flows.

This skill was extracted from agent-loops when we recognized that team-based review has a different authority profile than routine per-commit review:
- agent-loops single-turn review (via `specialist-review.sh`): autonomous, per-atomic-commit, cross-model independence via provider rotation. Runs continuously during implementation work.
- multi-specialist-review (this skill): user-triggered, PR-level or security-sensitive, within-model multi-perspective diversity, grounded findings via sub-agent source reading.

Implementer agents using agent-loops will flag in their handoff when a change warrants team review, but they do not invoke this skill themselves. The user decides and triggers it.
- Findings must pass `verify_citations.py`; specialists hallucinate file references.
- `specialist-review.sh` is 4-6x cheaper and sufficient for atomic commits.