Use when requesting or receiving code review — covers dispatching reviewer subagents, handling feedback with technical rigor, and avoiding performative agreement
From clavain. Install: `npx claudepluginhub mistakeknot/interagency-marketplace --plugin clavain`. This skill uses the workspace's default tool permissions.
Two sides: requesting reviews and receiving feedback. Core principles: review early and often; verify before implementing; technical correctness over social comfort.
Dispatch clavain:plan-reviewer subagent to catch issues before they cascade.
Mandatory: after each task in subagent-driven development; after a major feature; before merging to main. Optional: when stuck; before a refactor; after fixing a complex bug.
BASE_SHA=$(git rev-parse HEAD~1) # or origin/main
HEAD_SHA=$(git rev-parse HEAD)
Use Task tool with clavain:plan-reviewer type. Fill template at code-review-discipline/code-reviewer.md with: {WHAT_WAS_IMPLEMENTED}, {PLAN_OR_REQUIREMENTS}, {BASE_SHA}, {HEAD_SHA}, {DESCRIPTION}.
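Before dispatching, it helps to preview exactly what the reviewer will see for the SHA range. A minimal sketch using standard git commands (the `HEAD~1` base is the same assumption as above; substitute `origin/main` for a pre-merge review):

```shell
# Compute the review range and preview its contents.
BASE_SHA=$(git rev-parse HEAD~1)   # or: git rev-parse origin/main
HEAD_SHA=$(git rev-parse HEAD)
git log --oneline "$BASE_SHA..$HEAD_SHA"   # commits under review
git diff --stat "$BASE_SHA" "$HEAD_SHA"    # files touched, churn per file
```

The `--stat` output is a quick sanity check that the range covers what you think it covers before filling in {BASE_SHA} and {HEAD_SHA}.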
Act on feedback: Fix Critical immediately; fix Important before proceeding; note Minor for later; push back with reasoning if reviewer is wrong.
Integration:
1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each
Never: "You're absolutely right!" / "Great point!" / "Let me implement that now" (before verification). Instead: restate the technical requirement, ask clarifying questions, push back with technical reasoning, or just start working.
If any feedback item is unclear: STOP — do not implement anything. Ask for clarification first. Items may be related; partial understanding leads to wrong implementation.
Human partner: Trusted — implement after understanding. No performative agreement. Skip to action.
External reviewers: verify before implementing. Push back with technical reasoning if they're wrong. If feedback conflicts with your human partner's decisions, stop and discuss with them first.
# If reviewer suggests "implementing properly":
grep -r "endpoint_name" . # Check actual usage
# If unused: "This endpoint isn't called. Remove it (YAGNI)?"
Push back with technical reasoning, not defensiveness. Reference working tests/code. Signal if uncomfortable pushing back: "Strange things are afoot at the Circle K"
✅ "Fixed. [Brief description of what changed]"
✅ "Good catch - [specific issue]. Fixed in [location]."
✅ Just fix it and show in code
❌ "You're absolutely right!" / "Great point!" / any gratitude expression
✅ "You were right - I checked [X] and it does [Y]. Implementing now."
❌ Long apology or defending why you pushed back
Reply inline in the comment thread with `gh api repos/{owner}/{repo}/pulls/{pr}/comments/{id}/replies`, not as a top-level PR comment.
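A minimal sketch of such a reply. The PR number and comment ID here are hypothetical placeholders; the leading `echo` makes this a dry run — remove it to actually POST the reply:

```shell
PR=42              # hypothetical PR number
COMMENT_ID=123456  # hypothetical review-comment ID from the inline thread
REPLY_PATH="repos/{owner}/{repo}/pulls/${PR}/comments/${COMMENT_ID}/replies"
# gh expands {owner}/{repo} from the current repository.
# echo prints the command instead of running it; drop it to send for real.
echo gh api "$REPLY_PATH" -f body="Fixed in abc1234 - added the missing null check."
```

Passing `-f body=...` is what makes `gh api` issue a POST, which is the verb this replies endpoint expects.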
Never: skip review ("it's simple"), ignore Critical issues, proceed with unfixed Important issues, say "looks good" without checking, implement before verifying.
External feedback = suggestions to evaluate, not orders to follow.
See review template: code-review-discipline/code-reviewer.md