Use when requesting or receiving code reviews - covers dispatching review subagent, handling feedback with technical rigor, and avoiding performative agreement
Install:

```
/plugin marketplace add pproenca/dot-claude-old
/plugin install review@dot-claude
```
Complete workflow for requesting code reviews and handling feedback professionally.
Core principle: Technical rigor over social comfort. Verify before implementing.
Dispatch review:code-reviewer subagent to catch issues before they cascade.
Mandatory:
Optional but valuable:
1. Get git SHAs:

```bash
BASE_SHA=$(git rev-parse HEAD~1)  # or origin/main
HEAD_SHA=$(git rev-parse HEAD)
```
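`HEAD~1` is only right for a single-commit change; on a feature branch, the merge-base with the main branch is usually the base you want. A self-contained sketch (it builds a throwaway repo so the commands run as-is; in real use, run only the `BASE_SHA`/`HEAD_SHA` lines inside your actual working tree):

```shell
# Throwaway repo purely for illustration.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com && git config user.name demo
echo one > file.txt && git add file.txt && git commit -qm "task start"
echo two > file.txt && git commit -aqm "task done"

BASE_SHA=$(git rev-parse HEAD~1)   # on a feature branch, consider:
                                   #   git merge-base origin/main HEAD
HEAD_SHA=$(git rev-parse HEAD)
echo "review range: ${BASE_SHA}..${HEAD_SHA}"
```

The range `${BASE_SHA}..${HEAD_SHA}` is exactly what the reviewer diffs, so getting the base wrong silently narrows or widens the review.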
2. Dispatch code-reviewer subagent:
Use Task tool with review:code-reviewer type. See dispatch template at plugins/methodology/review/templates/code-reviewer-dispatch.md.
Placeholders:

- `{WHAT_WAS_IMPLEMENTED}` - What you just built
- `{PLAN_OR_REQUIREMENTS}` - What it should do
- `{BASE_SHA}` - Starting commit
- `{HEAD_SHA}` - Ending commit
- `{DESCRIPTION}` - Brief summary

Example dispatch:
```
Task tool (review:code-reviewer):
  description: "Review Task 2 implementation"
  prompt: |
    Review the implementation against requirements.

    ## Context
    - **What Was Implemented:** Verification and repair functions for index
    - **Requirements/Plan:** Task 2 from docs/plans/deployment-plan.md
    - **Description:** Added verifyIndex() and repairIndex() with 4 issue types

    ## Git Range
    - **Base:** a7981ec
    - **Head:** 3df7661

    First run: git diff --stat a7981ec..3df7661
    Then review against plugins/methodology/review/references/code-review-standards.md
```
3. Act on feedback:
Code review requires technical evaluation, not emotional performance.
WHEN receiving code review feedback:
1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each
NEVER: agree performatively or implement feedback without verifying it.
INSTEAD: verify each claim against the codebase, then acknowledge technically or push back with reasoning.
IF any item is unclear:
STOP - do not implement anything yet
ASK for clarification on unclear items
WHY: Items may be related. Partial understanding = wrong implementation.
Push back when:
How to push back:
BEFORE implementing:
1. Check: Technically correct for THIS codebase?
2. Check: Breaks existing functionality?
3. Check: Reason for current implementation?
4. Check: Works on all platforms/versions?
5. Check: Does reviewer understand full context?
IF suggestion seems wrong:
Push back with technical reasoning
IF conflicts with human partner's prior decisions:
Stop and discuss with human partner first
IF reviewer suggests "implementing properly":
grep codebase for actual usage
IF unused: "This endpoint isn't called. Remove it (YAGNI)?"
IF used: Then implement properly
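The usage check above can be sketched with grep. The symbol and paths here are hypothetical, and the block builds a tiny fake source tree so it runs standalone; in practice, point grep at your real source directory:

```shell
# Fake source tree purely for illustration.
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
printf 'import { repairIndex } from "./index";\nrepairIndex(db);\n' > "$tmp/src/app.ts"

# Is the symbol actually called anywhere?
if grep -rq 'repairIndex(' "$tmp/src"; then
  echo "used: implement the fix properly"
else
  echo "unused: propose removal (YAGNI)"
fi
```

A ripgrep equivalent (`rg 'repairIndex\(' src/`) is faster on large trees; either way, check call sites before investing in a "proper" implementation.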
When feedback IS correct:
✅ "Fixed. [Brief description of what changed]"
✅ "Good catch - [specific issue]. Fixed in [location]."
✅ [Just fix it and show in the code]
❌ "You're absolutely right!"
❌ "Great point!"
❌ ANY gratitude expression
Why no thanks: Actions speak. Just fix it. The code itself shows you heard the feedback.
If you pushed back and were wrong:
✅ "You were right - I checked [X] and it does [Y]. Implementing now."
✅ "Verified this and you're correct. My initial understanding was wrong because [reason]. Fixing."
❌ Long apology
❌ Defending why you pushed back
❌ Over-explaining
State the correction factually and move on.
If you can't verify the feedback:
"I can't verify this without [X]. Should I [investigate/ask/proceed]?"
Don't assume the reviewer is right or wrong - state the limitation and ask for direction.
If you feel uncomfortable pushing back on technically incorrect feedback but know it's wrong:
Signal phrase: "Strange things are afoot at the Circle K"
This signals to your human partner that you have technical concerns but are struggling to articulate pushback. They can then dig deeper.
FOR multi-item feedback:
1. Clarify anything unclear FIRST
2. Then implement in this order:
- Blocking issues (breaks, security)
- Simple fixes (typos, imports)
- Complex fixes (refactoring, logic)
3. Test each fix individually
4. Verify no regressions
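The ordering above can be sketched as a loop: one item, one test run, stop on regression. `run_tests` here is a stand-in for the project's real suite (npm test, pytest, etc.), and the item labels are hypothetical:

```shell
# Stand-in for the real test suite; replace with the project's command.
run_tests() { true; }

# Items pre-sorted: blocking, then simple, then complex.
for item in "blocking: fix auth bypass" \
            "simple: remove unused import" \
            "complex: extract validation helper"; do
  echo "applying -> $item"
  # ...make the single edit for this item here...
  if ! run_tests; then
    echo "regression after: $item" >&2
    exit 1
  fi
done
```

Stopping at the first regression keeps the failure attributable to exactly one feedback item.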
| Mistake | Fix |
|---|---|
| Skip review because "it's simple" | Review early, review often |
| Performative agreement | State requirement or just act |
| Blind implementation | Verify against codebase first |
| Batch without testing | One at a time, test each |
| Avoiding pushback | Technical correctness > comfort |
| Partial implementation | Clarify all items first |
Tool usage:
- Task tool with `review:code-reviewer` subagent type

Called by:
Pairs with: