Confidence-based code review that filters issues to an 80+ confidence threshold, eliminating false positives and noise. Reviews implementation against plan or requirements for bugs, quality issues, and project conventions. Use after completing major features, before merging to main, or after each task in multi-step workflows. Do NOT use for quick fixes, single-line changes, or when you need immediate feedback; the thorough review adds overhead best reserved for significant changes.
/plugin marketplace add jrc1883/popkit-claude
/plugin install popkit@popkit-marketplace

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- `checklists/review-checklist.md`
- `standards/review-output.md`

Review code for bugs, quality issues, and project conventions with confidence-based filtering. Only reports HIGH confidence issues to reduce noise.
Core principle: Review early, review often. Filter out false positives.
Mandatory:
Optional but valuable:
Each identified issue receives a confidence score (0-100):
| Score | Meaning | Action |
|---|---|---|
| 0 | Not a real problem | Ignore |
| 25 | Possibly valid | Ignore |
| 50 | Moderately confident | Note for reference |
| 75 | Highly confident | Report |
| 100 | Absolutely certain | Report as critical |
**Threshold: 80+.** Only issues scoring 80 or higher are reported.
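A minimal sketch of that triage in Python, assuming each finding carries a numeric confidence score; the `Issue` shape and `triage` helper are illustrative, not part of the skill itself:

```python
from dataclasses import dataclass

THRESHOLD = 80   # report only issues scoring 80 or higher
CRITICAL = 90    # 90+ issues go in the "Critical Issues" section

@dataclass
class Issue:
    title: str
    file: str
    line: int
    confidence: int  # 0-100 score assigned by the reviewer

def triage(issues: list[Issue]) -> dict[str, list[Issue]]:
    """Drop sub-threshold findings and split the rest into report sections."""
    reported = [i for i in issues if i.confidence >= THRESHOLD]
    return {
        "critical": [i for i in reported if i.confidence >= CRITICAL],
        "important": [i for i in reported if i.confidence < CRITICAL],
    }
```

Findings that survive the filter are written up in the report format below.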
## Code Review: [Feature/PR Name]
### Summary
[1-2 sentences on overall quality]
### Critical Issues (Must Fix)
_Issues with confidence 90+_
#### Issue 1: [Title]
- **File**: `path/to/file.ts:line`
- **Confidence**: 95/100
- **Category**: Bug/Correctness
- **Description**: What's wrong
- **Fix**: How to fix it
### Important Issues (Should Fix)
_Issues with confidence 80-89_
#### Issue 2: [Title]
- **File**: `path/to/file.ts:line`
- **Confidence**: 82/100
- **Category**: Conventions
- **Description**: What's wrong
- **Fix**: How to fix it
### Assessment
**Ready to merge?** Yes / No / With fixes
**Reasoning**: [1-2 sentences explaining the assessment]
### Quality Score: [X/10]
1. Get git SHAs (a sketch of how these feed the review follows this list):
   BASE_SHA=$(git rev-parse HEAD~1)   # or origin/main
   HEAD_SHA=$(git rev-parse HEAD)
2. Dispatch code-reviewer subagent:
3. Act on feedback:
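A minimal sketch of steps 1-2 in Python, assuming the code-reviewer subagent is simply handed the diff for the BASE..HEAD range plus instructions; the helper and prompt wording are illustrative, and the actual dispatch uses whatever subagent mechanism is available:

```python
import subprocess

def git(*args: str) -> str:
    """Run a git command and return its trimmed stdout."""
    out = subprocess.run(("git", *args), capture_output=True, text=True, check=True)
    return out.stdout.strip()

base_sha = git("rev-parse", "HEAD~1")          # or "origin/main"
head_sha = git("rev-parse", "HEAD")
diff = git("diff", f"{base_sha}..{head_sha}")  # the changes under review

review_prompt = (
    f"Review the changes between {base_sha} and {head_sha} against the plan.\n"
    f"Report only issues with confidence >= 80.\n\n{diff}"
)
# review_prompt is then passed to the code-reviewer subagent (step 2).
```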
Launch 3 code-reviewer agents in parallel with different focuses:
Consolidate findings and filter by confidence threshold.
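A sketch of that consolidation step, assuming each parallel reviewer returns its findings as dicts with `file`, `line`, `title`, and `confidence` keys (names are illustrative):

```python
THRESHOLD = 80

def consolidate(runs: list[list[dict]]) -> list[dict]:
    """Merge findings from parallel reviewers, dedup by location, apply threshold."""
    merged: dict[tuple, dict] = {}
    for findings in runs:
        for issue in findings:
            key = (issue["file"], issue["line"], issue["title"])
            # Keep the highest-confidence copy of a duplicated finding.
            if key not in merged or issue["confidence"] > merged[key]["confidence"]:
                merged[key] = issue
    return sorted(
        (i for i in merged.values() if i["confidence"] >= THRESHOLD),
        key=lambda i: -i["confidence"],
    )
```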
Never:
If reviewer wrong:
"Ask what you want to do" - After presenting issues, ask the user how to proceed rather than making assumptions.
This section covers how to respond to review feedback, not how to give it.
Feedback Received → Read ALL → Verify → Evaluate → Implement → Respond
Step 1: Read ALL Comments Before Implementing ANY
Items may be related. Partial understanding = wrong implementation.
Step 2: Verify Against Actual Code
Don't assume the reviewer is right. Check:
For each suggestion:
- Does this code path actually exist?
- Does the suggested fix compile/work?
- Will it break existing functionality?
- Is the edge case real or hypothetical?
Step 3: Evaluate Technical Soundness
Push back on technically questionable suggestions:
| Reviewer Says | Your Response |
|---|---|
| "This could fail if..." (hypothetical) | "Is this edge case actually reachable? Show me the path." |
| "Use pattern X instead" | "Does X fit our architecture? What's the trade-off?" |
| "This is inefficient" | "Is this a hot path? Premature optimization?" |
| "Add validation for Y" | "Is Y possible given our type system?" |
Disagree with technical reasoning, not emotion. If you're right, show evidence.
Step 4: Implement Strategically
Order matters:
One commit per logical change; this makes re-review easier.
Step 5: Respond to Each Comment
| Situation | Response |
|---|---|
| Fixed the issue | "Fixed in [commit]" or just resolve the comment |
| Disagree | Technical reasoning why, ask for their perspective |
| Need clarification | Ask specific question, don't guess |
| Won't fix | Explain why (tech debt ticket? out of scope?) |
Performative Agreement:
BAD: "Great point! You're absolutely right! I'll fix that immediately!"
GOOD: "Fixed" or "Good catch" or just the fix itself
Actions demonstrate engagement better than words.
Blind Implementation:
BAD: Immediately implementing every suggestion without verification
GOOD: Verify suggestion makes sense for YOUR codebase, then implement
Defensive Reactions:
BAD: "Well actually, I wrote it this way because..."
GOOD: "Here's why I chose this approach: [technical reason]. Does that change your recommendation?"
It happens. Handle professionally:
Don't:
Stop if you're:
Related skills: pop-test-driven-development, pop-systematic-debugging