Multi-perspective review using specialized judges with debate and consensus building. Triggers on "critique", "review", "multi-perspective review", "challenge this".
Report-only review using three parallel judges. No automatic fixes — findings are for user consideration.
Identify what to review: the original request, the files involved, and the approach taken.
Announce scope before proceeding:
Review Scope:
- Request: [summary]
- Files: [list]
- Approach: [brief description]
Starting multi-agent review...
Spawn three judge agents in parallel via the Task tool: the Requirements Validator, the Solution Architect, and the Code Quality Reviewer. Each works independently and returns its report in the format below.
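The parallel fan-out can be sketched as follows. This is an illustrative sketch only: the Task tool is Claude Code's own sub-agent mechanism, and `run_judge` here is a hypothetical stand-in for a real agent call, not part of the skill.

```python
import asyncio

# Judge names match the summary table; prompts abbreviated for illustration.
JUDGES = {
    "Requirements Validator": "Review alignment with the original requirements.",
    "Solution Architect": "Evaluate the technical approach against alternatives.",
    "Code Quality Reviewer": "Assess implementation quality and refactorings.",
}

async def run_judge(name: str, prompt: str, scope: str) -> dict:
    # Hypothetical stand-in for dispatching one sub-agent via the Task tool.
    await asyncio.sleep(0)  # placeholder for the actual agent round-trip
    return {"judge": name, "score": None, "report": f"{name} reviewing: {scope}"}

async def run_review(scope: str) -> list[dict]:
    # All three judges run concurrently; none sees another's output.
    tasks = [run_judge(name, prompt, scope) for name, prompt in JUDGES.items()]
    return await asyncio.gather(*tasks)

reports = asyncio.run(run_review("auth refactor"))
```

The point of the structure is isolation: each judge gets the same scope but no visibility into the others' findings, so the later consensus step compares genuinely independent opinions.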
**Requirements Validator**: Review alignment with the original requirements. Mark each requirement met, partial, or missed.
Output:
### Requirements Score: X/10
Coverage:
- [met]
- [partial] — [why]
- [missed] — [why]
Gaps: [item] — Severity: Critical/High/Medium/Low
Scope creep: [item] — [good or problematic?]
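One way the met / partial / missed counts could roll up into the X/10 score is a simple weighted ratio. The weights below (full credit for met, half for partial, none for missed) are an assumption for illustration; the skill does not prescribe a formula.

```python
def requirements_score(met: int, partial: int, missed: int) -> float:
    # Assumed rubric, not defined by the skill:
    # met = 1 point, partial = 0.5, missed = 0, scaled to 10.
    total = met + partial + missed
    if total == 0:
        return 10.0  # nothing to check against
    return round(10 * (met + 0.5 * partial) / total, 1)

requirements_score(4, 1, 1)  # 4 met, 1 partial, 1 missed -> 7.5
```

A judge would still adjust for severity (one missed Critical requirement can matter more than several partial Lows), so any formula like this is a starting point, not the verdict.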
**Solution Architect**: Evaluate the technical approach and design decisions against alternatives.
Output:
### Architecture Score: X/10
Approach: [description]
Strengths: [list]
Weaknesses: [list]
Alternatives considered:
1. [name] — Pros/Cons — Better/Worse/Equivalent
2. [name] — Pros/Cons — Better/Worse/Equivalent
Anti-patterns: [item] — Severity
Scalability/Maintainability: [assessment]
**Code Quality Reviewer**: Assess implementation quality and refactoring opportunities.
Output:
### Code Quality Score: X/10
Strengths: [list with examples]
Issues:
- [issue] — Severity: Critical/High/Medium/Low — [file:line]
Refactorings (prioritized):
1. [name] — Priority: High/Medium/Low — Effort: S/M/L
Before: [snippet]
After: [snippet]
Code smells: [item at location — impact]
After all three reports return, synthesize them into a single critique:
# Critique Report
## Summary
[2-3 sentences]
**Overall Score**: X/10
| Judge | Score | Key Finding |
|-------|-------|-------------|
| Requirements Validator | X/10 | [one-line] |
| Solution Architect | X/10 | [one-line] |
| Code Quality Reviewer | X/10 | [one-line] |
## Strengths
1. **[Strength]** — [evidence] — Source: [judge(s)]
## Issues (Critical / High / Medium / Low)
- **[Issue]** — [file:line] — [impact] — Recommendation: [action]
## Requirements
Met: X/Y | Coverage: Z% | [table with status per requirement]
## Architecture
Chosen: [description] | Alternatives: [why chosen wins/loses vs each]
Recommendation: [keep / switch because...]
## Refactorings
1. **[Name]** — Priority: High/Med/Low — Effort: S/M/L — Benefit: [x]
## Consensus / Debate
Agreed: [item]
Disputed: **[Topic]** — [Judge A] vs [Judge B] — Resolution: [outcome or "reasonable disagreement"]
## Action Items
Must Do:
- [ ] [Critical action]

Should Do:
- [ ] [High priority action]

Could Do:
- [ ] [Medium priority action]
## Verdict
[Ready to ship | Needs improvements | Requires rework]
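A sketch of how the overall score and verdict might be derived from the three judge scores. Both the plain mean and the threshold cutoffs are assumptions chosen for illustration, not values the skill prescribes.

```python
def verdict(scores: dict[str, float]) -> tuple[float, str]:
    # Overall score as an unweighted mean of the three judge scores;
    # cutoffs (8 and 5) are assumed for illustration.
    overall = round(sum(scores.values()) / len(scores), 1)
    if overall >= 8:
        label = "Ready to ship"
    elif overall >= 5:
        label = "Needs improvements"
    else:
        label = "Requires rework"
    return overall, label

verdict({
    "Requirements Validator": 8,
    "Solution Architect": 7,
    "Code Quality Reviewer": 6,
})  # -> (7.0, "Needs improvements")
```

In practice the synthesis step should also let a single Critical finding cap the verdict regardless of the average, since a high mean can hide one blocking issue.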