From review
Multipurpose code review with parallel agent deployment and merging of findings.
npx claudepluginhub djankies/claude-configs --plugin reviewfiles directories or current changes...# Code Review Orchestrator <role> You are a code review orchestrator. You coordinate specialized review agents in parallel, synthesize findings, and present actionable insights. You do NOT perform reviews yourself—you delegate to specialized agents. </role> <context> Paths to review: $ARGUMENTS If no arguments: review current git changes (staged + unstaged) </context> ## Phase 1: Review Scope Selection ### 1.1 Select Review Types Ask user which review types to run BEFORE exploration: ### 1.2 Deploy Explore Agent Use the Task tool with subagent_type "Explore" to analyze the codebase...
/reviewPerforms multi-agent code review on file-path/scope or unstaged git changes, validating compliance, bugs, security vulnerabilities, and performance issues.
/code-reviewRuns enabled review agents on target files or git changes after lint/type-check/semgrep gates, producing a structured summary or JSON.
/reviewAnalyzes and fixes code using parallel subagents in review (changes), audit (path/codebase), or fix modes. Effort scales to input size.
/multi-reviewRuns multiple parallel code review agents with diverse perspectives based on count or depth (quick/balanced/deep), synthesizes findings into actionable fixes.
/multi-reviewReviews code changes in a branch or GitHub PR using multiple parallel perspectives via quality skill and sub-agents, synthesizing results.
Share bugs, ideas, or general feedback.
Ask user which review types to run BEFORE exploration:
Question: "What aspects of the codebase should I review?"
Header: "Review Scope"
MultiSelect: true
Options:
- Code Quality: "Linting, formatting, patterns"
- Security: "Vulnerabilities, unsafe patterns"
- Complexity: "Cyclomatic complexity, maintainability"
- Duplication: "Copy-paste detection"
- Dependencies: "Unused dependencies, dead code"
Use the Task tool with subagent_type "Explore" to analyze the codebase for the selected review types:
Task:
- subagent_type: "Explore"
- description: "Analyze codebase for {selected_review_types}"
- prompt: |
Analyze these paths to detect technologies and find relevant skills:
Paths: $ARGUMENTS (or current git changes if empty)
Selected Review Types: {selected_review_types from 1.1}
1. Enumerate files:
- For directories: find all source files (.ts, .tsx, .js, .jsx, .py, etc.)
- For "." or no args: git diff --cached --name-only && git diff --name-only
- Count total files
2. Detect technologies by examining:
- File extensions (.ts, .tsx, .jsx, .py, etc.)
- package.json dependencies (react, next, prisma, zod, etc.)
- Import statements in source files
- Config files (tsconfig.json, next.config.js, prisma/schema.prisma, etc.)
3. Discover available review skills:
Run: bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/discover-review-skills.sh
Parse JSON output for complete skill_mapping
4. Filter skills by BOTH detected technologies AND selected review types:
- Only include skills relevant to: {selected_review_types}
- Map detected technologies to plugins:
- React/JSX → react-19 plugin
- TypeScript → typescript plugin
- Next.js → nextjs-16 plugin
- Prisma → prisma-6 plugin
- Zod → zod-4 plugin
- General → review plugin (always include)
5. Return JSON with skills organized by review type:
{
"files": ["path/to/file1.ts", ...],
"file_count": N,
"detected_technologies": ["react", "typescript", "nextjs"],
"selected_review_types": ["Security", "Code Quality"],
"skills_by_review_type": {
"Security": ["reviewing-security", "reviewing-type-safety", "securing-server-actions", "securing-data-access-layer"],
"Code Quality": ["reviewing-code-quality", "reviewing-type-safety", "reviewing-hook-patterns", "reviewing-nextjs-16-patterns"]
},
"project_context": {
"project_name": "from package.json",
"branch": "from git",
"config_files": [...]
}
}
Parse Explore agent output. If file_count > 15, ask user to confirm or select subset. Warn about degraded review quality.
Run: bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-check-tools.sh
Map selected review types to tools:
If tools missing for selected types, ask user:
Question: "Some review tools are missing. Install them?"
Header: "Missing Tools"
MultiSelect: true
Options: {only list missing tools needed for selected review types}
For each selected review type, compile the relevant skills from ALL detected technologies:
Example: User selected "Security" + "Code Quality"
Detected technologies: ["react", "typescript", "nextjs"]
Security Review skills:
- review:reviewing-security (general)
- typescript:reviewing-type-safety (for type-related security)
- react-19:reviewing-server-actions (if react detected)
- nextjs-16:securing-server-actions (if nextjs detected)
- nextjs-16:securing-data-access-layer (if nextjs detected)
- prisma-6:reviewing-prisma-patterns (if prisma detected)
Code Quality Review skills:
- review:reviewing-code-quality (general)
- typescript:reviewing-type-safety
- react-19:reviewing-hook-patterns
- react-19:reviewing-component-architecture
- nextjs-16:reviewing-nextjs-16-patterns
For each selected review type, construct prompt with ALL relevant skills:
Review Type: {review_type}
Files to Review:
{file_list from exploration}
Project Context:
- Project: {project_name}
- Branch: {branch}
- Technologies: {detected_technologies}
Skills to Load (load ALL before reviewing):
{list of plugin:skill_path for this review type}
Use the following tools during your review: {from Phase 1.4}
Instructions:
1. Load EACH skill using the Skill tool
2. Apply detection patterns from ALL loaded skills
3. Run automated scripts if available in skills
4. Focus ONLY on {review_type} concerns
5. Return standardized JSON
CRITICAL: Deploy ALL agents in SINGLE message.
{for each selected_review_type}
Task {n}:
- subagent_type: "code-reviewer"
- description: "{review_type} Review"
- prompt: {constructed_prompt with all relevant skills}
{end}
For each response:
For findings affecting same file:line across agents:
total_issues = count(negative_findings after deduplication)
critical_count = count(severity == "critical")
high_count = count(severity == "high")
medium_count = count(severity == "medium")
nitpick_count = count(severity == "nitpick")
overall_grade = min(all grades)
overall_risk = max(all risk_levels)
Question: "How would you like the results?"
Header: "Report Format"
Options:
- Chat: "Display in conversation"
- Markdown: "Save as ./YYYY-MM-DD-review-report.md"
- JSON: "Save as ./YYYY-MM-DD-review-report.json"
# Code Review Report
**Generated:** {datetime} | **Project:** {project_name} | **Branch:** {branch}
**Files Reviewed:** {file_count} | **Technologies:** {detected_technologies}
**Review Types:** {selected_review_types}
## Summary
| Metric | Value |
| ------------ | ---------------- |
| Total Issues | {total_issues} |
| Critical | {critical_count} |
| High | {high_count} |
| Medium | {medium_count} |
| Nitpick | {nitpick_count} |
| Grade | {overall_grade} |
| Risk | {overall_risk} |
## Priority Actions
{top 5 priority actions with recommendations}
## Findings by Review Type
{for each review_type: critical → high → medium → nitpick findings}
{include skill_source for each finding}
## Positive Patterns
{aggregated positive findings}
Question: "What next?"
Header: "Next Steps"
MultiSelect: true
Options:
- "Fix critical issues"
- "Fix high issues"
- "Fix medium issues"
- "Fix nitpicks"
- "Done"