Orchestrate comprehensive code review by running specialized review skills in parallel as forked subagents. Use when: (1) User explicitly requests code review, (2) Before creating a pull request, (3) After completing a feature or major implementation, (4) When user indicates they're done with changes and ready to submit/merge. This skill coordinates multiple review subagents (code quality, security, performance, testing, documentation, clarity) to provide thorough feedback.
```shell
npx claudepluginhub bennettaur/llmenv --plugin code-review-team-core
```

This skill uses the workspace's default tool permissions.
This skill coordinates parallel execution of specialized review skills to provide comprehensive code feedback. Each reviewer skill uses `context: fork` to run in an isolated subagent, analyzing the code independently and returning findings.
Each reviewer skill below has `context: fork` in its frontmatter, which means invoking it via the Skill tool automatically spawns a dedicated subagent. The subagent receives the skill's instructions as its prompt, runs its analysis in isolation (no conversation history), and returns its findings.
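For illustration, a reviewer skill's frontmatter might look like the following sketch. The `name` and `description` values are taken from the skill list in this document, but the exact field layout is an assumption; the `context: fork` line is the mechanism described above:

```yaml
---
name: security-privacy-reviewer
description: Identifies security vulnerabilities and privacy risks in changed code.
context: fork  # invoking this skill via the Skill tool spawns an isolated subagent
---
```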
Based on the context, select relevant skills to invoke:
Always run:
- superpowers:code-reviewer - (If it's available) Reviews against original plan and coding standards
- code-clarity-reviewer - Reviews code readability, comments, and beginner-friendliness
- security-privacy-reviewer - Identifies security vulnerabilities and privacy risks
- scope-drift-reviewer - Detects changes that drift from the original goal or prompt

Conditionally run based on changes:

- code-best-practices-reviewer - Run if code files changed (detects the tech stack and applies the best-practices hierarchy)
- performance-optimizer - Run if performance-sensitive code changed (database queries, loops, API calls, data processing)
- test-quality-enforcer - Run if implementation code changed (skip for docs-only or config-only changes)
- documentation-updater - Run if feature changes, API changes, or behavior modifications occurred
- dead-code-cleaner - Run if implementation code changed, especially after refactoring or feature completion
- llm-usage-security-reviewer - Run if code involves LLM API calls, prompt construction, agent frameworks, or AI/ML integration (imports from anthropic, openai, langchain, llamaindex, etc.)

**Identify changed files**
```shell
git status
git diff --name-only origin/main...HEAD
```
**Determine skill set**

Based on the changed files, select from the always-run and conditional lists above.
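This selection step can be sketched as a small POSIX shell helper. The helper and its file-pattern rules are hypothetical illustrations of the conditions above, not part of the skill itself:

```shell
# Hypothetical helper: given changed file paths, print which conditional
# reviewer skills to add. The pattern rules are illustrative assumptions.
select_optional_skills() {
  code=0 llm=0
  for f in "$@"; do
    case "$f" in
      *.md|*.txt|*.rst) ;;             # docs-only change: no code reviewers
      *) code=1 ;;                     # anything else counts as implementation code
    esac
    case "$f" in
      *prompt*|*agent*|*llm*) llm=1 ;; # crude signal for LLM-adjacent code paths
    esac
  done
  [ "$code" -eq 1 ] && echo code-best-practices-reviewer
  [ "$code" -eq 1 ] && echo test-quality-enforcer
  [ "$llm" -eq 1 ] && echo llm-usage-security-reviewer
  return 0
}

# Example: one implementation file plus a docs file
select_optional_skills src/auth.py README.md
```

In a real run the file list would come from `git diff --name-only origin/main...HEAD`, and a production version would also cover the performance, documentation, and dead-code conditions.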
**Launch skills in parallel**
Use a SINGLE message with multiple Skill tool calls to invoke all selected reviewer skills simultaneously. Each skill with `context: fork` will automatically spawn its own subagent.
Example structure:
[Single message containing:]
- Skill tool call for superpowers:code-reviewer
- Skill tool call for code-clarity-reviewer
- Skill tool call for security-privacy-reviewer
- Skill tool call for scope-drift-reviewer
- Skill tool call for code-best-practices-reviewer (if applicable)
- Skill tool call for performance-optimizer (if applicable)
- Skill tool call for test-quality-enforcer (if applicable)
- Skill tool call for documentation-updater (if applicable)
- Skill tool call for dead-code-cleaner (if applicable)
- Skill tool call for llm-usage-security-reviewer (if applicable)
**Collect and synthesize feedback**

After all subagents complete:
**Consolidated Report Output Format**
Use this structure exactly. Omit any severity section that has zero issues.
```markdown
# Code Review Summary

**Files reviewed:** <count>
**Issues found:** <total count>
**Reviewed by:** <comma-separated list of all skills that ran>

---

## Blocking (must fix before merge)

**#1 — <Short issue title>**
<Description of the issue: what's wrong, where it occurs (file:line if possible), and why it matters.>
_Found by: <skill-name>, <skill-name>_

**#2 — <Short issue title>**
...

---

## High (strongly recommended)

**#3 — <Short issue title>**
<Description of the issue.>
_Found by: <skill-name>_

---

## Medium (recommended)

**#4 — <Short issue title>**
<Description of the issue.>
_Found by: <skill-name>, <skill-name>_

---

## Low (nice to have)

**#5 — <Short issue title>**
<Description of the issue.>
_Found by: <skill-name>_
```
Severity classification guide:
**Follow-up actions**

If blocking issues found:
If only improvements suggested:
- **superpowers:code-reviewer**: Requires implementation plan context. If no plan exists, skip it or fall back to general coding standards.
- **code-clarity-reviewer**: Focus on whether the code tells a story and is accessible to team members.
- **security-privacy-reviewer**: Prioritize user data handling, authentication/authorization, input validation, and logging.
- **code-best-practices-reviewer**: Detects the tech stack and applies best practices in priority order: codebase conventions, framework patterns, language standards, then general engineering principles.
- **performance-optimizer**: Look for N+1 queries, inefficient algorithms, unnecessary re-renders, and caching opportunities.
- **test-quality-enforcer**: Verify coverage of new/changed code, edge cases, and error conditions.
- **documentation-updater**: Check README, API docs, inline docs, and migration guides for accuracy.
- **dead-code-cleaner**: Identify unused code, dead functions, orphaned tests, and cleanup opportunities in the current changes.
- **scope-drift-reviewer**: Evaluate whether all changes serve the original goal. Requires the original prompt/plan as context; pass the user's original request and any implementation plan so the subagent can assess drift. Flags changes classified as "Beneficial but Unrelated" or "Unnecessary Drift".
- **llm-usage-security-reviewer**: Focus on LLM API call sites, prompt construction, agent loop configuration, and tool definitions. Prioritize code paths where user-provided input flows into LLM prompts.
When user says "Review my authentication implementation":