From review-squad
Dispatches parallel expert reviewer subagents with personas to audit projects for pre-launch, post-refactor, inherited codebase, or health checks. Generates severity-ranked reports.
```
npx claudepluginhub 2389-research/claude-plugins --plugin review-squad
```

This skill uses the workspace's default tool permissions.
Dispatch a panel of expert reviewer subagents in parallel to audit a project. Each agent adopts a specialist persona and reviews independently. Results are consolidated into a single severity-ranked report that feeds into an implementation plan.
```dot
digraph expert_review {
  rankdir=TB;
  "User requests review" -> "Identify project type";
  "Identify project type" -> "Present default panel for that type";
  "Present default panel for that type" -> "Suggest stack-specific additions";
  "Suggest stack-specific additions" -> "Ask: add, remove, or adjust?";
  "Ask: add, remove, or adjust?" -> "Finalize panel";
  "Finalize panel" -> "Dispatch all agents in parallel (run_in_background)";
  "Dispatch all agents in parallel (run_in_background)" -> "As each completes, note key finding";
  "As each completes, note key finding" -> "All done?";
  "All done?" -> "As each completes, note key finding" [label="no"];
  "All done?" -> "Consolidate into severity-ranked table" [label="yes"];
  "Consolidate into severity-ranked table" -> "Present full report";
  "Present full report" -> "Offer to write implementation plan for fixes";
}
```
### Website / static site

| # | Expert | Focus Areas |
|---|---|---|
| 1 | SEO Expert | Meta tags, heading hierarchy, sitemap, robots.txt, URL structure, RSS, structured data |
| 2 | Accessibility Expert | Semantic HTML, skip nav, ARIA, color contrast, keyboard nav, motion/animation |
| 3 | Mobile UX Expert | Viewport, responsive CSS, touch targets (44x44px min), font sizes, overflow |
| 4 | Copy Editor | Spelling, grammar, tone consistency across all templates and content |
| 5 | Performance Expert | CSS/JS delivery, image optimization, fonts, caching, build config |
| 6 | Security Reviewer | Headers, XSS vectors, sensitive data exposure, link security, CORS |
| 7 | Social/Meta Tags Specialist | OpenGraph, Twitter cards, favicon, canonical URLs, share previews |
| 8 | Web Standards Expert | HTML validation, correct element usage, spec compliance, ARIA misuse |
### API / backend service

| # | Expert | Focus Areas |
|---|---|---|
| 1 | API Design Reviewer | RESTful conventions, naming consistency, versioning, pagination, error responses |
| 2 | Security Reviewer | Auth/authz, input validation, rate limiting, OWASP API top 10 |
| 3 | Performance Reviewer | Query efficiency, N+1 problems, caching strategy, payload sizes |
| 4 | Documentation Reviewer | OpenAPI/Swagger completeness, example accuracy, error documentation |
| 5 | Reliability Reviewer | Error handling, timeouts, retries, circuit breakers, graceful degradation |
| 6 | Data Model Reviewer | Schema design, migrations, indexes, constraints, data integrity |
### Mobile app

| # | Expert | Focus Areas |
|---|---|---|
| 1 | UX Reviewer | Navigation patterns, gesture handling, platform conventions, onboarding |
| 2 | Accessibility Reviewer | VoiceOver/TalkBack, dynamic type, contrast, touch targets |
| 3 | Performance Reviewer | Startup time, memory, battery, network efficiency, image handling |
| 4 | Security Reviewer | Data storage, keychain/keystore usage, certificate pinning, auth flows |
| 5 | Store Compliance Reviewer | App Store/Play Store guidelines, permissions justification, privacy labels |
| 6 | Copy Editor | UI text, error messages, onboarding copy, localization readiness |
### CLI tool

| # | Expert | Focus Areas |
|---|---|---|
| 1 | UX/Ergonomics Reviewer | Flag naming, help text, output formatting, progressive disclosure |
| 2 | Error Handling Reviewer | Error messages, exit codes, edge cases, graceful failures |
| 3 | Compatibility Reviewer | Shell compatibility, OS support, path handling, encoding |
| 4 | Security Reviewer | Input sanitization, credential handling, file permissions |
| 5 | Documentation Reviewer | Man page / --help completeness, README, examples |
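One way an orchestrator could represent these default panels is a plain mapping from project type to reviewer list. This is a hypothetical sketch (the skill itself has no such registry); the names mirror the tables above:

```python
# Hypothetical registry of default reviewer panels, keyed by project type.
# The orchestrator picks a panel, then the user may add, remove, or adjust
# reviewers before dispatch.
DEFAULT_PANELS = {
    "website": [
        "SEO Expert", "Accessibility Expert", "Mobile UX Expert",
        "Copy Editor", "Performance Expert", "Security Reviewer",
        "Social/Meta Tags Specialist", "Web Standards Expert",
    ],
    "api": [
        "API Design Reviewer", "Security Reviewer", "Performance Reviewer",
        "Documentation Reviewer", "Reliability Reviewer", "Data Model Reviewer",
    ],
    "mobile": [
        "UX Reviewer", "Accessibility Reviewer", "Performance Reviewer",
        "Security Reviewer", "Store Compliance Reviewer", "Copy Editor",
    ],
    "cli": [
        "UX/Ergonomics Reviewer", "Error Handling Reviewer",
        "Compatibility Reviewer", "Security Reviewer", "Documentation Reviewer",
    ],
}

def default_panel(project_type: str) -> list[str]:
    """Return a copy of the default reviewer panel for a project type."""
    return list(DEFAULT_PANELS[project_type])
```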
Suggest project-specific additions. After presenting the default panel, analyze the tech stack and suggest reviewers that cover gaps: for example, a database reviewer for a query-heavy backend, or a localization reviewer for a multi-language site. The user decides whether to add them, but always suggest what's relevant.
Every reviewer agent prompt MUST follow this structure:
```
You are a [ROLE] reviewing a [PROJECT TYPE] [before launch / after refactor / etc.].
Do NOT write any code — only research and report findings.
If browser MCP tools are available, use them to inspect the running site at [URL].

The project is at [PATH]. [Brief description with key tech details].

Review the following and report issues ranked by severity
(critical, important, minor):

1. [Area] — [What to check, where to look]
2. [Area] — [What to check, where to look]
...
10. [Area] — [What to check, where to look]

Check [key directories]. Report a prioritized list of findings.
```
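A prompt following this structure can be assembled from an ordinary format string. The helper below is a hypothetical illustration; the role, phase, and area values are placeholders, not values defined by the skill:

```python
# Hypothetical builder for a reviewer prompt following the required structure:
# persona, no-code instruction, project context, numbered review areas, and a
# closing reporting instruction.
PROMPT_TEMPLATE = """\
You are a {role} reviewing a {project_type} {phase}.
Do NOT write any code -- only research and report findings.
The project is at {path}. {description}
Review the following and report issues ranked by severity
(critical, important, minor):
{areas}
Check {directories}. Report a prioritized list of findings."""

def build_prompt(role, project_type, phase, path, description, areas, directories):
    # Number the review areas 1..N, one per line.
    numbered = "\n".join(f"{i}. {area}" for i, area in enumerate(areas, 1))
    return PROMPT_TEMPLATE.format(
        role=role, project_type=project_type, phase=phase, path=path,
        description=description, areas=numbered, directories=directories,
    )
```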
Critical elements:

Use the Agent tool with `run_in_background: true` for ALL reviewers. Dispatch all in a single message block for maximum parallelism:
```
Agent(description="SEO expert site review", subagent_type="general-purpose", run_in_background=true, prompt="...")
Agent(description="Accessibility expert review", subagent_type="general-purpose", run_in_background=true, prompt="...")
...all agents in one message block...
```
As each agent completes, briefly note the headline finding for the user. Wait until all are done before the full consolidation.
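Outside the Agent tool, this dispatch-then-collect pattern can be sketched with `concurrent.futures`: every reviewer launches at once, a headline surfaces as each one finishes, and consolidation waits for the full set. The `run_review` stub is a stand-in for a real subagent call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_review(expert: str) -> dict:
    # Stand-in for a real subagent call; returns a headline finding.
    return {"expert": expert, "headline": f"{expert}: top finding"}

def review_in_parallel(experts: list[str]) -> list[dict]:
    results = []
    with ThreadPoolExecutor(max_workers=len(experts)) as pool:
        # Dispatch every reviewer up front, all at once.
        futures = {pool.submit(run_review, e): e for e in experts}
        for fut in as_completed(futures):
            result = fut.result()
            print(f"[done] {result['headline']}")   # note headline as each finishes
            results.append(result)
    return results  # consolidation happens only after all reviewers are done
```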
Compile all findings into a single table, deduplicated and ranked:
```
## [Review Type]: [Project Name]

### CRITICAL (must fix)

| # | Issue | Source |
|---|-------|--------|
| 1 | **[Issue description]** ([specific detail]) | [Which reviewer] |

### IMPORTANT (should fix soon)

| # | Issue | Source |
|---|-------|--------|

### MINOR (backlog)

| # | Issue | Source |
|---|-------|--------|

### DEFERRED (noted, not blocking)

| # | Issue | Source | Reason |
|---|-------|--------|--------|
```
Cross-referencing: When multiple reviewers flag the same issue, combine them into one row and list all sources (e.g., Standards, A11y, Security).
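The dedup-and-rank step can be sketched as a merge keyed on a normalized issue description, accumulating sources per row. This is a hypothetical helper; real consolidation also requires judging when two differently worded findings describe the same problem:

```python
def consolidate(findings: list[dict]) -> list[dict]:
    """Merge duplicate findings into single rows and sort by severity."""
    rank = {"critical": 0, "important": 1, "minor": 2, "deferred": 3}
    merged: dict[str, dict] = {}
    for f in findings:
        key = f["issue"].strip().lower()   # naive dedup key: normalized text
        if key in merged:
            row = merged[key]
            row["sources"].append(f["source"])   # combine into one row, all sources
            # When duplicates disagree, keep the most severe ranking.
            if rank[f["severity"]] < rank[row["severity"]]:
                row["severity"] = f["severity"]
        else:
            merged[key] = {"issue": f["issue"], "severity": f["severity"],
                           "sources": [f["source"]]}
    return sorted(merged.values(), key=lambda r: rank[r["severity"]])
```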