Reviews pull requests through multiple quality lenses like architecture and security, compiling analysis with inline comments. Use for thorough PR reviews via number, URL, or current branch.
npx claudepluginhub atomicinnovation/accelerator --plugin accelerator

This skill is limited to using the following tools:
!`${CLAUDE_PLUGIN_ROOT}/scripts/config-read-context.sh`
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-context.sh
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-skill-context.sh review-pr
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-agents.sh
If no "Agent Names" section appears above, use these defaults: accelerator:reviewer, accelerator:codebase-locator, accelerator:codebase-analyser, accelerator:codebase-pattern-finder, accelerator:documents-locator, accelerator:documents-analyser, accelerator:web-search-researcher.
!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-review.sh pr
PR reviews directory: !${CLAUDE_PLUGIN_ROOT}/scripts/config-read-path.sh review_prs meta/reviews/prs
Tmp directory: !${CLAUDE_PLUGIN_ROOT}/scripts/config-read-path.sh tmp meta/tmp
IMPORTANT: Wherever {tmp directory} or {pr reviews directory} appears
in the instructions below, substitute the actual resolved path shown above.
Never use /tmp or any other path not shown above.
IMPORTANT: When composing prompts for sub-agents, resolve all {...}
path placeholders to their actual values before passing the prompt —
sub-agents cannot see the bold-label definitions above and have no way to
resolve the placeholders themselves.
You are tasked with reviewing a pull request through multiple quality lenses and then presenting a compiled analysis of the code changes.
When this command is invoked:
I'll help you review a pull request. Please provide:
1. The PR number or URL (or I'll check the current branch)
2. (Optional) Focus areas to emphasise (e.g., "focus on security and
architecture")
Tip: You can invoke this command with arguments:
`/review-pr 123`
`/review-pr 123 focus on security and test coverage`
Then check if the current branch has a PR:
gh pr view --json number,url,title,state 2>/dev/null
If a PR is found on the current branch, offer to review it. If not, wait for the user's input.
| Lens | Lens Skill | Focus |
|---|---|---|
| Architecture | architecture-lens | Modularity, coupling, dependency direction, structural drift |
| Security | security-lens | OWASP Top 10, input validation, auth/authz, secrets, data flows |
| Test Coverage | test-coverage-lens | Coverage adequacy, assertion quality, test pyramid, anti-patterns |
| Code Quality | code-quality-lens | Complexity, design principles, error handling, code smells |
| Standards | standards-lens | Project conventions, API standards, naming, accessibility |
| Usability | usability-lens | Developer experience, API ergonomics, configuration, onboarding |
| Performance | performance-lens | Algorithmic efficiency, resource usage, concurrency, caching |
| Documentation | documentation-lens | Documentation completeness, accuracy, audience fit |
| Database | database-lens | Migration safety, schema design, query correctness, integrity |
| Correctness | correctness-lens | Logical validity, boundary conditions, state management, concurrency |
| Compatibility | compatibility-lens | API contracts, cross-platform, protocol compliance, deps |
| Portability | portability-lens | Environment independence, deployment flexibility, vendor lock |
| Safety | safety-lens | Data loss prevention, operational safety, protective mechanisms |
Get PR metadata:
gh pr view {number} --json number,url,title,state,baseRefName,headRefName
Create temp directory at {tmp directory}/pr-review-{number} (substituting
the actual PR number):
mkdir -p {tmp directory}/pr-review-{number}
Fetch diff, changed files, PR description, and commit context:
gh pr diff {number} > {tmp directory}/pr-review-{number}/diff.patch
gh pr diff {number} --name-only > {tmp directory}/pr-review-{number}/changed-files.txt
gh pr view {number} --json body --jq '.body' > {tmp directory}/pr-review-{number}/pr-description.md
gh pr view {number} --json commits --jq '.commits[].messageHeadline' > {tmp directory}/pr-review-{number}/commits.txt
Read the diff, changed files list, PR description, and commits to understand scope and intent.
Fetch additional metadata for the Reviews API:
gh api repos/{owner}/{repo}/pulls/{number} --jq '.head.sha' > {tmp directory}/pr-review-{number}/head-sha.txt
gh repo view --json owner,name --jq '"\(.owner.login)/\(.name)"' > {tmp directory}/pr-review-{number}/repo-info.txt
Where {owner} and {repo} are extracted from the PR metadata already
fetched in step 1.
Error handling: If any gh command fails, handle these cases:
- gh not installed or not authenticated: inform the user that the gh CLI is required and suggest running `gh auth login` to authenticate.
- No default repository is configured: instruct the user to run `gh repo set-default` and select the appropriate repository (mirrors the pattern in /describe-pr).
- `gh repo view` fails: instruct the user to run `gh repo set-default` and select the appropriate repository.
- The PR cannot be found: run `gh pr list --limit 10` and ask the user to select one.
- `diff.patch` is empty (e.g., a draft PR with no changes): inform the user and ask whether to proceed with a review of the PR description and commits only.

Determine which lenses are relevant based on the PR's scope and any user-provided focus arguments.
If the user provided focus arguments, weight the lens selection toward the areas they named.
If no focus arguments were provided, auto-detect relevance:
Take time to think carefully about which lenses apply based on:
Lens selection cap: Select the most relevant lenses for the change under
review. If review configuration is provided above, use the configured
min_lenses and max_lenses values. Otherwise, use the defaults:
{min lenses} to {max lenses} lenses. Apply these prioritisation rules:
Apply this lens selection pipeline in order:
1. If review configuration provides `disabled_lenses`, remove those lenses from the available set. They are never selected regardless of auto-detect criteria.
2. If review configuration provides `core_lenses`, use that list. Otherwise, the core lenses are Architecture, Code Quality, Test Coverage, and Correctness. Core lenses are included unless the change is clearly outside their scope.
3. Resolve each lens's skill location using the `${CLAUDE_PLUGIN_ROOT}` lens path template.
4. Enforce `max_lenses`: if more lenses than the configured maximum pass selection, rank by relevance and drop the least relevant. Prefer lenses whose core responsibilities directly overlap with the change's concerns.
5. Respect the `min_lenses` floor: never run fewer than `min_lenses` lenses unless the change is trivially scoped.

When presenting the lens selection, clearly indicate which lenses are selected and which are skipped, with a brief reason for each skip.
Present lens selection to the user before proceeding:
Based on the PR's scope, I'll review through these lenses:
- Architecture: [reason]
- Security: [reason — or "Skipping: no security-sensitive changes identified"]
- Test Coverage: [reason]
- Code Quality: [reason]
- Standards: [reason — or "Skipping: ..."]
- Usability: [reason — or "Skipping: ..."]
- Performance: [reason — or "Skipping: no performance-sensitive changes identified"]
- Documentation: [reason — or "Skipping: ..."]
- Database: [reason — or "Skipping: no database changes identified"]
- Correctness: [reason]
- Compatibility: [reason — or "Skipping: ..."]
- Portability: [reason — or "Skipping: ..."]
- Safety: [reason — or "Skipping: ..."]
Shall I proceed, or would you like to adjust the selection?
Wait for confirmation before spawning reviewers.
For each selected lens, spawn the {reviewer agent} agent with a prompt that includes paths to the lens skill and output format files. Do NOT read these files yourself — the agent reads them in its own context.
Reminder: In the template below, replace {tmp directory} with the
actual path resolved at the top of this skill before passing the prompt to
the agent.
Compose each agent's prompt following this template:
You are reviewing pull request changes through the [lens name] lens.
## Context
The PR artefacts are in the temp directory at {tmp directory}/pr-review-{number}:
- `diff.patch` — the full diff
- `changed-files.txt` — list of changed file paths
- `pr-description.md` — PR description
- `commits.txt` — commit messages
PR number: [number]
## Analysis Strategy
1. Read your lens skill and output format files (see paths below)
2. Read `diff.patch` and `changed-files.txt` from the temp directory
3. Read `pr-description.md` and `commits.txt` for intent context
4. Explore the codebase to understand the architectural landscape around
the changes
5. Evaluate the changes through your lens, applying each key question
6. Identify beyond-the-diff impact — trace how changes affect consumers
7. Anchor findings to precise diff line numbers (lines must be within
diff hunks)
## Lens
Read the lens skill at the path listed in the Lens Catalogue table in the
review configuration above. If no review configuration is present, use:
${CLAUDE_PLUGIN_ROOT}/skills/review/lenses/[lens]-lens/SKILL.md
## Output Format
Read the output format at: ${CLAUDE_PLUGIN_ROOT}/skills/review/output-formats/pr-review-output-format/SKILL.md
IMPORTANT: Return your analysis as a single JSON code block. Do not include
prose outside the JSON block.
Spawn all selected agents in parallel using the Task tool with
subagent_type: "!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-agent-name.sh reviewer".
IMPORTANT: Wait for ALL review agents to complete before proceeding.
Handling malformed agent output:
If an agent's response is not a clean JSON block, apply this extraction strategy:
- Look for a fenced code block in the response (with or without a `json` language tag) and parse its contents.
- If no parseable JSON can be extracted, record the raw response as a single general finding with "major" severity, and include it in the review summary body.

When falling back, warn the user that the agent's output could not be parsed and present the raw agent output in a collapsed form so the user can see what the agent actually found.
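As a rough illustration of the extraction step, the first fenced block can be pulled out of a response with a small helper (the function name and file argument are illustrative, not part of this skill):

```shell
# Sketch: print the contents of the first fenced code block (``` or ```json)
# found in an agent response file. Helper name and usage are illustrative.
extract_fenced() {
  awk '
    /^```/ { if (inb) exit; inb = 1; next }  # open on first fence, stop at second
    inb                                      # print lines inside the fence
  ' "$1"
}
```

If the printed text still fails to parse as JSON, treat the response as unparseable and apply the general-finding fallback.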
Once all reviews are complete:
Parse agent outputs: Extract the JSON block from each agent's response
(see the extraction strategy in Step 3). Collect the summary, strengths,
comments, and general_findings arrays from each.
Aggregate across agents:
- Merge all `comments` arrays into a single list
- Merge all `general_findings` arrays into a single list
- Merge all `strengths` arrays into a single list
- Collect the per-agent `summary` strings

Validate line numbers against the diff: Parse the hunk headers in `diff.patch` to build valid line ranges per file. For each `@@` header:
- `@@ -a,b +c,d @@` — lines `c` through `c+d-1` are valid RIGHT-side lines; lines `a` through `a+b-1` are valid LEFT-side lines
- For each entry in the `comments` list, check that its path/line/side falls within a valid range for that file
- Comments that fail validation are moved to `general_findings` automatically, preserving all their metadata (severity, lens, title, body)

Deduplicate inline comments: Where multiple agents flag the same file, same side, and overlapping or adjacent line ranges (same path, lines within the configured dedup proximity ({dedup proximity}) of each other), consider merging — but only when the findings address the same underlying concern from different lens perspectives. Spatial proximity alone is not sufficient; the findings must be semantically related.
When merging:
When in doubt, keep comments separate — distinct inline comments are easier to resolve individually on GitHub than a merged comment covering multiple concerns.
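As a rough precheck, spatial adjacency alone reduces to a simple comparison (helper name and argument order are illustrative; the semantic-relatedness judgment cannot be automated this way):

```shell
# Sketch: succeeds (exit 0) when two comments on the same path and side fall
# within the configured dedup proximity of each other.
near() {  # near <path1> <line1> <side1> <path2> <line2> <side2> <proximity>
  [ "$1" = "$4" ] && [ "$3" = "$6" ] || return 1   # must share path and side
  d=$(( $2 - $5 )); [ "$d" -lt 0 ] && d=$(( -d ))  # absolute line distance
  [ "$d" -le "$7" ]
}
```

Comments passing this precheck are merge candidates only; the same-underlying-concern test still decides.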
Prioritise and cap inline comments:
Determine suggested verdict:
If review configuration provides verdict overrides above, apply those thresholds instead of the defaults below:
- If `pr_request_changes_severity` is `none`, skip this rule (never suggest REQUEST_CHANGES based on severity)
- Findings at or above `pr_request_changes_severity` (default: critical) exist → suggest REQUEST_CHANGES
- Major findings exist below that threshold → suggest COMMENT
- Otherwise → suggest APPROVE

Identify cross-cutting themes: Look for findings that appear across multiple lenses — issues flagged by 2+ agents reinforce each other and should be highlighted in the summary. Also identify tradeoffs where different lenses conflict (e.g., security wants more validation, usability wants less friction).
Compose the review summary body (this becomes the body field of the
GitHub review):
## Code Review: #{number} - {title}
**Verdict:** [APPROVE | REQUEST_CHANGES | COMMENT]
[Combined assessment: take each agent's summary and synthesise into 2-3
sentences covering the overall quality of the PR across all lenses]
### Cross-Cutting Themes
[Issues that multiple lenses identified — these deserve the most attention]
- **[Theme]** (flagged by: [lenses]) — [description]
### Tradeoff Analysis
[Where different lenses disagree, present both perspectives]
- **[Quality A] vs [Quality B]**: [description and recommendation]
[Omit either section if there are no cross-cutting themes or tradeoffs]
### Strengths
- ✅ [Aggregated and deduplicated strengths from all agents]
### General Findings
- [emoji] **[Lens]**: [General findings from all agents, sorted by severity]
### Additional Findings
[Only if more than {max inline comments} inline comments were produced and
some were deferred]
- [emoji] `file:line` — [title] ([lens])
---
*Review generated by /review-pr*
Compose each inline comment body: Each comment's body field should
already be self-contained from the agent output. For merged comments,
combine the bodies with a blank line separator and attribute each section
to its lens.
Write the review artefact to {pr reviews directory}/:
Determine the next review number:
mkdir -p {pr reviews directory}
# Glob for existing reviews of this PR
ls {pr reviews directory}/{number}-review-*.md 2>/dev/null
# Extract the highest number, increment by 1. If none exist, use 1.
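The numbering step can be sketched as follows (the helper name is illustrative; directory and PR number are placeholders):

```shell
# Sketch: compute the next review number for a PR from existing artefacts.
next_review_number() {  # next_review_number <reviews-dir> <pr-number>
  last=$(ls "$1/$2"-review-*.md 2>/dev/null \
    | sed -E 's/.*-review-([0-9]+)\.md$/\1/' \
    | sort -n | tail -1)
  echo $(( ${last:-0} + 1 ))    # no existing reviews -> 1
}
```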
Write the review document to {pr reviews directory}/{number}-review-{N}.md:
---
date: "{ISO timestamp}"
type: pr-review
skill: review-pr
target: "PR #{number}"
pr_number: {number}
pr_title: "{title}"
review_number: {N}
verdict: {APPROVE | REQUEST_CHANGES | COMMENT}
lenses: [{list of lenses used}]
status: complete
---
{The full review summary from Step 4.8}
## Inline Comments
### `{path}:{line}` — {title}
**Severity**: {severity} | **Confidence**: {confidence} | **Lens**: {lens}
{comment body}
---
### `{path}:{line}` — {title}
...
## Per-Lens Results
### {Lens 1 Name}
**Summary**: {agent summary}
**Strengths**:
{agent strengths}
**Comments**:
{agent comments — each with path, line, severity, confidence, and body}
**General Findings**:
{agent general findings}
### {Lens 2 Name}
...
This review artefact captures the complete analysis. The GitHub review (posted in Step 6) may be a curated subset (capped at ~{max inline comments} inline comments), but the persistent artefact retains everything.
Present a two-part preview showing exactly what will be posted to the PR:
Part 1: Review summary (will become the review's body):
Show the composed summary from Step 4.8 in a markdown code block so the user can see exactly what will be posted.
Part 2: Inline comments (will be attached to specific diff lines):
## Proposed Inline Comments ([count] comments)
### [file path 1]
- Line [N]: [emoji] **[Lens]** — [title]
> [First 1-2 sentences of body as preview]
- Lines [N-M]: [emoji] **[Lens]** — [title]
> [First 1-2 sentences of body as preview]
### [file path 2]
- Line [N]: [emoji] **[Lens]** — [title]
> [First 1-2 sentences of body as preview]
[If comments were deferred due to the ~{max inline comments} cap:]
### Deferred to summary ([count] findings)
- [emoji] [Lens]: [title] — `file:line`
After presenting the preview:
The review is ready. Would you like to:
1. Post the review? (summary + [count] inline comments, verdict: [suggested verdict])
2. Change the verdict? (currently: [suggested verdict])
3. Edit or remove specific inline comments before posting?
4. Discuss any findings in more detail?
5. Re-run specific lenses with adjusted focus?
When the user chooses to post (option 1):
Read the HEAD SHA and repo info from the temp directory at
{tmp directory}/pr-review-{number}/head-sha.txt and
{tmp directory}/pr-review-{number}/repo-info.txt using the Read tool.
Construct the review payload as a JSON object containing:
- `commit_id`: the HEAD SHA
- `body`: the review summary composed in Step 4.8
- `event`: the verdict ("COMMENT", "REQUEST_CHANGES", or "APPROVE")
- `comments`: array of inline comment objects, each with:
  - `path`: file path from the agent's comment
  - `line`: line number from the agent's comment
  - `side`: side from the agent's comment
  - `body`: the self-contained comment body
  - `start_line` and `start_side`: included only if `end_line` is not null

For multi-line comments, the agent's fields map to the API's fields with an inversion (see "Multi-Line Comment API Mapping" in Phase 1):
- `start_line` ← agent's `line` (the beginning of the range)
- `start_side` ← agent's `side`
- `line` ← agent's `end_line` (the end of the range)
- `side` ← agent's `side`

Example: agent `{line: 10, end_line: 15, side: "RIGHT"}` becomes API `{start_line: 10, start_side: "RIGHT", line: 15, side: "RIGHT"}`
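A complete payload might look like this sketch (every path, SHA, and body below is illustrative, not taken from a real PR):

```shell
# Hypothetical review-payload.json showing both comment shapes:
# a single-line comment and a multi-line (start_line/line) comment.
cat > review-payload.json <<'EOF'
{
  "commit_id": "0123abcd0123abcd0123abcd0123abcd0123abcd",
  "body": "## Code Review: #123 - Example title\n\n**Verdict:** COMMENT\n...",
  "event": "COMMENT",
  "comments": [
    { "path": "src/app.ts", "line": 42, "side": "RIGHT",
      "body": "🟡 **Code Quality**: example single-line finding" },
    { "path": "src/db.ts", "start_line": 10, "start_side": "RIGHT",
      "line": 15, "side": "RIGHT",
      "body": "🔴 **Database**: example multi-line finding" }
  ]
}
EOF
```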
Write the review payload JSON to
{tmp directory}/pr-review-{number}/review-payload.json, then post the
review:
gh api repos/{owner}/{repo}/pulls/{number}/reviews \
--method POST --input {tmp directory}/pr-review-{number}/review-payload.json
Where {owner}/{repo} are the values read from repo-info.txt.
Confirm success and show the PR URL:
gh pr view {number} --json url --jq '.url'
If the API returns a 422 error (typically an invalid line reference or stale commit):
- Invalid line reference: identify the problematic comments, offer to retry without them, and move their findings into the summary body instead.
- Stale `commit_id` (the PR's HEAD has changed since the review started): re-fetch the HEAD SHA and warn the user that new commits were pushed. Offer to retry with the updated SHA, noting that line numbers may have shifted and some comments may now be invalid.

When the user chooses to edit comments (option 3):
When the user changes the verdict (option 2):
Read the diff before doing anything else — you need complete context to select lenses and brief the agents properly
Spawn agents in parallel — the review lenses are independent and should run concurrently for efficiency
Synthesise, don't concatenate — your value is in compiling a balanced view across lenses, identifying themes and tradeoffs, and prioritising actionable recommendations. Don't just paste the individual lens reports together.
Be balanced — highlight strengths alongside concerns. A PR that makes good architectural decisions but has security gaps should get credit for both.
Prioritise by impact — structural issues that are hard to fix later matter more than surface-level concerns. A critical finding from one lens outweighs minor findings from all the others.
Respect tradeoffs — when lenses conflict, present both sides and let the user decide. Don't privilege one quality attribute over another without justification.
Clean up temp directory only at session end — agents may need to re-reference the PR context during follow-up discussion.
The {tmp directory}/pr-review-{number}/ directory contains ephemeral
working data (diff, changed-files, PR description, commits, head SHA,
repo info, review payload JSON) used during the review session. The review
itself (summary, inline comments, per-lens results) is persisted separately
to {pr reviews directory}/{number}-review-{N}.md.
Handle API errors gracefully — if the review post fails due to invalid line references, identify the problematic comments and offer to retry without them rather than failing entirely
Cap inline comments — if agents produce more findings, prioritise critical and major severity. Use the configured max ({max inline comments}). Always include all critical findings even if that exceeds the cap. Move overflow to the summary body. This prevents PR comment spam.
Keep positive feedback in the summary — strengths and good observations go in the review body, never as inline comments. Inline comments are exclusively for actionable findings.
Use emoji severity prefixes consistently — 🔴 critical, 🟡 major,
🔵 minor/suggestion, ✅ strengths. IMPORTANT: Use the actual Unicode
emoji characters (🔴 🟡 🔵 ✅), NOT text shortcodes like :red_circle:,
:yellow_circle:, :blue_circle:, or :white_check_mark:. Shortcodes
are not rendered in markdown and will appear as literal text.
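That mapping can be pinned down in a tiny helper so the literal characters are used consistently (the helper name is illustrative):

```shell
# Sketch: map a finding severity to its Unicode emoji prefix.
severity_emoji() {
  case "$1" in
    critical)         printf '🔴' ;;
    major)            printf '🟡' ;;
    minor|suggestion) printf '🔵' ;;
    strength)         printf '✅' ;;
    *)                printf '' ;;   # unknown severities get no prefix
  esac
}
```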
The PR review sits in the development lifecycle alongside other commands:
- /create-plan — Create the implementation plan
- /review-plan — Review and iterate the plan quality
- /implement-plan — Execute the approved plan
- /validate-plan — Verify implementation matches the plan
- /describe-pr — Generate PR description
- /review-pr — Review the PR through quality lenses (this command)

!${CLAUDE_PLUGIN_ROOT}/scripts/config-read-skill-instructions.sh review-pr