PROACTIVELY USED when reviewing a PR, branch, or Jira story. Orchestrates multiple specialized agents to review the code against requirements, detect issues, and provide verified, actionable feedback.
```
/plugin marketplace add emiperez95/cc-toolkit
/plugin install athena-pr-reviewer@cc-toolkit
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled files:

- prompts/code-reviewer.md
- prompts/comment-analyzer.md
- prompts/error-hunter.md
- prompts/simplifier.md
- prompts/test-analyzer.md
- prompts/type-reviewer.md
- prompts/verifier.md
- scripts/annotate-diff.sh
- scripts/gather-context.sh
- scripts/run-reviews.sh

Parse user input to identify the PR:
- PR number (e.g., `PR 123`, `#123`): extract the number directly
- Jira ticket (e.g., `PROJ-123`): run `gh pr list --search "PROJ-123" --json number --jq '.[0].number'`
- Nothing given: resolve the current branch's PR with `gh pr view --json number --jq '.number'` and derive the Jira ticket with `git branch --show-current | grep -oE '[A-Z]+-[0-9]+'`

Run the gather-context script, which collects all data in parallel:
```bash
~/.claude/skills/athena-pr-reviewer/scripts/gather-context.sh ${PR_NUM} ${JIRA_TICKET}
```
This script:
- Creates the working directory `/tmp/athena-review-${PR_NUM}/` (referred to below as `${WORK_DIR}`)
- Writes `${WORK_DIR}/context.md` and `${WORK_DIR}/diff.patch`

Output files:
- `context.md` - Combined PR + Jira + guidelines + history data
- `diff.patch` - Full PR diff
- `pr.json` - Raw PR metadata
- `jira.json` - Raw Jira ticket data
- `epic.json` - Epic context (if linked)
- `guidelines.md` - All CLAUDE.md files from the repo
- `blame.md` - Git blame for changed files (who wrote what, when)
- `prior-comments.md` - Comments from past PRs touching the same files

Before running reviews, detect available reviewer agents and let the user select which to use.
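Before reviewer selection, it is worth confirming the gather step actually produced its core outputs. A minimal sanity check, assuming `${PR_NUM}` from step 1 (this check is illustrative and not part of the script):

```bash
# The review depends on context.md and diff.patch existing and being non-empty
WORK_DIR="/tmp/athena-review-${PR_NUM}"
for f in context.md diff.patch; do
  [ -s "${WORK_DIR}/${f}" ] || { echo "missing or empty ${f}; rerun gather-context.sh" >&2; exit 1; }
done
```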
Built-in reviewers (always available): the six Claude specialists (comment-analyzer, test-analyzer, error-hunter, type-reviewer, code-reviewer, simplifier) plus the two external CLI reviewers (Gemini, Codex).
Dynamic reviewers:
Check available subagent_type values in your context for additional reviewers, excluding athena-pr-reviewer (this skill) and data-gathering agents (hermes-pr-courier, heimdall-pr-guardian, etc.).

First, show all detected reviewers and ask for confirmation:
I'll run the review with these agents:
**Claude specialists (6):** (built-in, always available)
- comment-analyzer, test-analyzer, error-hunter
- type-reviewer, code-reviewer, simplifier
**External LLMs (2):** (run outside Claude via CLI)
- Gemini, Codex
**Installed agents (N):** (detected in your Claude setup)
- {agent-1}, {agent-2}, ...
- (or "None detected" if empty)
Proceed with all {total} reviewers?
Use AskUserQuestion with two options: "Yes, run all" and "No, let me choose".
If "Yes, run all": Proceed to step 4 with all detected reviewers.
If "No, let me choose": Show paginated multi-select UI:
Use `multiSelect: true` (an illustrative question shape is sketched below). Parse the selection:
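A hedged sketch of the multi-select question, in the same spirit as the confirmation message above (labels and wording are hypothetical, not an exact tool schema; populate the options from the detected reviewers):

AskUserQuestion:
  multiSelect: true
  question: "Which reviewers should run?"
  options: comment-analyzer, test-analyzer, error-hunter, type-reviewer, code-reviewer, simplifier, Gemini, Codex, {installed agents...}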
Execute all selected reviews simultaneously in a SINGLE message.
In ONE message, run all selected reviewers:
If Gemini or Codex were selected, start with run_in_background: true:
```bash
~/.claude/skills/athena-pr-reviewer/scripts/run-reviews.sh ${WORK_DIR}
```
For each selected built-in specialist, spawn a Task agent:
| Specialist | Prompt File | Output File |
|---|---|---|
| comment-analyzer | prompts/comment-analyzer.md | claude-comments.md |
| test-analyzer | prompts/test-analyzer.md | claude-tests.md |
| error-hunter | prompts/error-hunter.md | claude-errors.md |
| type-reviewer | prompts/type-reviewer.md | claude-types.md |
| code-reviewer | prompts/code-reviewer.md | claude-general.md |
| simplifier | prompts/simplifier.md | claude-simplify.md |
Task: general-purpose
Prompt: "Read ~/.claude/skills/athena-pr-reviewer/prompts/{SPECIALIST}.md for instructions.
Then read ${WORK_DIR}/context.md and ${WORK_DIR}/diff.patch.
Perform the review and use the Write tool to save your findings to: ${WORK_DIR}/reviews/{OUTPUT_FILE}
IMPORTANT: Use the Write tool directly, not Bash with cat/heredoc."
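For example, the comment-analyzer row of the table above expands to:

Task: general-purpose
Prompt: "Read ~/.claude/skills/athena-pr-reviewer/prompts/comment-analyzer.md for instructions.
Then read ${WORK_DIR}/context.md and ${WORK_DIR}/diff.patch.
Perform the review and use the Write tool to save your findings to: ${WORK_DIR}/reviews/claude-comments.md
IMPORTANT: Use the Write tool directly, not Bash with cat/heredoc."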
Note: Each Task call returns a task_id. Save these - you will need them in step 4.4.
For each selected dynamic agent, spawn using its own agent type:
Task: {agent-name}
Prompt: "Review the PR for code quality issues.
Context file: ${WORK_DIR}/context.md (contains requirements, PR metadata, guidelines)
Diff file: ${WORK_DIR}/diff.patch (annotated with line numbers)
Use your expertise to identify issues. For each finding include:
- File path and line number (use format from diff annotations)
- Severity: Critical/High/Medium/Low
- Confidence: 0-100
- Description and suggested fix
Use the Write tool to save your review to: ${WORK_DIR}/reviews/{agent-name}.md
IMPORTANT: Use the Write tool directly, not Bash with cat/heredoc."
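For illustration, a single finding in `${WORK_DIR}/reviews/{agent-name}.md` might look like this (the file, line, and values are hypothetical, shown only to demonstrate the requested format):

- src/api/users.ts:42 - Severity: High - Confidence: 85 - Unhandled promise rejection when the user lookup fails; wrap the call in try/catch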
Note: Each Task call returns a task_id. Save these - you will need them in step 4.4.
STOP. Do NOT proceed until you complete this step.
When you spawned Task agents in steps 4.1-4.3, each returned a task_id. You MUST now:
TaskOutput(task_id: "abc123", block: true)
TaskOutput(task_id: "def456", block: true)
TaskOutput(task_id: "ghi789", block: true)
... one call per spawned agent
FORBIDDEN:
- Proceeding to Step 5 (or doing anything else) before every spawned agent's TaskOutput call has returned

REQUIRED:
- One TaskOutput call per spawned agent
- `block: true` on each call

Only after ALL TaskOutput calls have returned, proceed to Step 5.
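TaskOutput covers the Task agents; the external CLI reviewers write files instead. If Gemini or Codex were selected, also confirm their files landed before Step 5. A hedged sketch, assuming run-reviews.sh writes `gemini.md` and `codex.md` under `${WORK_DIR}/reviews/` (the filenames are an assumption, not documented here):

```bash
# Wait up to ~5 minutes for the external LLM reviews to appear
# NOTE: gemini.md / codex.md are assumed output names
for i in $(seq 1 60); do
  [ -f "${WORK_DIR}/reviews/gemini.md" ] && [ -f "${WORK_DIR}/reviews/codex.md" ] && break
  sleep 5
done
```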
Read ALL review files from ${WORK_DIR}/reviews/ directory and combine findings.
Possible reviewers (depending on selection): the six Claude specialists, Gemini, Codex, and any installed dynamic agents the user chose.
Confidence Filtering: drop low-confidence findings before deduplication so they never reach the report.
Priority Boost Rule: Items flagged by 2+ reviewers get bumped up one severity level.
| Reviewers | Original | Final Severity |
|---|---|---|
| 3+ | High | Critical |
| 2 | High | Critical |
| 3+ | Medium | High |
| 2 | Medium | High |
| 1 | Any | No boost |
Deduplicate similar findings, noting which reviewer(s) flagged each and average confidence.
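To make the boost rule concrete, here it is as a small shell function (illustrative only; the table above is authoritative, and it only defines boosts for High and Medium):

```bash
# Bump severity one level when 2+ reviewers flagged the same finding
boost_severity() {
  local reviewer_count=$1 severity=$2
  if [ "$reviewer_count" -ge 2 ]; then
    case "$severity" in
      High)   echo "Critical"; return ;;
      Medium) echo "High"; return ;;
    esac
  fi
  echo "$severity"   # single-reviewer findings and other severities keep their level
}

boost_severity 3 Medium   # prints: High
```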
For each aggregated finding, verify against actual code to filter hallucinations:
- Read `${WORK_DIR}/diff.patch` to get the actual code
- Apply the verifier prompt (`~/.claude/skills/athena-pr-reviewer/prompts/verifier.md`) to validate each finding
- Record findings that fail verification in `${WORK_DIR}/rejected.md` with the reason

Output verified findings to `${WORK_DIR}/verified-findings.md`.
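One cheap verification signal is whether the code a finding quotes actually appears in the diff. A minimal sketch (the quoted snippet is hypothetical):

```bash
# If the finding's quoted code is absent from the diff, treat it as a likely hallucination
snippet='const user = await getUser(id)'
grep -qF "$snippet" "${WORK_DIR}/diff.patch" \
  || echo "reject: quoted code not found in diff" >> "${WORK_DIR}/rejected.md"
```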
Present combined review to user:
# PR Review: {PR_TITLE} (#{PR_NUM})
## Requirements Status
| Requirement | Status | Notes |
|-------------|--------|-------|
## Action Items (Verified)
### Critical (consensus, verified)
- [ ] file:line - issue - fix [reviewer1 + reviewer2 + reviewer3] (3+, avg 92%) ✓
### High Priority (verified)
- [ ] file:line - issue - fix [reviewer1 + reviewer2] ← boosted (2, avg 85%) ✓
- [ ] file:line - issue - fix [reviewer1] (95%) ✓
### Medium Priority (verified)
- [ ] file:line - issue - fix [reviewer1] (88%) ✓
### Suggestions
- improvements (including PARTIAL findings downgraded from higher severity)
## Rejected Findings
Findings that failed verification are saved to: `${WORK_DIR}/rejected.md`
## Review Sources
[List all .md files found in ${WORK_DIR}/reviews/]
## Recommendation: APPROVE / REQUEST_CHANGES
After presenting the summary, offer to iterate through action items:
Would you like me to walk through any of these issues in detail? I can:
- Explain each issue with more context
- Show the actual code and a potential fix
- Give my opinion on priority and approach
Reply with "yes" to go through them one by one, or pick specific items (e.g., "explain the first critical issue").
If the user accepts, go through items ONE AT A TIME:
For each item:
CRITICAL: After presenting ONE item, STOP and wait for user input.
Ask: "Ready for the next issue? (N remaining)" or let them say "skip", "stop", or ask questions about the current item.
Do NOT present multiple items in a single response.
User: "Review PR 456"
User: "Review CSD-123"
User: "Review this branch"