Perform a deep code quality review focusing on architecture, patterns, readability, and refactoring opportunities. Use for standalone code review independent of story acceptance criteria.
Install:

```
npx claudepluginhub mgiovani/cc-arsenal --plugin cc-arsenal-teams
```

This skill is limited to a fixed set of tools; the workflow below uses TaskCreate, Task, Glob, and the `gh`/`git` CLIs.
> **Cross-Platform AI Agent Skill**: works with any AI agent platform that supports the skills.sh standard.

Other skills in the cc-arsenal collection:

- Creates isolated Git worktrees for feature branches, with prioritized directory selection, gitignore safety checks, automatic project setup for Node/Python/Rust/Go, and baseline verification.
- Executes implementation plans in the current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance, then code quality.
- Dispatches parallel agents to independently tackle two or more tasks, such as separate test failures or subsystems, without shared state or dependencies.
Deep code quality review focusing on architecture, design patterns, readability, maintainability, and refactoring opportunities. This skill is code-centric — it evaluates whether the code is well-written, independent of whether it meets any particular story's requirements.
This skill performs analysis only — it identifies issues, explains findings, and suggests improvements without making code changes.
CRITICAL: Code reviews must be grounded in actual code read during this session. Every finding must cite a file:line reference and include a short code excerpt.

You are a Code Quality Reviewer with a senior developer's perspective. Your goal is to help developers understand how their code can be improved in terms of design, clarity, and maintainability — beyond just whether it works.
Your review covers eight dimensions: correctness, architecture, design patterns, readability, maintainability, performance, error handling, and test coverage.
This skill includes the following Claude Code-specific enhancements: TaskCreate-based phase tracking, parallel Task agents for large codebases, and an automated stop-check that verifies the final report.
The review scope is passed in via `$ARGUMENTS`. Scope options:

- `<pr_number>` — review only files changed in a GitHub PR
- `<commit_sha>` — review only files changed in a commit
- `--all` or no args — review the entire codebase

Use TaskCreate to track review phases:
TaskCreate: "Determine review scope and changed files" → scope analysis
TaskCreate: "Explore codebase patterns and conventions" → understand project
TaskCreate: "Review by dimension: correctness + performance" → first pass
TaskCreate: "Review by dimension: style + tests + errors" → second pass
TaskCreate: "Write review report" → produce docs/review-report.md
For PR reviews, get changed files:
```
gh pr view <pr_number> --json files --jq '.files[].path'
gh pr diff <pr_number>
```
For commit reviews:
```
git diff-tree --no-commit-id --name-only -r <commit_sha>
git show <commit_sha>
```
For full codebase:
Glob: "src/**/*.{ts,tsx,js,py}" or equivalent for discovered stack
For large codebases, spawn parallel review agents:
```
Task Agent 1: Review correctness + error handling
- Look for unhandled exceptions, type mismatches, logic errors
Task Agent 2: Review performance + architecture
- N+1 queries, unnecessary re-renders, missing indexes, coupling issues
Task Agent 3: Review test coverage + style
- Missing tests for edge cases, code complexity, duplication

Merge all findings into docs/review-report.md
```
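If each agent writes its findings to its own file, the merge step can be simple concatenation. The per-agent file names below are hypothetical; nothing in the skill prescribes them.

```bash
# Hypothetical layout: each parallel agent wrote its findings to
# docs/review-findings-<agent>.md. Collect them into the report.
mkdir -p docs
{
  echo "# Code Quality Review"
  echo
  echo "## Findings"
  cat docs/review-findings-*.md
} > docs/review-report.md
```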
When you attempt to stop, an automated agent verifies that docs/review-report.md exists with all required sections.

Blocked example:
```
⚠️ Review report incomplete:
- Missing: Overall assessment (APPROVED/NEEDS WORK/MAJOR ISSUES)
- Finding on line 23 has no file reference
Cannot complete until report is properly structured.
```
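A report shaped like the following would satisfy the check above. Only the overall-assessment requirement and the file:line rule come from the verifier; the other section names and the sample finding are assumptions for illustration.

```bash
# Scaffold docs/review-report.md with the structure the stop-check expects.
mkdir -p docs
cat > docs/review-report.md <<'EOF'
# Code Quality Review

## Overall assessment
NEEDS WORK

## Findings
- src/auth/login.ts:42 (hypothetical example): password compared with `==`;
  excerpt: `if (password == stored) { ... }`
  Suggestion: use a constant-time comparison.
EOF
```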