Comprehensive code review guidance for quality, performance, and architecture across all programming languages. Use when (1) User explicitly requests code review, (2) After writing significant code changes, (3) Before commits/PRs, (4) Reviewing existing codebases, (5) Analyzing code quality, (6) Detecting performance issues, (7) Identifying architectural problems, (8) Finding code smells. Provides automated analysis scripts and manual review checklists for thorough code evaluation.
This skill uses the workspace's default tool permissions.
Comprehensive code review for quality, performance, and architecture across all languages.
Run the complete analysis:

```bash
./scripts/review_code.sh /path/to/code
```

This runs complexity analysis (`analyze_complexity.py`), code smell detection (`detect_code_smells.py`), and report generation (`generate_review_report.py`).

For manual review, use the comprehensive checklist in `references/review-checklist.md`.

Individual analyses can also be run directly. Complexity analysis:

```bash
./scripts/analyze_complexity.py /path/to/code
```

Code smells:

```bash
./scripts/detect_code_smells.py /path/to/code
```
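As a rough sketch of what a cyclomatic-complexity checker computes (an illustration only, not the actual `analyze_complexity.py` implementation), complexity can be approximated by counting branch points per function:

```python
import ast

# Node types that add an independent path through the function.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    # Base complexity is 1; each branch point adds one path.
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(func))

def report(source: str) -> dict:
    """Map each function name in `source` to its approximate complexity."""
    tree = ast.parse(source)
    return {node.name: cyclomatic_complexity(node)
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)}
```

For example, `report("def f(x):\n    if x > 0:\n        return 1\n    return 0")` yields `{"f": 2}`: one base path plus one `if` branch.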
Trigger this skill when the user explicitly requests a code review, after significant code changes, before commits or PRs, when reviewing an existing codebase, or when hunting for performance issues, architectural problems, or code smells.
Scripts:

- `scripts/review_code.sh` - complete review orchestrator
- `scripts/analyze_complexity.py` - complexity metrics
- `scripts/detect_code_smells.py` - code smell detection
- `scripts/generate_review_report.py` - report generation

References:

- `references/review-checklist.md` - complete review checklist
- `references/code-smells.md` - code smell catalog
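As an illustration of one smell check (duplicated code), a naive detector can hash sliding windows of normalized lines. This is a sketch only, not the shipped `detect_code_smells.py`:

```python
from collections import defaultdict

def duplicated_blocks(source: str, window: int = 4) -> dict:
    """Report starting line numbers where an identical block of
    `window` stripped, non-empty lines appears more than once."""
    lines = [ln.strip() for ln in source.splitlines()]
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        block = tuple(lines[i:i + window])
        if all(block):  # skip windows containing blank lines
            seen[block].append(i + 1)  # 1-based line numbers
    return {blk: locs for blk, locs in seen.items() if len(locs) > 1}
```

A real detector would also normalize identifiers and literals so renamed copies still match; exact-text matching is the simplest possible baseline.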
```bash
# Run automated analysis
./scripts/review_code.sh /path/to/project

# Review output
cat .code-review-output/REVIEW.md

# Manual checklist review: consult references/review-checklist.md

# Address findings: refactor based on recommendations
```
```bash
# Analyze complexity
./scripts/analyze_complexity.py /path/to/code

# Identify complex functions
# Refactor functions with complexity >10
```
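As a hypothetical before/after (the `discount` functions here are illustrative, not from any real codebase), replacing nested conditionals with guard clauses is often enough to bring a function back under the complexity threshold:

```python
# Before: nested conditionals inflate cyclomatic complexity
# and nesting depth.
def discount_before(user, total):
    if user is not None:
        if user.get("active"):
            if total > 100:
                return total * 0.9
            else:
                return total
        else:
            return total
    else:
        return total

# After: guard clauses keep every path shallow and readable,
# without changing behavior.
def discount_after(user, total):
    if user is None or not user.get("active"):
        return total
    if total > 100:
        return total * 0.9
    return total
```

Extracting helper functions is the other common tactic: each extracted branch carries its complexity with it, leaving the caller simple.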
Key metrics:

- Cyclomatic complexity - the number of independent paths through a function; values above 10 flag a function for refactoring
- Function length - longer functions are harder to test and review
- Nesting depth - deeply nested control flow obscures logic
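Nesting depth can be measured with a short AST walk. This is a sketch under the assumption that depth counts nested control-flow statements, not the shipped implementation:

```python
import ast

# Control-flow constructs that add a level of nesting.
NESTING_NODES = (ast.If, ast.For, ast.While, ast.With, ast.Try)

def max_nesting(node: ast.AST, depth: int = 0) -> int:
    """Return the deepest control-flow nesting level under `node`."""
    deepest = depth
    for child in ast.iter_child_nodes(node):
        extra = 1 if isinstance(child, NESTING_NODES) else 0
        deepest = max(deepest, max_nesting(child, depth + extra))
    return deepest
```

Calling `max_nesting(ast.parse(source))` gives the deepest nesting in a module; an `if` inside a `for` scores 2, so teams typically flag anything past 3 or 4.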
Findings are grouped by severity:

- High - address immediately
- Medium - address soon
- Low - address when convenient
Language-specific analysis is applied for:

- JavaScript/TypeScript
- Python
- Rust

For languages not automatically detected, apply the manual checklist in `references/review-checklist.md`.
Add to GitHub Actions:

```yaml
- name: Code Review
  run: |
    ./scripts/review_code.sh .
    # Fail if high-severity issues found
```

Or as a Git pre-commit hook:

```bash
#!/bin/bash
./scripts/review_code.sh . --quick
```
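To make the CI step actually fail on high-severity findings, a small gate script can parse the report's summary line. The `"- High Severity: N"` format is an assumption based on the example report in this document, and `gate` is a hypothetical helper:

```python
import re
import sys

def high_severity_count(report_text: str) -> int:
    """Count high-severity findings from a REVIEW.md summary.

    Assumes a summary line like "- High Severity: 3".
    """
    match = re.search(r"High Severity:\s*(\d+)", report_text)
    return int(match.group(1)) if match else 0

def gate(report_path: str = ".code-review-output/REVIEW.md") -> None:
    # Exit nonzero so the CI step fails when high-severity issues exist.
    with open(report_path) as fh:
        count = high_severity_count(fh.read())
    if count > 0:
        sys.exit(f"Blocking merge: {count} high-severity issue(s)")
```

In the GitHub Actions step above, run this after `review_code.sh` so the job's exit status reflects the review outcome.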
Most linters integrate with VS Code, IntelliJ, etc.
For specific issues, consult the catalog in `references/code-smells.md`. After review, address findings in priority order and re-run the analysis to confirm they are resolved.
Example report (`.code-review-output/REVIEW.md`):

```markdown
# Code Review Report

## Executive Summary

- Total Functions Analyzed: 127
- Code Smells Detected: 23
- High Severity: 3
- Medium Severity: 15
- Low Severity: 5

## Complexity Analysis

- Average Complexity: 4.2
- Maximum Complexity: 18
- Functions Needing Attention: 8

## Recommendations

- Refactor 3 highly complex functions
- Address 3 high-severity code smells
- Extract duplicated code in 5 locations
```
Note: This skill provides guidance and automated checks. Final decisions on code quality depend on project context, team conventions, and business requirements.