Use when reviewing code, handling review feedback, or posting a review to a GitHub PR — 14-dimension quality analysis for features or entire projects (generate mode), structured evaluation and response to incoming review comments (feedback mode via --feedback flag), or automated PR review posted as a GitHub comment (--github-pr flag).
Install: `npx claudepluginhub tercel/code-forge`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Supporting workflows: `feedback-workflow.md`, `github-pr-workflow.md`

Comprehensive code review against reference documents and engineering best practices. Covers functional correctness, security, resource management, code quality, architecture, performance, testing, error handling, observability, maintainability, backward compatibility, and dependency safety.
Supports four modes:
- Feature Mode — review a single feature against its `plan.md`
- Project Mode — review the whole project (`--project`)
- Feedback Mode (`--feedback`)
- GitHub PR Mode (`--github-pr`)

Pipeline: Config → Determine Mode → Locate Reference → Collect Scope → Multi-Dimension Review (sub-agent) → Display Report → Update State → Summary
The review analysis is offloaded to a sub-agent to handle large diffs without exhausting the main context.
All issues use a 4-tier severity system, ordered by merge-blocking priority:
| Severity | Symbol | Meaning | Merge Policy |
|---|---|---|---|
| `blocker` | :no_entry: | Production risk. Data loss, security breach, crash. | Must fix before merge |
| `critical` | :warning: | Significant quality/correctness concern. | Must fix before merge |
| `warning` | :large_orange_diamond: | Recommended fix. Could cause issues over time. | Should fix |
| `suggestion` | :blue_book: | Nice-to-have improvement. Can address later. | Nice-to-have |
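As a rough sketch, the merge policy above can be encoded as a pair of helpers. The names (`SEVERITY_RANK`, `mergeReadiness`) are illustrative, and the `rework_required` tier is omitted because the document leaves its threshold to reviewer judgment:

```typescript
// Severity tiers in merge-blocking order, mirroring the table above.
const SEVERITY_RANK = { blocker: 0, critical: 1, warning: 2, suggestion: 3 } as const;
type Severity = keyof typeof SEVERITY_RANK;

// Sort issues so merge-blocking severities appear first in a report.
function sortBySeverity<T extends { severity: Severity }>(issues: T[]): T[] {
  return [...issues].sort((a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity]);
}

// Blockers and criticals must be fixed before merge; warnings/suggestions do not block.
function mergeReadiness(counts: Record<Severity, number>): "ready" | "fix_required" {
  return counts.blocker + counts.critical > 0 ? "fix_required" : "ready";
}
```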
The following dimensions are used in both Feature Mode and Project Mode reviews. They are ordered by priority tier.
Does the code actually implement what it should? This is the highest-priority dimension.
Check items:
Does the code introduce any security risk?
Check items:
- `.env` file committed — use environment variables + a secret manager

Are all acquired resources properly released? This is especially critical for long-running services.
Check items:
- `addEventListener` without `removeEventListener` on cleanup
- `setInterval`/`setTimeout` without `clearInterval`/`clearTimeout`
- Cleanup not guaranteed via `finally`/`defer`/`using`/`with`
- Missing framework teardown: `useEffect` cleanup, Angular `OnDestroy`, Vue `onUnmounted`, iOS `deinit`

Is the code clear, maintainable, and following project conventions?
Check items:
- Vague names (`data`, `temp`, `obj`, `item`, `info`, `val`, `process()`, `handle()`, `doIt()`, `Manager`, `Util`, `Helper`) — qualified forms like `userData`, `handleClick()`, `ConnectionManager` are fine; follow language ecosystem conventions
- Size/complexity limits (see `CLAUDE.md` for team-specific thresholds)
- Deeply nested `if`/`else`
- `TODO` / `HACK` / `FIXME` comments should include enough context to be actionable later

Does the change fit the project's architectural conventions?
Check items:
Are there obvious performance problems on hot paths?
Check items:
- `await` in loops when `Promise.all` is appropriate

Are critical paths tested? Are tests meaningful?
Check items:
Are errors properly caught, classified, reported, and recovered from?
Check items:
- Catching `Exception` / `Error` / `object` instead of specific types
- Missing `.catch()` on promises, missing error boundaries (React)

Can you debug and monitor this code in production?
Check items:
Does the code follow team and project conventions?
Check items:
Will this change break existing consumers or complicate deployment?
Check items:
Does this change leave the codebase better or worse?
Check items:
Are new or updated dependencies safe and justified?
Check items:
- Versions pinned to `latest` or `*`

Is the UI usable by all users? (Skip this dimension for backend-only projects.)
Check items:
- Missing `aria-label`, `role`, or ARIA states

These rules apply to both Feature Mode and Project Mode reviews:
- Apply Accessibility (D14) only when `project_type` is "frontend" or "fullstack".

@../shared/configuration.md
Parse the user's arguments to determine which mode to use.
**`--github-pr` Flag Provided**

If the user passed `--github-pr` (e.g., `/code-forge:review --github-pr` or `/code-forge:review --github-pr 123`):
→ GitHub PR Mode — Read and follow skills/review/github-pr-workflow.md. Do NOT continue with the steps below.
**`--feedback` Flag Provided**

If the user passed `--feedback` (e.g., `/code-forge:review --feedback` or `/code-forge:review --feedback #123`):
→ Feedback Mode — Read and follow skills/review/feedback-workflow.md. Do NOT continue with the steps below.
If the user provided a feature name (e.g., /code-forge:review user-auth):
→ Feature Mode — go to Step 2F
**`--project` Flag Provided**

If the user passed `--project` (e.g., `/code-forge:review --project`):
→ Project Mode with scope = "full" — go to Step 2P
If no arguments provided:
- Scan `{output_dir}/*/state.json` for all features
- A feature is review-ready once it has at least one "completed" task
- If exactly one candidate feature exists → Feature Mode with `scope = "changes"` automatically
- If no features are found → Project Mode with `scope = "changes"` automatically
- If multiple candidates exist → use `AskUserQuestion` to let the user select
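The dispatch order in Step 1 (GitHub PR, then feedback, then feature name, then project, then auto-detect) can be sketched as follows — the function name and return shape are illustrative, not part of the skill:

```typescript
// Mode dispatch in the same precedence order as Step 1.
type Mode = "github_pr" | "feedback" | "feature" | "project" | "auto";

function determineMode(args: string[]): { mode: Mode; target?: string } {
  if (args.includes("--github-pr")) return { mode: "github_pr" };
  if (args.includes("--feedback")) return { mode: "feedback" };
  if (args.includes("--project")) return { mode: "project" };
  // a bare positional argument is treated as a feature name
  const featureName = args.find((a) => !a.startsWith("--"));
  if (featureName) return { mode: "feature", target: featureName };
  return { mode: "auto" }; // no arguments: auto-detect from state files
}
```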
Set `scope = "changes"`. Load from `{output_dir}/{feature_name}/`:
- `state.json` — task and commit state
- `plan.md` (for acceptance criteria and architecture)

→ Go to Step 3F
Determine the reference level using a fallback chain.
Scan {output_dir}/*/plan.md:
If `plan.md` files are found → planning-backed. Read all `plan.md` files and aggregate:
- Read `state.json` files for progress context
- Set `reference_level = "planning"`

If no planning documents found, scan for upstream documentation:
Search paths (in order):
- `{input_dir}/*.md` — feature specs
- `docs/` directory — PRD, SRS, tech-design, test-plan files

Look for files matching patterns:
- `**/prd.md`, `**/srs.md`, `**/tech-design.md`, `**/test-plan.md`
- `**/features/*.md`
- `.md` files directly under `docs/`

If documentation files found → docs-backed:
- Set `reference_level = "docs"`

If neither planning nor docs found → bare:
- Set `reference_level = "bare"`

From Commits:
Extract all commit hashes from state.json → tasks[].commits:
- Run `git diff` between the earliest and latest commits

From Task Files:
Read all tasks/*.md files and collect their "Files Involved" sections:
Summary:
Before launching the sub-agent, detect the project type to guide dimension selection:
- Frontend indicators: `*.tsx`, `*.jsx`, `*.vue`, `*.svelte`, HTML templates, CSS/SCSS files, or frontend framework config (`next.config.*`, `vite.config.*`, `angular.json`)

Record: `project_type = "frontend" | "backend" | "fullstack" | "library" | "cli" | "unknown"`
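A minimal sketch of this detection heuristic, classifying by observed file extensions. The frontend indicators come from the list above; the backend extension list and the function name are assumptions for illustration:

```typescript
const FRONTEND_HINTS = [".tsx", ".jsx", ".vue", ".svelte", ".css", ".scss"];
const BACKEND_HINTS = [".go", ".rs", ".py", ".java"]; // assumed server-side indicators

type ProjectType = "frontend" | "backend" | "fullstack" | "unknown";

function detectProjectType(files: string[]): ProjectType {
  const hasFrontend = files.some((f) => FRONTEND_HINTS.some((ext) => f.endsWith(ext)));
  const hasBackend = files.some((f) => BACKEND_HINTS.some((ext) => f.endsWith(ext)));
  if (hasFrontend && hasBackend) return "fullstack";
  if (hasFrontend) return "frontend";
  if (hasBackend) return "backend";
  return "unknown"; // "library" and "cli" would need manifest inspection, omitted here
}
```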
Offload to sub-agent to handle the full diff analysis.
Spawn a Task tool call with:
- `subagent_type: "general-purpose"`
- `description: "Review feature: {feature_name}"`

Sub-agent prompt must include:
- The `plan.md` file path

Review dimensions to apply: Follow Dimension Application Rules.
Additionally, always check Plan Consistency (feature mode specific):
- Verify all acceptance criteria in `plan.md` are met
- Flag scope deviations from `plan.md`

Sub-agent must return the following structured format:
REVIEW_SUMMARY:
  overall_rating: <pass | pass_with_notes | needs_changes>
  total_issues: <number>
  blocker_count: <number>
  critical_count: <number>
  warning_count: <number>
  suggestion_count: <number>
  merge_readiness: <ready | fix_required | rework_required>
  dimensions_reviewed: <list of dimension IDs reviewed>

FUNCTIONAL_CORRECTNESS: # D1
  rating: <pass | warning | critical>
  issues:
    - severity: <blocker | critical | warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

SECURITY: # D2
  rating: <pass | warning | critical>
  issues:
    - severity: <blocker | critical | warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

RESOURCE_MANAGEMENT: # D3
  rating: <pass | warning | critical>
  issues:
    - severity: <blocker | critical | warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

CODE_QUALITY: # D4
  rating: <good | acceptable | needs_work>
  issues:
    - severity: <critical | warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

ARCHITECTURE: # D5
  rating: <good | acceptable | needs_work>
  issues:
    - severity: <critical | warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

PERFORMANCE: # D6
  rating: <good | acceptable | needs_work>
  issues:
    - severity: <critical | warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

TEST_COVERAGE: # D7
  rating: <good | acceptable | needs_work>
  coverage_gaps:
    - severity: <critical | warning | suggestion>
      file: path/to/source.ext
      description: <what scenario is untested>

ERROR_HANDLING_AND_OBSERVABILITY: # D8 + D9
  rating: <good | acceptable | needs_work>
  issues:
    - severity: <warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      category: <error_handling | logging | metrics | tracing>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

MAINTAINABILITY_AND_COMPATIBILITY: # D10 + D11 + D12 + D13
  rating: <good | acceptable | needs_work>
  issues:
    - severity: <warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      category: <standards | backward_compat | tech_debt | dependencies>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

ACCESSIBILITY: # D14 (frontend/fullstack only)
  rating: <good | acceptable | needs_work | skipped>
  issues:
    - severity: <warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

PLAN_CONSISTENCY:
  criteria_met: <X/Y>
  unmet_criteria:
    - <criterion not met>
  scope_issues:
    - <unplanned additions or missing planned features>
→ Go to Step 4F
The primary subject of review is the source code itself. Reference documents (plans, specs) serve only as criteria to check against — the sub-agent must deeply read and analyze the actual implementation.
Identify and collect project source files for deep code review. The collection strategy depends on scope (set in Step 1):
If scope = "changes" (default — no arguments or auto-selected):
Identify changed files (primary scope):
- Branch diff: `git diff main...HEAD --name-only`
- Working tree: `git diff HEAD --name-only` + `git diff --cached --name-only` (staged + unstaged)
- Last commit: `git diff HEAD~1 --name-only`
- Exclude: `node_modules/`, `dist/`, `build/`, `.git/`, `vendor/`, `__pycache__/`, the output directory itself

Expand to impact zone (1 level): For each changed file, also include:
- Direct importers (use Grep to find import/require/use statements referencing the changed file)
- Sibling test files (e.g., `foo.test.ts` for `foo.ts`)

Fallback to full scan: Only if no changed files are found (clean repo, no recent commits), fall through to the `scope = "full"` strategy below.
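The one-level expansion can be sketched as follows; the import graph is assumed to have been built already (e.g., by Grep-ing for import statements), and the `.ts`-only test-file convention is an illustrative simplification:

```typescript
// Expand changed files to their one-level impact zone: direct importers + sibling tests.
function expandImpactZone(
  changed: string[],
  importers: Map<string, string[]>, // file -> files that import it
  allFiles: string[]
): string[] {
  const zone = new Set(changed);
  for (const file of changed) {
    for (const dep of importers.get(file) ?? []) zone.add(dep); // direct importers
    const sibling = file.replace(/\.ts$/, ".test.ts"); // e.g. foo.test.ts for foo.ts
    if (allFiles.includes(sibling)) zone.add(sibling);
  }
  return [...zone].sort();
}
```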
If scope = "full" (--project flag):
- Scan source directories (`src/`, `lib/`, `app/`, `pkg/`, or language-specific patterns)
- Exclude: `node_modules/`, `dist/`, `build/`, `.git/`, `vendor/`, `__pycache__/`, the output directory itself

Both modes also collect:
- Manifest files (`package.json`, `Cargo.toml`, `pyproject.toml`, etc.) for dependency review

Same as Step 3F.2 — detect `project_type` to guide dimension selection.
Offload to sub-agent to handle deep source code analysis.
Spawn a Task tool call with:
- `subagent_type: "general-purpose"`
- `description: "Project code review: {project_name}"`

Sub-agent prompt must include:
- The reference level (`planning` / `docs` / `bare`) and associated criteria (if any)

Review dimensions to apply: Follow Dimension Application Rules.
Additionally, apply the appropriate Consistency check based on reference level:
planning-backed → Plan Consistency:
docs-backed → Documentation Consistency:
bare → Skip this dimension. Note in the report: "No reference documents found — consistency check skipped."
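This three-way selection maps directly onto the `type` field of the CONSISTENCY block returned later; a sketch (function name illustrative):

```typescript
type ReferenceLevel = "planning" | "docs" | "bare";
type ConsistencyCheck = "plan_consistency" | "doc_consistency" | "skipped";

// Pick the consistency check matching the detected reference level.
function consistencyCheckFor(level: ReferenceLevel): ConsistencyCheck {
  switch (level) {
    case "planning": return "plan_consistency";
    case "docs": return "doc_consistency";
    case "bare": return "skipped"; // note in report: consistency check skipped
  }
}
```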
Sub-agent must return the following structured format:
All issues MUST reference specific source files and line numbers/ranges.
REVIEW_SUMMARY:
  overall_rating: <pass | pass_with_notes | needs_changes>
  total_issues: <number>
  blocker_count: <number>
  critical_count: <number>
  warning_count: <number>
  suggestion_count: <number>
  merge_readiness: <ready | fix_required | rework_required>
  reference_level: <planning | docs | bare>
  dimensions_reviewed: <list of dimension IDs reviewed>

FUNCTIONAL_CORRECTNESS: # D1
  rating: <pass | warning | critical>
  issues:
    - severity: <blocker | critical | warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

SECURITY: # D2
  rating: <pass | warning | critical>
  issues: [same structure as D1]

RESOURCE_MANAGEMENT: # D3
  rating: <pass | warning | critical>
  issues: [same structure as D1]

CODE_QUALITY: # D4
  rating: <good | acceptable | needs_work>
  issues:
    - severity: <critical | warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

ARCHITECTURE: # D5
  rating: <good | acceptable | needs_work>
  issues: [same structure as D4]

PERFORMANCE: # D6
  rating: <good | acceptable | needs_work>
  issues: [same structure as D4]

TEST_COVERAGE: # D7
  rating: <good | acceptable | needs_work>
  coverage_gaps:
    - severity: <critical | warning | suggestion>
      file: path/to/source.ext
      description: <what scenario is untested>

ERROR_HANDLING_AND_OBSERVABILITY: # D8 + D9
  rating: <good | acceptable | needs_work>
  issues:
    - severity: <warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      category: <error_handling | logging | metrics | tracing>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

MAINTAINABILITY_AND_COMPATIBILITY: # D10 + D11 + D12 + D13
  rating: <good | acceptable | needs_work>
  issues:
    - severity: <warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      category: <standards | backward_compat | tech_debt | dependencies>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

ACCESSIBILITY: # D14 (frontend/fullstack only)
  rating: <good | acceptable | needs_work | skipped>
  issues:
    - severity: <warning | suggestion>
      file: path/to/file.ext
      line: <number or range>
      title: <short title>
      description: <what's wrong and why it matters>
      suggestion: <how to fix>

CONSISTENCY:
  type: <plan_consistency | doc_consistency | skipped>
  rating: <good | acceptable | needs_work | N/A>
  criteria_met: <X/Y> (if applicable)
  unmet_criteria:
    - <criterion not met>
  scope_issues:
    - <unplanned additions or missing documented features>
→ Go to Step 4P
Review results are displayed in the terminal by default — no file is written. This reflects that reviews are iterative, intermediate checks rather than permanent artifacts.
Display the following report directly in the terminal using markdown:
# Code Review: {feature_name}
**Date:** {ISO date}
**Reviewer:** code-forge
**Overall Rating:** {pass | pass_with_notes | needs_changes}
**Merge Readiness:** {ready | fix_required | rework_required}
## Summary
{1-2 paragraph summary of the review findings}
**Issue Breakdown:** {blocker_count} blockers · {critical_count} critical · {warning_count} warnings · {suggestion_count} suggestions
---
## Tier 1 — Must-Fix Before Merge
### Functional Correctness (D1)
**Rating:** {rating}
{issues table with severity/file/line/title/description/suggestion, or "No issues found"}
### Security (D2)
**Rating:** {rating}
{issues or "No security concerns"}
### Resource Management (D3)
**Rating:** {rating}
{issues or "No resource management issues"}
---
## Tier 2 — Should-Fix
### Code Quality (D4)
**Rating:** {rating}
{issues or "No issues found"}
### Architecture & Design (D5)
**Rating:** {rating}
{issues or "No issues found"}
### Performance (D6)
**Rating:** {rating}
{issues or "No issues found"}
### Test Coverage (D7)
**Rating:** {rating}
{coverage gaps or "All scenarios covered"}
---
## Tier 3 — Recommended
### Error Handling & Observability (D8/D9)
**Rating:** {rating}
{issues or "No issues found"}
---
## Tier 4 — Nice-to-Have
### Maintainability & Compatibility (D10–D13)
**Rating:** {rating}
{issues or "No issues found"}
{If frontend/fullstack:}
### Accessibility / i18n (D14)
**Rating:** {rating}
{issues or "Skipped (not a frontend project)"}
---
## Plan Consistency
**Criteria Met:** {X/Y}
{unmet criteria or "All criteria met"}
---
## Recommendations
{Prioritized list of changes, grouped by blocking status:}
**Must fix before merge:**
1. {highest priority fix with file:line reference}
2. ...
**Should fix:**
1. {recommended fix}
2. ...
**Consider for later:**
1. {nice-to-have improvement}
2. ...
## Verdict
{Final assessment: merge as-is, fix blockers/criticals then merge, or needs rework}
**Optional Save (`--save`)**

If the user passed `--save` in the arguments, also write the report to `{output_dir}/{feature_name}/review.md`. Otherwise, do NOT create the file.
→ Go to Step 5F
Use the same report template as Feature Mode (see Step 4F), with these differences:
- Title: `# Project Review: {project_name}` (instead of feature name)
- `**Scope:** {changes (N changed + M related files) | full (N source files)}`
- `**Reference:** {planning-backed | docs-backed | bare (no reference documents)}`
- Consistency section: `## Plan Consistency` with criteria met/unmet (planning-backed), `## Documentation Consistency` with criteria met/unmet (docs-backed), or *No reference documents found — consistency check skipped.* (bare)

**Optional Save (`--save`)**

If the user passed `--save` in the arguments, also write the report to `{output_dir}/project-review.md`. Otherwise, do NOT create the file.
→ Go to Step 5P
Update the feature's `state.json`, adding a `review` field in metadata:
{
  "review": {
    "date": "ISO timestamp",
    "rating": "pass_with_notes",
    "merge_readiness": "fix_required",
    "total_issues": 12,
    "blockers": 0,
    "criticals": 2,
    "warnings": 6,
    "suggestions": 4
  }
}
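As a sketch, this update could look like the following in-memory merge. The field names inside `review` come from the JSON above, while the helper name and the `updated` key are assumptions:

```typescript
type ReviewMeta = {
  date: string;
  rating: string;
  merge_readiness: string;
  total_issues: number;
  blockers: number;
  criticals: number;
  warnings: number;
  suggestions: number;
  report?: string; // present only when --save was used
};

function attachReview(
  state: Record<string, unknown>,
  review: ReviewMeta,
  saved: boolean
): Record<string, unknown> {
  const entry: ReviewMeta = saved ? { ...review, report: "review.md" } : review;
  // "updated" is an assumed name for the state file's refresh timestamp
  return { ...state, review: entry, updated: review.date };
}
```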
- If `--save` was used, also include `"report": "review.md"` in the review object
- Refresh the `state.json` updated timestamp

→ Go to Step 6
Project mode does not update any state.json — there is no single feature state to track.
→ Go to Step 6
Display:
Code Review Complete: {feature_name}
Rating: {overall_rating}
Merge Readiness: {merge_readiness}
Issues: {total_issues} ({blocker_count} blockers, {critical_count} critical, {warning_count} warnings, {suggestion_count} suggestions)
{If --save was used:}
Report saved: {output_dir}/{feature_name}/review.md
{If needs_changes (blockers or criticals > 0):}
🚫 Merge blocked — fix these first:
1. {highest priority blocker/critical with file:line}
2. {next priority fix}
...
Fix all: /code-forge:fixbug --review
Re-review: /code-forge:review {feature_name}
{If pass_with_notes (warnings > 0, no blockers/criticals):}
⚠ Merge OK with notes — consider fixing:
1. {top warning}
2. ...
Fix all: /code-forge:fixbug --review
{If pass:}
✅ Ready for next steps:
/code-forge:status {feature_name} View final status
Create a Pull Request
Tip: use --save to persist the review report to disk
Display:
Project Review Complete: {project_name}
Rating: {overall_rating}
Merge Readiness: {merge_readiness}
Reference: {planning-backed (N plans) | docs-backed (N documents) | bare}
Issues: {total_issues} ({blocker_count} blockers, {critical_count} critical, {warning_count} warnings, {suggestion_count} suggestions)
{If --save was used:}
Report saved: {output_dir}/project-review.md
{If needs_changes (blockers or criticals > 0):}
🚫 Issues require attention:
1. {highest priority blocker/critical with file:line}
2. {next priority fix}
...
Fix all: /code-forge:fixbug --review
Re-review: /code-forge:review --project
{If pass_with_notes (warnings > 0, no blockers/criticals):}
⚠ Project quality acceptable with notes — consider fixing:
1. {top warning}
2. ...
Fix all: /code-forge:fixbug --review
{If pass:}
✅ Project quality looks good.
Tip: use --save to persist the review report to disk