This skill should be used when a command needs the full implement-and-review cycle — spawning an implementation worker, running a code review gate, and making a post-review decision. Used by /forge:start-task and /implement. Do not use for review-only or implementation-only workflows.
From forge: `npx claudepluginhub flox/forge-plugin --plugin forge`. This skill uses the workspace's default tool permissions.
Thin dispatcher pattern for the implement-then-review pipeline. The main context manages control flow and reads structured signals from sub-agents. All analysis and synthesis happens inside sub-agents, not here.
Commands load this skill when they need to:

- Spawn an implementation worker and read its completion signal
- Run a code review gate on the resulting PR
- Make a post-review decision

Commands that use this skill:

- `/forge:start-task`
- `/forge:implement`

## Step 1: Spawn the implementation worker

Spawn the implementation worker and read its completion signal. The worker creates a draft PR and returns a structured report.
```
Use Task tool with:
- subagent_type: "Implementation Worker"
- mode: "bypassPermissions"
- prompt: |
    ## Inputs
    {ticket/change context from calling command}
```

Read signal from worker:

```
{ pr_url, commit_sha, status: completed|blocked }
```

- `status: blocked`: report the blocker to the user and stop.
- `status: completed`: proceed to Step 2.

## Step 2: Code review gate

Three paths based on change characteristics. The calling command selects the path; this skill documents each.
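The path selection the calling command performs can be sketched as a small function. The parameter names and the return values are illustrative assumptions; only the thresholds (<=50 lines, risk, new patterns, the experimental teams flag) come from this skill:

```python
def select_review_path(changed_lines: int,
                       security_sensitive: bool,
                       new_architecture: bool,
                       teams_enabled: bool = False,
                       user_opted_in: bool = False) -> str:
    """Pick a review path from the change characteristics.

    Mirrors this skill's criteria: <=50 lines and low risk -> Path A;
    larger or riskier changes -> Path B; Path C only when the
    experimental teams flag is on and the user opted in.
    """
    if teams_enabled and user_opted_in:
        return "C"  # agent team (experimental)
    if changed_lines <= 50 and not security_sensitive and not new_architecture:
        return "A"  # single sequential reviewer
    return "B"      # 3 parallel lenses + synthesis
```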
### Path A: Single reviewer (sequential)

For changes <=50 lines, low risk, well-established patterns. Spawn a single code-reviewer in sequential mode.
```
Use Task tool with:
- subagent_type: "Code Reviewer"
- mode: "bypassPermissions"
- prompt: |
    ## Inputs
    - pr_url: {from Step 1 signal}
    - project: {project}
    - feature_path: {feature_path}

    Execute the review and post results to the PR.
```
`bypassPermissions` is required: the reviewer uses shell commands to post the review to GitHub and to retrieve the project SHA for the signature. Without `bypassPermissions`, shell calls are auto-denied in non-interactive sessions.
Read signal from reviewer:

```
{ findings: [...], severity_max, passed }
```
### Path B: Multi-lens parallel review

For changes >50 lines, security-sensitive areas, or new architectural patterns. Spawn 3 lens sub-agents in a single message, then synthesize.
Spawn 3 lenses in one message (parallel):

```
# Lens A: Security & Correctness
Use Task tool with:
- subagent_type: "general-purpose"
- prompt: |
    Review PR {pr_url} through the security and correctness
    lens. Check for: injection vectors, auth bypasses,
    data validation gaps, logic errors, race conditions.
    Return structured findings with severity labels.

# Lens B: Performance & Architecture
Use Task tool with:
- subagent_type: "general-purpose"
- prompt: |
    Review PR {pr_url} through the performance and
    architecture lens. Check for: N+1 queries,
    unbounded allocations, layering violations.
    Return structured findings with severity labels.

# Lens C: Conventions & Tests
Use Task tool with:
- subagent_type: "general-purpose"
- prompt: |
    Review PR {pr_url} through the conventions and testing
    lens. Check for: style violations, missing tests,
    test quality, naming conventions, doc gaps.
    Return structured findings with severity labels.
```
After all 3 complete, spawn a synthesis sub-agent:

```
Use Task tool with:
- subagent_type: "general-purpose"
- mode: "bypassPermissions"
- prompt: |
    Synthesize these three code review perspectives
    into a single review post for PR {pr_url}.

    ## Lens A: Security & Correctness
    {paste Lens A output}

    ## Lens B: Performance & Architecture
    {paste Lens B output}

    ## Lens C: Conventions & Tests
    {paste Lens C output}

    Deduplicate findings, assign final severity,
    and post the synthesized review to the PR.
    Include the project signature.
```
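The "deduplicate findings, assign final severity" instruction can be sketched as below. The finding fields (`file`, `line`, `message`, `severity`) and the severity names are illustrative assumptions, not a schema this skill mandates:

```python
# Rank assumed severity labels so the highest one wins on collision.
SEVERITY_RANK = {"minor": 0, "important": 1, "critical": 2}

def merge_findings(*lens_outputs):
    """Merge findings from several review lenses.

    Findings pointing at the same location with the same message
    collapse into one, keeping the highest severity any lens assigned.
    """
    merged = {}
    for findings in lens_outputs:
        for f in findings:
            key = (f["file"], f["line"], f["message"])
            if key not in merged or \
               SEVERITY_RANK[f["severity"]] > SEVERITY_RANK[merged[key]["severity"]]:
                merged[key] = f
    return list(merged.values())
```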
Read signal from synthesis:

```
{ findings: [...], severity_max, passed }
```
### Path C: Agent team (experimental)

When CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS is enabled and the user opts in. Create a team with 3 lens teammates that can cross-pollinate findings in real time.
Prompt the user before using teams:

```
question: "Use agent team for richer code review?
           Teammates can challenge each other's findings."
options:
  - "Use a team (Recommended)"
  - "Standard parallel mode"
```
If user selects teams, create a team with one teammate per lens. Otherwise fall back to Path B.
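The opt-in gate can be sketched as follows. The environment-variable name and option strings come from this skill; the return values are illustrative:

```python
import os

def choose_review_mode(user_choice: str, env=os.environ) -> str:
    """Return "team" only when the experimental flag is set AND the
    user picked the team option; otherwise fall back to Path B."""
    if env.get("CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS") and \
       user_choice == "Use a team (Recommended)":
        return "team"
    return "parallel"  # standard Path B fallback
```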
## Step 3: Post-review decision

Read the review signal and decide:

- **Critical or Important findings exist:** report the findings to the user and keep the PR in draft.
- **Minor or no findings:** mark the PR ready for review:

```
gh pr ready {pr_number}
```

## Context discipline

The main context (command) acts as a thin dispatcher.
All analytical work happens inside sub-agents. The main context budget for this pipeline is ~100 lines of signal handling.
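That signal handling can be sketched as a single function. The signal fields match the review signal shown earlier and the `gh pr ready` call is the one used above; the return values and the injectable `run` parameter are illustrative assumptions:

```python
import subprocess

def post_review_decision(signal: dict, pr_number: int, run=subprocess.run):
    """Dispatch on the review signal: hold the draft PR on Critical or
    Important findings, otherwise mark it ready for review."""
    if signal["severity_max"] in ("critical", "important"):
        # Report findings to the user; the PR stays in draft.
        return ("hold", signal["findings"])
    run(["gh", "pr", "ready", str(pr_number)], check=True)
    return ("ready", [])
```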
This skill is self-contained. It does not reference or depend on any other orchestration skill. Commands may load this skill sequentially after other orchestration skills complete, but that is command-level sequencing, not orchestration nesting.