Run codebase analysis via Analyst agent. Triggers: "forge analyze", "analyze codebase", "code analysis", "architecture analysis", "impact analysis", "dependency trace", and first-principles redesign requests that need an assumption audit plus execution plan.
npx claudepluginhub cjy5507/forge --plugin forge

This skill uses the workspace's default tool permissions.
<Purpose>
This skill must produce a durable analysis artifact and analysis metadata, not just an ephemeral chat answer.
When the user is stuck on obvious solutions, the analysis must follow a harness-engineering shape rather than a generic review.
Forge does not bundle codebase-memory-mcp servers in the package manifest. If the active host does not provide them, the Analyst must downgrade to direct file/module inspection with explicit confidence notes instead of pretending graph-backed precision.
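That downgrade path can be sketched roughly as follows. The tool names in GRAPH_TOOLS and the shape of availableTools are illustrative assumptions, not part of the Forge contract:

```javascript
// Decide how the Analyst should gather evidence, based on which
// graph-backed MCP tools the active host actually exposes.
const GRAPH_TOOLS = ['get_architecture', 'search_graph', 'trace_call_path'];

function chooseEvidenceMode(availableTools) {
  const hasGraph = GRAPH_TOOLS.every((t) => availableTools.includes(t));
  if (hasGraph) {
    return { mode: 'graph-backed', confidence: 'high' };
  }
  // codebase-memory-mcp is absent: downgrade to direct file/module
  // inspection and say so explicitly, instead of implying graph precision.
  return {
    mode: 'direct-inspection',
    confidence: 'low',
    note: 'codebase-memory-mcp unavailable; findings based on file-level reads only',
  };
}
```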
<Use_When>
<Do_Not_Use_When>
Ask the user a question only if the analysis target is materially ambiguous and the result would otherwise point at the wrong system, workflow, or failure mode.
Allowed examples:
Not allowed:
If ambiguity is internal rather than customer-owned, document the assumption and continue.
Based on the user's request, select one or more:
| Request | Analysis Type | Primary Tool |
|---|---|---|
| "architecture analysis", "map the codebase" | Architecture Mapping | get_architecture, search_graph |
| "what breaks if I change X?", "impact analysis" | Impact Analysis | trace_call_path, detect_changes |
| "dependency trace", "who calls X?" | Dependency Tracing | trace_call_path, query_graph |
| "code quality", "dead code", "complexity" | Quality Assessment | search_graph (degree filters) |
| "problem is obvious", "rethink this", "redesign from first principles" | First-Principles Redesign | Analyst + repo/runtime evidence |
| "design improvement", "UX improvement", "redesign", "improve the flow" | Design-Improvement Analysis | Analyst + repo/runtime evidence |
| "why does this harness keep asking / stopping / under-using agents?" | Behavioral Audit | Runtime + event evidence |
| No specific request | Architecture Mapping (default) | get_architecture |
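As a rough sketch, the routing in the table above could look like the following. The keyword lists are illustrative only, not the actual trigger matching Forge performs:

```javascript
// Map a free-form request to an analysis type. Order matters: more
// specific routes are checked first, and anything unmatched falls back
// to Architecture Mapping, the documented default.
const ROUTES = [
  { type: 'impact', keywords: ['impact analysis', 'what breaks'] },
  { type: 'dependency', keywords: ['dependency trace', 'who calls'] },
  { type: 'quality', keywords: ['code quality', 'dead code', 'complexity'] },
  { type: 'first-principles', keywords: ['first principles', 'rethink', 'problem is obvious'] },
  { type: 'design-improvement', keywords: ['redesign', 'ux improvement', 'improve the flow'] },
  { type: 'behavioral-audit', keywords: ['harness', 'keeps asking', 'under-using agents'] },
  { type: 'architecture', keywords: ['architecture analysis', 'map the codebase'] },
];

function selectAnalysisType(request) {
  const text = request.toLowerCase();
  const hit = ROUTES.find((r) => r.keywords.some((k) => text.includes(k)));
  return hit ? hit.type : 'architecture';
}
```

Note that "redesign from first principles" must route to first-principles, not design-improvement, which is why the first-principles route is checked earlier.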
Check if codebase-memory-mcp has current data:
Before choosing the analysis path, inspect whether the current graph is sufficient for the requested mode.
If the graph is stale or incomplete, use the search_code and LSP/grep fallback.

Dispatch the Analyst agent (forge:analyst) with the selected analysis type:
Agent(subagent_type="forge:analyst", prompt="Run {analysis_type} on {target}")
The Analyst uses codebase-memory-mcp tools:
The Analyst agent MUST emit a single JSON block conforming to
.forge/contracts/analyst-report.ts (the AnalystReport discriminated union).
No free-form markdown templates. The shape is mechanically enforced by
scripts/lib/forge-analyst-schema.mjs::validateAnalystReport, which is called
in Step 5 before the record is accepted — if the payload does not match the
contract, record-analysis exits non-zero and you must re-dispatch the
Analyst with the error message so it can fix its output.
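That dispatch-validate-retry loop can be sketched as below. Everything except the idea of calling validateAnalystReport is a hypothetical harness, assuming the validator returns an ok flag and an error string:

```javascript
// Dispatch the Analyst, validate its JSON payload, and re-dispatch with
// the validator's error message until the payload conforms, with a
// bounded number of attempts so a broken Analyst cannot loop forever.
async function runAnalystWithValidation(dispatch, validate, maxAttempts = 3) {
  let prompt = 'Run the requested analysis';
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const payload = await dispatch(prompt);
    const result = validate(payload); // e.g. validateAnalystReport
    if (result.ok) return payload;
    // Feed the exact validation error back so the Analyst can fix its output.
    prompt = `Previous output failed AnalystReport validation: ${result.error}. Emit a corrected JSON block.`;
  }
  throw new Error('AnalystReport validation failed after retries');
}
```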
Envelope fields (required on every kind):
- version: '1'
- kind: one of architecture | impact | dependency | quality | first-principles | design-improvement | behavioral-audit
- generated_at (ISO 8601), target, graph_health, confidence, risk_level, locale, summary
- body: kind-specific object (see contract for required fields per kind)
- recommendations: array of { priority, action, owner_role? }

Analyses with design, workflow, or problem-framing weight still need the first-principles reframe plus execution-plan substance. That substance lives inside body for the first-principles and design-improvement kinds (assumptions, essence, inverted_design, action_plan, verification, etc.). Do not re-format it as markdown before saving; downstream consumers read the JSON directly.
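For orientation, a minimal payload satisfying the envelope might look like this. All field values are invented for illustration, and the final check is a naive field-presence scan, not the real validateAnalystReport; the authoritative shape is .forge/contracts/analyst-report.ts:

```javascript
// Illustrative AnalystReport envelope; body fields vary per kind.
const exampleReport = {
  version: '1',
  kind: 'impact',
  generated_at: '2025-01-01T00:00:00Z', // ISO 8601
  target: 'src/auth/session.mjs',       // hypothetical target
  graph_health: 'fresh',
  confidence: 'high',
  risk_level: 'medium',
  locale: 'en',
  summary: 'Changing session.mjs affects three call paths in the auth flow.',
  body: { affected_paths: [] }, // kind-specific; see the contract
  recommendations: [
    { priority: 'high', action: 'Add regression tests before refactoring', owner_role: 'developer' },
  ],
};

// Naive envelope check: every required field must be present.
const REQUIRED = ['version', 'kind', 'generated_at', 'target', 'graph_health',
  'confidence', 'risk_level', 'locale', 'summary', 'body', 'recommendations'];
const missing = REQUIRED.filter((k) => !(k in exampleReport));
```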
If a Forge project is active (.forge/state.json exists):
Save the emitted JSON (extractJson handles both raw and fenced forms) to the kind-appropriate artifact path:

- .forge/design/codebase-analysis.md for architecture / impact / dependency / quality / first-principles
- .forge/design/ux-analysis.md for design-improvement
- .forge/evidence/behavioral-audit.md for behavioral-audit

Then record it:

node scripts/forge-lane-runtime.mjs record-analysis --type <kind> --target <target> --artifact <artifact-path> --locale <ko|en|ja|zh> --graph-health <health> --confidence <level> --risk <level> --summary "<summary>"

If the runtime rejects the record (record-analysis rejected for ... AnalystReport v1 validation failed: ...), re-dispatch the Analyst with the error so it can correct the payload. Legacy free-form markdown is explicitly rejected by the validator.

Run node scripts/forge-lane-runtime.mjs analysis-status --json when checking freshness before design/develop/fix.

If the kind was design-improvement and the validated record was accepted, immediately hand off to forge:design in UX-opening mode.

<Tool_Usage>
<State_Changes>
.forge/design/codebase-analysis.md, .forge/design/ux-analysis.md, or .forge/evidence/behavioral-audit.md