From code-forge
Debug and fix bugs with interactive upstream trace-back — diagnoses root cause level, confirms upstream document updates, and applies TDD fixes. Supports --repos flag for parallel bug fixing across multiple repositories.
npx claudepluginhub tercel/tercel-claude-plugins --plugin code-forge

This skill uses the workspace's default tool permissions.
@../shared/execution-entrypoint.md
For this skill: start at Step 0.1 (Multi-Repo Detection), then Step 0.5. If you catch yourself about to say "falling back to manual fix", STOP and go to the indicated step.
Systematically debug and fix bugs with interactive trace-back to upstream documents (task descriptions, plans, requirements).
/code-forge:fix login page returns 500 error # Diagnose and fix from description
/code-forge:fix @issues/bug-123.md # Fix from bug report file
/code-forge:fix --review user-auth # Batch-fix all issues from review report
/code-forge:fix --review # Auto-detect review report and fix
/code-forge:fix "connection pool exhaustion" --repos ~/api-python ~/api-typescript ~/api-rust
Bug Input → Context Scan → Feature Association → Root Cause → Trace-back → TDD Fix → Doc Sync → Summary
@../shared/configuration.md
If $ARGUMENTS does not contain --repos, skip this step and continue below.
Note: --review and --repos cannot be combined. If both are present, show error: "Cannot combine --review with --repos. Review reports are per-repo — run /code-forge:fix --review in each repo separately."
If --repos is present (without --review): first, locate the skill installation directory by finding this SKILL.md file's parent (use Glob for **/skills/fix/SKILL.md if needed). Then dispatch Agent(subagent_type="general-purpose", description="Fix --repos coordinator") with prompt containing the resolved absolute paths:
Read these two files and follow them exactly:
- {resolved_absolute_path}/skills/shared/multi-repo.md — the protocol steps MR-1~MR-5
- {resolved_absolute_path}/skills/fix/multi-repo-defs.md — the fix-specific definitions

User arguments: $ARGUMENTS
Then stop — do not continue to Step 0.5.
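The routing rule above can be sketched as a small pure function (a hypothetical helper for illustration; the skill itself inspects $ARGUMENTS directly):

```python
def route_arguments(arguments: str) -> str:
    """Decide how Step 0.1 routes the raw $ARGUMENTS string."""
    tokens = arguments.split()
    has_repos = "--repos" in tokens
    has_review = "--review" in tokens
    if has_repos and has_review:
        return "error"        # mutually exclusive: review reports are per-repo
    if has_repos:
        return "multi-repo"   # dispatch the coordinator sub-agent, then stop
    return "single-repo"      # skip Step 0.1 and continue to Step 0.5
```

--review without --repos falls through to "single-repo" because review mode is handled later, in Step 1R.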
Before diagnosing, understand the project's architecture and tech stack:
@../shared/project-analysis.md
Execute PA.1 (Project Profile) and PA.2 (Architecture Analysis). This context informs the diagnosis and fix steps that follow.
Accept input in three modes:
- Prompt text (e.g., /code-forge:fix login page returns 500 error) — use the text as the bug description
- File reference (e.g., /code-forge:fix @issues/bug-123.md) — read the file for bug details
- Review mode (/code-forge:fix --review [feature-name]) — read the review report and batch-fix all issues. See Step 1R below.

Before treating input as prompt text, check whether it looks like a file/directory path missing the @ prefix. If the input does NOT start with @ and is NOT --review, but matches ANY of these patterns, it is likely a path the user forgot to prefix with @:
- Contains / (e.g., issues/bug-123.md, ../bugs/report.md)
- Starts with ./ (e.g., ./bug-report.md)
- Ends with .md (e.g., bug-123.md)

Action: Use AskUserQuestion:
Your input looks like a file path: "{input}"
Did you mean to use file mode? (file paths require an @ prefix)
- Yes — prepend @ and treat as file reference
- No — treat as literal bug description text

If no input provided, use AskUserQuestion to ask: "Describe the bug you encountered."
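A sketch of this heuristic as a predicate (the single-token restriction is an assumption — multi-word input is treated as prose, which the source does not state explicitly):

```python
def looks_like_forgotten_path(text: str) -> bool:
    """Step 1 heuristic: does bare prompt text look like a file path missing '@'?"""
    t = text.strip()
    if t.startswith("@") or t.startswith("--review") or " " in t:
        return False  # explicit file mode, review mode, or multi-word prose
    return "/" in t or t.startswith("./") or t.endswith(".md")
```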
For prompt text and file reference modes, store the bug description and continue to Step 2.
For --review mode, go to Step 1R.
Locate and parse the review report. The review report lives in the current conversation context (displayed by a prior /code-forge:review run). A saved file on disk is the fallback.
Source priority:
Conversation context (primary): Look for the most recent review report output in the current conversation. The report starts with # Code Review: or # Project Review: and contains the structured dimension sections. If found, use it directly.
Saved file (fallback): Only if no review report exists in the current conversation:
- With a feature name (--review my-feature): read {output_dir}/{feature_name}/review.md
- Without a feature name (--review): search {output_dir}/project-review.md first, then scan {output_dir}/*/review.md; if multiple reports are found, use AskUserQuestion to let the user select
- Neither found: show error — "No review report found in this session. Run /code-forge:review first, or /code-forge:review --save to persist to disk."
Parse issues from the report:
- Include blocker, critical, and warning severities
- Skip suggestion-level issues (these are optional improvements, not bugs)

Present issue summary and confirm:
Display:
Review Report: {feature_name or "project"}
Found {N} issues to fix:
Blockers: {count}
Criticals: {count}
Warnings: {count}
Issues:
1. [{severity}] {file}:{line} — {title}
2. [{severity}] {file}:{line} — {title}
...
Use AskUserQuestion: "Fix all {N} issues? Or enter issue numbers to fix selectively (e.g., 1,3,5)."
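The severity filter and the "1,3,5" selection can be sketched together (the issue dict shape is an assumption):

```python
FIXABLE = {"blocker", "critical", "warning"}  # suggestion-level issues are skipped

def select_issues(issues: list[dict], selection: str = "") -> list[dict]:
    """Keep fixable severities, then apply an optional '1,3,5'-style pick."""
    fixable = [i for i in issues if i["severity"] in FIXABLE]
    if not selection.strip():
        return fixable  # "fix all"
    picked = {int(n) for n in selection.split(",")}
    # Numbers are 1-based, matching the summary list shown to the user.
    return [issue for n, issue in enumerate(fixable, 1) if n in picked]
```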
Batch execution:
For each selected issue (or group of issues in the same file):
- Use the issue's title + description + suggestion as the bug description
- The report already supplies file, line, description, and suggestion — use these directly as the diagnosis. Only spawn a diagnostic sub-agent (Step 4) if the review's suggestion is vague or the fix is non-obvious.
- Apply the TDD fix (using the suggestion as guidance)
- Commit with message fix: {issue title} ({file}:{line})
- Report progress: ✅ [{i}/{N}] Fixed: {title} ({file}:{line})
Update state.json (if associated with a feature):
Add a single aggregated fix record to the fixes array:
{
"bug": "review fixes — {N} issues",
"root_cause_level": 1,
"root_cause": "Code-level issues identified by review",
"fixed_files": ["path/to/file1.ext", "path/to/file2.ext"],
"commits": ["abc1234", "def5678"],
"doc_updates": [],
"fixed_at": "ISO timestamp"
}
Display batch summary:
Review Fixes Complete: {feature_name or "project"}
Fixed: {fixed_count}/{total_count} issues
Blockers: {fixed_blockers}/{total_blockers}
Criticals: {fixed_criticals}/{total_criticals}
Warnings: {fixed_warnings}/{total_warnings}
{If any failed:}
⚠ Could not fix:
- {title} ({file}:{line}) — {reason}
Commits:
{commit hashes}
Next steps:
/code-forge:review {feature_name} Re-run review to verify fixes
/code-forge:status {feature_name} View updated progress
Review mode ends here — do NOT continue to Step 2.
Scan the project codebase to understand the context:
- Use Glob to get a project structure overview
- Use Grep to search the codebase for keywords from the bug description

Store a concise context summary (~500 words).
Attempt to associate the bug with an existing code-forge feature:
- Scan {output_dir}/*/state.json and .code-forge/tmp/*/state.json for all features
- Match the bug's affected files against each feature's file lists (tasks/*.md → "Files Involved" sections)
- On a match, load plan.md and the relevant tasks/*.md as additional context. Note the feature name.

If multiple features match, use AskUserQuestion to let user select the most relevant one.
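A minimal sketch of the matching step, assuming the per-feature file lists have already been gathered from state.json and the tasks/*.md "Files Involved" sections (the dict shape is illustrative):

```python
def associate_feature(bug_files: list[str], features: dict[str, list[str]]) -> list[str]:
    """Return names of features whose file lists overlap the bug's suspect files."""
    suspects = set(bug_files)
    return [name for name, files in features.items() if suspects & set(files)]
```

One match means the bug is feature-associated; several trigger AskUserQuestion; none means a standalone bug.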
Offload to sub-agent for deep analysis.
Spawn an Agent tool call with:
- subagent_type: "general-purpose"
- description: "Diagnose bug root cause"

Sub-agent prompt must include:
Root cause levels:
| Level | Description | Example |
|---|---|---|
| 1 | Code bug | Logic error, boundary miss, typo, wrong variable |
| 2 | Incomplete task description | Task.md steps missing a case, wrong acceptance criteria |
| 3 | Plan design flaw | Architecture doesn't handle a scenario, missing component |
| 4 | Incomplete requirement doc | Original feature doc missing a requirement |
Sub-agent must return:
ROOT_CAUSE_LEVEL: <1-4>
ROOT_CAUSE_SUMMARY: <1-2 sentence description>
AFFECTED_FILES:
- path/to/file.ext: <what's wrong>
UPSTREAM_DOCS_AFFECTED: (only if Level >= 2)
- Level 2: tasks/<task>.md — <what's missing/wrong>
- Level 3: plan.md — <what's missing/wrong>
- Level 4: {input_dir}/<feature>.md — <what's missing/wrong>
PROPOSED_FIX: <brief fix description>
REGRESSION_TEST: <what test to write>
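The coordinator then needs to read those fields back. A sketch of the parsing (only the two scalar fields are shown; the list sections would need a small state machine):

```python
def parse_diagnosis(report: str) -> dict:
    """Extract key scalar fields from the sub-agent's structured reply."""
    out: dict = {}
    for line in report.splitlines():
        if line.startswith("ROOT_CAUSE_LEVEL:"):
            out["level"] = int(line.split(":", 1)[1].strip())
        elif line.startswith("ROOT_CAUSE_SUMMARY:"):
            out["summary"] = line.split(":", 1)[1].strip()
    # Levels 2-4 trigger the upstream trace-back in Step 5.
    out["needs_traceback"] = out.get("level", 1) >= 2
    return out
```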
This step only runs if Level >= 2 AND the bug is associated with a feature.
Present the root cause analysis to the user, then confirm upstream updates level by level.
Display:
Root Cause Analysis
Level: {level} — {level_description}
Summary: {root_cause_summary}
Affected code:
{affected_files list}
Upstream documents affected:
{upstream_docs_affected list}
For each affected upstream level (from lowest to highest), use AskUserQuestion:
"Root cause traced to {level_name}: {doc_path} — {what's wrong}. Update this document?"
Options: Yes — update the document / No — leave it as-is (code fix only)
Compile the fix plan from the diagnosis and any confirmed upstream document updates.
Execute the fix following TDD methodology.
Mandatory before 6.1: Apply the design-first discipline. Bug fixes are the most common site of patch-soup development — adding a special case to compensate for buggy logic, instead of fixing the underlying logic, is the textbook "bug-fix epicycle" anti-pattern. Read the affected subsystem fully, understand why the bug exists, and determine whether the right fix is a localized correction or a small refactor of the surrounding code. Do not add an if special_case: branch when the underlying logic is wrong — fix the logic. Public interfaces remain stable throughout. The full discipline:
@../shared/design-first.md
Write a test that reproduces the bug:
Run the test to verify it fails.
Make the minimal code changes to fix the bug — but "minimal" means "minimal correct", not "minimal lines". A two-line patch that papers over a broken function is worse than a ten-line refactor that fixes the function. If the design-first checklist (above) revealed that the bug stems from a structural issue, fix the structure rather than adding compensating code around it. If you choose a localized patch over a refactor, briefly note in the commit message why the refactor was deferred so future-you can revisit.
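A toy illustration of the distinction, with a hypothetical cart-total bug (not from the source):

```python
# Symptom: wrong total for an empty cart.
# Patch-soup fix: `if not items: return 0.0` bolted on above broken math.
# Minimal *correct* fix: repair the aggregation so the edge case falls out naturally.
def cart_total(items: list[dict], discount: float = 0.0) -> float:
    """Sum line prices, then apply one discount to the whole cart."""
    subtotal = sum(i["price"] * i["qty"] for i in items)  # sum over [] is 0, no special case
    return subtotal * (1.0 - discount)
```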
Run the regression test to verify it passes.
Run the project's full test suite to ensure no regressions:
Commit the code changes with a descriptive message:
fix: {brief description of bug fix}
This step only runs if upstream documents were confirmed for update in Step 5.
For each confirmed document update:
Before modifying each document, show the proposed changes in diff format:
--- a/{doc_path}
+++ b/{doc_path}
@@ ... @@
- old content
+ new content
Ask user: "Apply this change?" (Yes / No / Edit manually)
Commit upstream document changes separately from code fix:
docs: update {doc_path} — traced from bug fix
If the bug is associated with a feature:
- Open the feature's state.json
- Add a fixes array (if not present) to the feature-level metadata
- Append a fix record:

{
"bug": "brief bug description",
"root_cause_level": 2,
"root_cause": "brief root cause",
"fixed_files": ["path/to/file.ext"],
"commits": ["abc1234"],
"doc_updates": ["tasks/auth-logic.md"],
"fixed_at": "ISO timestamp"
}
- Refresh the state.json updated timestamp

If standalone bug (no associated feature): skip this step.
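The record append can be sketched as a pure update on the loaded state dict (the "updated" field name is an assumption):

```python
def append_fix(state: dict, record: dict) -> dict:
    """Step 8 sketch: push a fix record onto state.json's fixes array."""
    state.setdefault("fixes", []).append(record)  # create the array if absent
    state["updated"] = record["fixed_at"]         # refresh the timestamp
    return state
```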
Display fix summary:
Bug Fix Complete
Bug: {description}
Root Cause: Level {level} — {summary}
Code Changes:
{files changed}
Regression Test:
{test file and name}
Document Updates:
{list of updated docs, or "none"}
Commits:
{commit hashes}
Next steps:
/code-forge:status {feature} View updated progress
/code-forge:review {feature} Review all changes