From code-forge
Analyze documentation (or a prompt) and generate an implementation plan with task breakdown, TDD steps, and progress tracking. Use when breaking down a feature, creating tasks from docs or requirements, planning implementation work, or turning a spec into actionable steps.
npx claudepluginhub tercel/tercel-claude-plugins --plugin code-forge

This skill uses the workspace's default tool permissions.
@../shared/execution-entrypoint.md
For this skill: start at Step 0 (Configuration), then Step 0.5 (Project Analysis), then Step 1. If you catch yourself about to say "falling back to manual planning", STOP and go to the indicated step.
Generate an implementation plan from a feature document or a requirement prompt.
ALL OUTPUT GOES INTO {output_dir}/{feature-name}/ AS SEPARATE FILES — overview.md, plan.md, tasks/*.md, state.json.
| Thought | Reality |
|---|---|
| "I'll put everything in one plan.md for simplicity" | Multi-file structure is how impl/status/review find individual tasks. One file breaks all downstream skills. |
| "docs/plan is close enough" | Output dir is {output_dir} (default: planning/). docs/plan, docs/plans are ALL wrong. |
| "I'll create the tasks inline in plan.md" | Tasks go in tasks/{name}.md as separate files. Step 8 sub-agent creates them. |
| "Numeric prefixes help with ordering" | Execution order is in overview.md and state.json. Files are setup.md, not 01-setup.md. |
| "I can skip state.json" | state.json drives impl, status, fix. Without it, no downstream skill works. |
| "The overview files are optional" | Both project-level and feature-level overview.md are mandatory outputs. |
| "The input looks like a path but has no @, I'll treat it as a prompt" | Run the Step 2.0 path-like input guard. Paths without @ are almost always user mistakes — ask before proceeding. |
| "I'll add FE-01- prefixes to feature directories for clarity" | Feature directory names must match the source filename exactly in kebab-case. core-dispatcher, not FE-01-core-dispatcher. |
| "I'll generate all features as flat files in one directory" | Each feature gets its own subdirectory with the full multi-file structure. Flat files break all downstream skills. |
| "Step 4.5 reuse discovery is optional, the user just wants the plan" | Step 4.5 is mandatory. Skipping it is the #1 cause of bloat across spec-forge / code-forge / apcore-skills workflows — the planner ends up generating tasks that recreate utilities, helpers, and entire subsystems that already exist. |
| "I'll just trust the LLM to know what already exists" | The LLM does not know. It must Grep and Read the actual project. Step 4.5 forces this empirically. |
Flag: --tmp avoids adding plan files to the project (writes to .code-forge/tmp/, auto-gitignored).

Pipeline: Input → Analysis → Reuse Discovery → Planning → Task Breakdown → Status Tracking
Steps 4, 4.5, 7, and 8 are offloaded to sub-agents via the Agent tool to prevent context window exhaustion on large projects. The main context retains only concise summaries returned by each sub-agent, while full document analysis, reuse discovery, file generation, and code implementation happen in isolated sub-agent contexts that are discarded after completion.
Actual execution order: Steps 0 through 13, in sequential order.
Step 9 (overview.md) executes after Steps 7 and 8 because it references task files generated by those steps.
@../shared/configuration.md
Step 0.5: Project Analysis
Before planning, understand the project's architecture and tech stack. Read and execute:
@../shared/project-analysis.md
Execute PA.1 (Project Profile), PA.2 (Architecture Analysis), and PA.5 (Existing Test Assessment).
The Project Context Summary (PA.7) is passed to the sub-agent in Step 4 as part of the context.
Plan-specific additions to Step 0:
- Defaults: reference_docs.sources = [], reference_docs.exclude = []
- reference_docs.sources must be an array of strings (fall back to [] on error)
- reference_docs.sources entries must NOT contain .. (security risk)
- reference_docs.sources entries must NOT point to system directories (node_modules/, .git/, build/)
- reference_docs.exclude must be an array of strings (fall back to [] on error)
- Output path: {output_dir}/{feature_name}/; base_dir empty string means project root; input_dir default: docs/features/; output_dir default: planning/

This step only runs when reference_docs.sources is non-empty in the merged configuration.
If reference_docs.sources is empty or not configured, skip directly to Step 2.
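As an illustration, a .code-forge.json fragment exercising these settings might look like the following. The glob patterns and paths are invented examples, not defaults:

```json
{
  "reference_docs": {
    "sources": ["docs/architecture/**/*.md", "docs/conventions.md"],
    "exclude": ["docs/architecture/drafts/**"]
  },
  "input_dir": "docs/features/",
  "output_dir": "planning/"
}
```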
- Resolve config.reference_docs.sources globs against project_root
- Apply config.reference_docs.exclude patterns to filter results
- Always exclude {output_dir}/** to prevent circular references
- If 0 files match: display "Reference docs: 0 files matched for configured patterns. Continuing without reference context." → skip to Step 2
- If more than 30 files match: AskUserQuestion: "Found {N} reference docs. This will spawn {N} parallel sub-agents."
- If the user opts to adjust the sources/exclude config, stop and let user update .code-forge.json

Display the matched file list:
Reference docs: {count} files matched
{path_1}
{path_2}
...
Proceed directly — no confirmation needed (unless > 30 files triggered 1.1 step 6).
Spawn N parallel sub-agents via Agent tool, one per matched file:
- subagent_type: "general-purpose"
- description: "Summarize reference doc: (unknown)"

Each sub-agent prompt:
DOC_PATH: {file_path}
DOC_TYPE: <architecture | api | requirements | conventions | data-model | other>
SUMMARY: <2-3 sentence summary of what this document describes>
KEY_DECISIONS: <bulleted list of important technical decisions, constraints, or patterns>
RELEVANCE_TAGS: <comma-separated keywords for matching against feature docs>
Target summary size: ~300-500 bytes per doc.
Error handling: If a sub-agent fails to summarize a file, log a warning and skip that file:
Warning: Failed to summarize {path} — skipping
Reference docs: {success_count} of {total_count} files summarized successfully
Collect all successful sub-agent results into a reference_summaries list (ordered by file path). Store in memory for use by Steps 4, 7, and 8.
After the input document path is known (after Step 3), remove it from reference_summaries if present — the feature doc is already read directly by Steps 4 and 7. This deduplication happens lazily: the summaries are stored now, deduplication is applied when injecting into sub-agent prompts.
This step only runs when the input is NOT a file path (does NOT start with @).
If the input starts with @, skip directly to Step 3.
Before treating input as a prompt, check if it looks like a file/directory path. If the input matches ANY of these patterns, it is almost certainly a path the user forgot to prefix with @:
- contains / (e.g., ../apcore-cli, docs/features/auth.md)
- starts with . (e.g., ./src, ../other-project)
- ends with .md (e.g., user-auth.md)

Action: Do NOT silently proceed as prompt mode. Instead, use AskUserQuestion:
Your input looks like a file/directory path: "{input}"
Did you mean to use file mode? (paths require an @ prefix)
- If yes: treat the input as prefixed with @ and skip to Step 3

This guard prevents the common mistake of forgetting @, which causes the entire workflow to bypass Directory/File Mode and produce incorrect output.
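The path-like patterns above can be sketched as a small heuristic. This is illustrative only; the actual guard is executed by the skill, not by code:

```python
def looks_like_path(text: str) -> bool:
    """Step 2.0 guard: flag prompt input that is probably a
    file/directory path missing its @ prefix."""
    text = text.strip()
    return (
        "/" in text            # e.g. ../apcore-cli, docs/features/auth.md
        or text.startswith(".")  # e.g. ./src, ../other-project
        or text.endswith(".md")  # e.g. user-auth.md
    )
```

Any input matching the heuristic triggers AskUserQuestion instead of silent prompt mode.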
When a user provides a text prompt instead of a file path, code-forge:plan delegates feature spec creation to spec-forge:feature. This maintains the separation of concerns: spec-forge owns specification, code-forge owns implementation planning.
Convert the prompt text to a kebab-case slug for the feature name (e.g., user-login-feature). Use AskUserQuestion to let the user confirm or provide a custom slug; suggest a reasonable English slug based on the prompt meaning.

Check if {input_dir}/{slug}.md already exists:
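A naive slugging sketch, assuming a simple word-join. The real skill also weighs the prompt's meaning and asks the user to confirm:

```python
import re

def to_slug(prompt: str, max_words: int = 5) -> str:
    """Suggest a kebab-case feature slug from free-form prompt text."""
    words = re.findall(r"[a-z0-9]+", prompt.lower())
    return "-".join(words[:max_words]) or "feature"
```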
Invoke spec-forge:feature to generate the feature spec:
Launch Agent(subagent_type="general-purpose"):
- Verify that docs/features/{slug}.md exists after the agent completes

If spec-forge:feature is not available (skill not installed), fall back to generating a minimal feature document directly:
# {Feature Title}
> Feature spec for code-forge implementation planning.
> Source: auto-generated from prompt
> Created: {date}
## Purpose
{user's original prompt text, verbatim}
## Notes
- Generated from prompt by code-forge (spec-forge:feature not available)
- Consider running `/spec-forge:feature {slug}` for a more detailed spec
Set {input_dir}/{slug}.md as the current input document path (prefixed with @), then continue to Step 3.
User should provide an @ path pointing to a file or directory:
# File mode — plan a single feature
/code-forge:plan @docs/features/user-auth.md
# Directory mode — list features and let user pick
/code-forge:plan @docs/features/
/code-forge:plan @../../aipartnerup/apcore
Note: Use configured path ({input_dir}/). Also accepts spec-forge tech-design files directly: /code-forge:plan @docs/user-auth/tech-design.md
If the @ path resolves to a directory (not a file):
Search these locations in order, using the first that matches:
- <path>/docs/features/*.md
- <path>/features/*.md
- <path>/*.md

Exclude overview.md, README.md, index.md, and any file that is clearly not a feature spec (e.g., changelog, license).

If no .md files are found: display error "No feature specs found in {path}" with the paths tried, then stop. Otherwise, use AskUserQuestion to let user select:
Feature specs found in {path}:
1. acl-system
2. core-executor
3. schema-system
...
N. [Plan all — generate plans for all features sequentially]
Options are the feature names (filename without .md), plus "Plan all" as the last option.

"Plan all" batch mode: When the user selects "Plan all":
- Add every listed feature to a batch_queue
- For each feature in batch_queue, execute Steps 3.2 through 13 sequentially (one complete plan per feature)
- After each feature, display: Completed {n}/{total}: {feature_name}. Next: {next_feature_name}
- When the queue is done, display the batch summary:

Batch planning complete
Features planned: {total}
{feature_1} — {task_count} tasks
{feature_2} — {task_count} tasks
...
Project overview: {output_dir}/overview.md
Next: /code-forge:impl {feature_name}
- For large batches, display: "Planning {N} features sequentially. For very large batches (10+), consider splitting into multiple /code-forge:plan invocations to avoid context exhaustion." Proceed regardless — this is informational only.

Path resolution: Both relative and absolute paths are supported. Relative paths are resolved from the current working directory. External project paths (e.g., @../../other-project) are valid — the feature spec does not need to be inside the current project.
Perform these checks on the provided document:
- If the file is not found, look in {input_dir}/ and suggest corrections (check for typos)
- If the extension is not .md, warn and ask whether to continue as plain text

If no document is provided and Step 2 was not triggered: display usage instructions with examples.
On any error: display the issue, suggest a fix, and stop.
Check whether <output_dir>/<feature_name>/ already exists:
Has state.json → Resume mode: show progress summary (task statuses), ask via AskUserQuestion:
Directory exists but no state.json → Conflict mode: warn about existing files, ask:
- Move existing files to .backup/ then regenerate

Offload to sub-agent to keep the full document content out of the main context.
Spawn an Agent tool call with:
- subagent_type: "general-purpose"
- description: "Analyze feature document"

Sub-agent prompt must include:
If reference_summaries is non-empty (from Step 1), include a ## Reference Context section:
## Reference Context
The following project documents provide architectural context.
Use these to align your analysis with existing project decisions and patterns.
{reference_summaries — all summaries concatenated, separated by blank lines}
Sub-agent must analyze and return:
- Feature name derived from the source filename (kebab-case, strip .md extension). Always use the filename, never the document title. Example: source file security.md → feature name security, even if the document title is "Security Manager".

Main context retains: Only the structured summary returned by the sub-agent (~1-2KB). The full document content stays in the sub-agent's context and is discarded.
Important: Store the returned summary for use in Steps 5 and 7.
This step is mandatory. It is the primary defense against incremental bloat. Before any task is generated, the planner MUST verify what already exists in the project that could be reused or extended, so the resulting plan biases toward "extend existing" instead of "add new".
Skipping this step is the most common way that skill-driven planning produces parallel implementations, duplicate utilities, and bloated codebases over time.
Offload to a sub-agent to keep grep output and file reads out of the main context.
Spawn an Agent tool call with:
- subagent_type: "general-purpose"
- description: "Discover reusable code for {feature_name}"

Sub-agent prompt must include:
Sub-agent must do all of the following and return a structured report:
Component-by-component reuse search. For each entry in "Key Components" from the Step 4 summary:
- Grep/Glob for related names and concepts (e.g., for a planned UserAuthService, search for auth, login, session, credential, User.*Service, etc.)
- Classify each component as REUSE (existing code already does this — do not build it), EXTEND (existing code is close — modify it), or NEW (genuinely no overlap — build new)

Utility / helper survey. Scan the project for existing utility modules (utils/, lib/, helpers/, common/, shared modules) and list utilities relevant to the planned work. Future tasks must prefer these over reimplementing.
Configuration / constants survey. Identify existing configuration files, constants, enums, and environment variables relevant to the feature so the plan extends them rather than introducing parallel knobs.
Test scaffolding survey. Identify existing test fixtures, factories, mocks, and helper modules that new tests should reuse.
Anti-duplication callouts. Explicitly call out any place where the planned feature, as described in the Step 4 summary, would naively duplicate something that already exists. The plan generator (Step 7) must address each callout.
Deletion candidates. While searching, note any existing dead code, stale TODOs, or obsolete helpers in the touched areas. The plan should optionally include a cleanup task.
Sub-agent must return (as response text) this exact structured format:
REUSE_REPORT for {feature_name}
COMPONENT_DECISIONS:
- component: {planned_component_name}
decision: REUSE | EXTEND | NEW
existing: {file:line if reuse/extend, or "none" if new}
rationale: {one sentence}
EXISTING_UTILITIES:
- {file:symbol} — {what it does, why it's relevant}
EXISTING_CONFIG:
- {file:key} — {what it controls}
EXISTING_TEST_SCAFFOLDING:
- {file:symbol} — {fixture/mock/factory description}
ANTI_DUPLICATION_CALLOUTS:
- {planned thing} would duplicate {existing thing at file:line} — {how to avoid}
DELETION_CANDIDATES:
- {file:line} — {dead code / stale TODO description}
NEW_CODE_BUDGET:
- expected_new_files: {N}
- expected_extended_files: {N}
- justification: {one sentence}
Main context retains: Only this report (~2-3KB). Store it as reuse_report for use in Steps 7 and 8.
Hard rule for Steps 7 and 8: The plan and task files generated downstream MUST reference the reuse_report:
- Every component marked REUSE becomes a "use existing X" note in the plan, NOT a build task.
- Every component marked EXTEND becomes a task that names the existing file to modify.
- Every ANTI_DUPLICATION_CALLOUT must be addressed in the plan (either by following the callout or by explicitly justifying why a parallel implementation is necessary).
- Every EXISTING_UTILITY, EXISTING_CONFIG, and EXISTING_TEST_SCAFFOLDING entry must be referenced from at least one task that would otherwise have reinvented it.
- If DELETION_CANDIDATES is non-empty, the plan should include a cleanup task (or fold the deletions into adjacent tasks).
- The NEW_CODE_BUDGET becomes the expected upper bound for the implementation. Step 7 should not generate a plan that wildly exceeds it without explicit justification.

Pass reuse_report into the Step 7 and Step 8 sub-agent prompts as a section titled ## Existing Code to Reuse (MANDATORY) followed by the full report. The sub-agents must be told: "You are forbidden from generating tasks that recreate anything listed in this report. If a task seems to need such a thing, the task must instead reference the existing symbol."
If not clearly specified in the document, use a single AskUserQuestion combining up to 3 questions. Skip any question already answered by the document:
Question 1: Technology Stack Confirmation
Question 2: Testing Strategy
Question 3: Task Granularity
Extract feature name from the source filename (convert to kebab-case, strip .md extension). Always use the filename — never derive the feature name from the document title, as titles may differ from filenames.
Before creating any directory or file, run these assertions. If ANY fails, STOP immediately — do not proceed.
ASSERT: resolved output path does NOT contain "docs/plan"
ASSERT: resolved output path does NOT contain "docs/plans"
ASSERT: resolved output path does NOT contain "docs/planning"
ASSERT: resolved output path equals "{project_root}/{output_dir}/{feature_name}/"
where {output_dir} is the value resolved in Step 0 (default: "planning/")
ASSERT: feature_name does NOT contain numeric prefixes (e.g., "FE-01-", "01-")
ASSERT: feature_name is kebab-case and matches the source document filename
(e.g., source "core-dispatcher.md" → feature_name "core-dispatcher")
On assertion failure: display the violation and the correct path, then stop. Example:
OUTPUT PATH VIOLATION: about to write to "docs/plans/FE-01-core-dispatcher.md"
Expected: "planning/core-dispatcher/"
Fix: use the resolved output_dir from Step 0 configuration
Output directory: {output_dir} defaults to planning/ — NEVER docs/plan/, docs/plans/, docs/planning/, or any other invented path. If you are about to write to any path other than {output_dir}/{feature_name}/, STOP — you are making a mistake. Always use the resolved output_dir from Step 0 configuration.
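The Step 6 assertions can be sketched in Python. This is a sketch only; the numeric-prefix regex and kebab-case checks are assumptions about how the rules would be mechanized:

```python
import re
from pathlib import Path

# Paths that must never receive plan output (Step 6 assertions).
FORBIDDEN_SUBSTRINGS = ("docs/plan", "docs/plans", "docs/planning")

def assert_output_path(project_root: str, output_dir: str, feature_name: str) -> Path:
    """Fail fast before any file is written, mirroring the Step 6 guard."""
    resolved = Path(project_root) / output_dir / feature_name
    posix = resolved.as_posix()
    if any(bad in posix for bad in FORBIDDEN_SUBSTRINGS):
        raise ValueError(f"OUTPUT PATH VIOLATION: about to write to {posix!r}")
    if re.match(r"(?i)^(fe-)?\d+-", feature_name):
        raise ValueError(f"feature_name has a numeric prefix: {feature_name!r}")
    if feature_name != feature_name.lower() or "_" in feature_name or " " in feature_name:
        raise ValueError(f"feature_name is not kebab-case: {feature_name!r}")
    return resolved
```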
Create directory structure and proceed directly — no confirmation needed:
{output_dir}/{feature_name}/
├── overview.md
├── plan.md
├── tasks/
└── state.json
Example with defaults: planning/user-auth/, planning/user-auth/tasks/, etc.
Offload to sub-agent to keep plan generation output out of the main context.
Spawn an Agent tool call with:
- subagent_type: "general-purpose"
- description: "Generate implementation plan"

Sub-agent prompt must include:
- The reuse_report from Step 4.5 — paste it verbatim under a ## Existing Code to Reuse (MANDATORY) section, followed by this instruction: "You are forbidden from generating tasks that recreate anything listed in this report. Components marked REUSE become 'use existing X' notes, not build tasks. Components marked EXTEND become tasks that name the existing file to modify. Every ANTI_DUPLICATION_CALLOUT must be addressed. Every EXISTING_UTILITY / EXISTING_CONFIG / EXISTING_TEST_SCAFFOLDING entry must be referenced from at least one task that would otherwise reinvent it. If DELETION_CANDIDATES is non-empty, include a cleanup task. Stay within the NEW_CODE_BUDGET unless you explicitly justify exceeding it."
- The contents of @../shared/design-first.md under a ## Design Discipline (MANDATORY) section, followed by this instruction: "Generated tasks MUST be shaped by design-first principles. Prefer 'modify existing X' over 'create new Y' wherever the reuse report or your own analysis indicates an existing structure can absorb the change. Tasks that create new files must justify why an existing file cannot host the change. Tasks that introduce new abstractions (base classes, interfaces, plugin systems, factories) must name at least two concrete callers — speculative abstraction is forbidden. Public interfaces stay stable unless the source document explicitly authorizes a break."
- The output file path: {output_dir}/{feature_name}/plan.md
- If reference_summaries is non-empty, include a ## Reference Context section:
## Reference Context
The following project documents provide architectural context.
Ensure the implementation plan is consistent with existing architecture and conventions.
{reference_summaries — all summaries concatenated, separated by blank lines}
Sub-agent must write plan.md with these required sections:
- Architecture/dependency diagram (Mermaid graph TD) + task list with estimated time and dependencies

Task ID naming rules (critical): Task IDs must be descriptive names without numeric prefixes. Use setup, models, api — NOT 01-setup, 02-models, 03-api. Execution order is controlled by overview.md and state.json, not by filename ordering or numeric prefixes.
Sub-agent must return (as response text, separate from the file it writes) a concise task list summary:
TASK_COUNT: <number>
TASKS:
- <task_id>: <task_title> [depends on: <deps or "none">] (~<estimated_time>)
- <task_id>: <task_title> [depends on: <deps or "none">] (~<estimated_time>)
...
EXECUTION_ORDER: <task_id_1>, <task_id_2>, ...
Main context retains: Only the task list summary (~1-2KB). The full plan content is on disk.
Offload to sub-agent to keep task file generation out of the main context.
Spawn an Agent tool call with:
- subagent_type: "general-purpose"
- description: "Generate task breakdown files"

Sub-agent prompt must include:
- The plan path {output_dir}/{feature_name}/plan.md (sub-agent reads it from disk)
- The reuse_report from Step 4.5 — paste it verbatim under a ## Existing Code to Reuse (MANDATORY) section. Instruct: "For every task you generate, the 'Files Involved' section must prefer existing files over new ones whenever the reuse report indicates an existing equivalent. Each task's 'Steps' section must explicitly reference the relevant entries from EXISTING_UTILITIES, EXISTING_CONFIG, or EXISTING_TEST_SCAFFOLDING when applicable. Tasks that touch areas containing DELETION_CANDIDATES should fold the deletions in."
- The output directory {output_dir}/{feature_name}/tasks/
- If reference_summaries is non-empty, include a ## Reference Context section:
## Reference Context
The following project documents provide architectural context.
Ensure task steps follow project conventions and integrate with existing components.
{reference_summaries — all summaries concatenated, separated by blank lines}
Sub-agent must create tasks/{name}.md for each task, following these principles:
Each task file must include:
Naming (critical): Use descriptive filenames: setup.md, models.md, api.md — NO numeric prefixes (01-setup.md, 02-models.md are WRONG). Execution order is defined in overview.md Task Execution Order table and state.json execution_order array, never in filenames.
Sub-agent must return (as response text) the list of generated files:
GENERATED_FILES:
- tasks/<task_id>.md: <task_title>
- tasks/<task_id>.md: <task_title>
...
Main context retains: Only the file list (~0.5KB). All task file content is on disk.
Execution order: This step executes AFTER Steps 7 and 8. Use the task list summary returned by the Step 7 sub-agent and the file list returned by the Step 8 sub-agent to populate task-related sections.
Generate feature overview with these required sections:
- Task Execution Order table: task ID, file link (./tasks/), Description, Status

Create state.json with these required fields:
| Field | Description |
|---|---|
| feature | Feature name (string) |
| created, updated | ISO timestamps |
| status | "pending" initially |
| execution_order | Array of task IDs in execution order |
| progress | { total_tasks, completed, in_progress, pending } |
| tasks | Array of task objects (see below) |
| metadata | { source_doc, created_by: "code-forge", version: "1.0" } |
Each task object in the tasks array:
| Field | Description |
|---|---|
| id | Task identifier (matches filename without .md) |
| file | Relative path: tasks/{id}.md |
| title | Human-readable task title |
| status | "pending" initially |
| started_at, completed_at | ISO timestamps or null |
| assignee | null initially |
| commits | Empty array [] initially |
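For illustration, a freshly generated state.json might look like this. The feature and task names are invented examples; the fields follow the tables above:

```json
{
  "feature": "user-auth",
  "created": "2025-01-15T10:00:00Z",
  "updated": "2025-01-15T10:00:00Z",
  "status": "pending",
  "execution_order": ["setup", "models", "api"],
  "progress": { "total_tasks": 3, "completed": 0, "in_progress": 0, "pending": 3 },
  "tasks": [
    {
      "id": "setup",
      "file": "tasks/setup.md",
      "title": "Project setup and scaffolding",
      "status": "pending",
      "started_at": null,
      "completed_at": null,
      "assignee": null,
      "commits": []
    }
  ],
  "metadata": {
    "source_doc": "docs/features/user-auth.md",
    "created_by": "code-forge",
    "version": "1.0"
  }
}
```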
After initializing state.json, generate or update {output_dir}/overview.md — a bird's-eye view of all features.
@../shared/overview-generation.md
Display: Project overview updated: {output_dir}/overview.md
Mandatory — do NOT proceed to Step 13 until all checks pass. Fix failures before continuing.
- {output_dir}/{feature_name}/ exists
- plan.md exists and non-empty
- tasks/ contains .md files with descriptive names (no numeric prefixes)
- overview.md exists and non-empty
- state.json is valid JSON with fields: feature, status, execution_order, progress, tasks
- state.json matches files in tasks/
- {output_dir}/overview.md (project-level) exists
- Nothing was written to docs/plan/, docs/plans/, docs/planning/ — move if found

Output plan summary:
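The structural checks above can be sketched as follows. This is a sketch; the skill performs the real verification, and the field names follow the state.json tables:

```python
import json
from pathlib import Path

REQUIRED_FIELDS = ("feature", "status", "execution_order", "progress", "tasks")

def verify_plan_output(feature_dir: str) -> list[str]:
    """Return a list of verification failures (empty list = all checks pass)."""
    root = Path(feature_dir)
    failures = []
    for name in ("overview.md", "plan.md", "state.json"):
        f = root / name
        if not f.is_file() or f.stat().st_size == 0:
            failures.append(f"missing or empty: {name}")
    task_files = sorted((root / "tasks").glob("*.md")) if (root / "tasks").is_dir() else []
    if not task_files:
        failures.append("tasks/ has no .md files")
    if any(f.stem[:1].isdigit() for f in task_files):
        failures.append("task files must not have numeric prefixes")
    try:
        state = json.loads((root / "state.json").read_text())
        for field in REQUIRED_FIELDS:
            if field not in state:
                failures.append(f"state.json missing field: {field}")
        # Cross-check task files on disk against the tasks array.
        on_disk = {f.stem for f in task_files}
        in_state = {t["id"] for t in state.get("tasks", [])}
        if on_disk != in_state:
            failures.append("state.json tasks do not match tasks/*.md")
    except (OSError, json.JSONDecodeError):
        failures.append("state.json is not valid JSON")
    return failures
```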
Implementation plan generated
Location: {output_dir}/{feature_name}/
Total Tasks: {count}
Estimated Total Time: {estimate}
Task Overview:
{id} - {title} [{status}]
...
Next steps:
/code-forge:impl {feature_name} Execute tasks (TDD)
/code-forge:status {feature_name} View progress
Optional (before implementation):
/spec-forge:test-cases {feature_name} Generate test cases first
/code-forge:tdd @docs/{feature_name}/test-cases.md Implement from test cases
Optionally synchronize tasks to Claude Code's Task system:
For each task in execution_order, call TaskCreate with:
- subject: "<task_id>: <task_title>"
- description: contents of the task file
- activeForm: "Implementing <task_title>"

Usage examples:
- /code-forge:plan @docs/features/{feature}.md
- /code-forge:plan @docs/{feature}/tech-design.md

Typical team workflow:
- /code-forge:plan @docs/features/{feature}.md
- /code-forge:impl {feature} to execute
- /code-forge:review {feature} to review

Notes:
- Commit .code-forge.json to Git for team visibility
- state.json can be optionally committed or added to .gitignore
- overview.md in {output_dir}/ is auto-generated and shows all features, dependencies, and recommended implementation order
- .code-forge.json contains a _tool section with the plugin URL — new team members can find and install the tool from there

Task statuses: pending, in_progress, completed, blocked, skipped

docs/
└── features/ # Input: feature specs (owned by spec-forge)
└── user-auth.md # Generated by /spec-forge:feature or extracted from tech-design
planning/ # Output: implementation plans (owned by code-forge)
├── overview.md # Project-level overview (auto-generated)
└── {feature}/ # Per-feature directory
├── overview.md # Feature overview + task execution order
├── plan.md # Implementation plan
├── tasks/ # Task breakdown files
└── state.json # Status tracking
This structure is mandatory, not a suggestion. Every file listed above must exist after plan generation completes.

- Naming: feature directories use kebab-case (user-auth). Task files use descriptive names (setup.md). No "claude-" or tool prefixes. Suitable for Git commits.
- Reference docs: configure reference_docs.sources in .code-forge.json to auto-discover project documentation. Each doc is summarized by a parallel sub-agent and injected as context into Steps 4, 7, and 8. Reference context is baked into generated plan.md and task files — downstream skills do not re-read reference docs.
- Output: the full multi-file structure (overview.md + plan.md + tasks/*.md + state.json)

Common mistakes to avoid:
- Writing to docs/plan/, docs/plans/, or docs/planning/ instead of {output_dir}
- Creating tasks inline in plan.md instead of separate tasks/{name}.md files
- Numeric task filename prefixes (01-setup.md instead of setup.md)
- Numeric feature directory prefixes (FE-01-core-dispatcher instead of core-dispatcher)
- Silently treating path-like input without @ as a prompt instead of asking the user (Step 2.0 guard)
- Skipping state.json — downstream skills (impl, status, fix, finish) cannot operate without it
- Skipping the project-level overview.md (Step 11)
- Skipping sub-agent offloading via the Agent tool
- Generating new utilities that ignore the reuse_report instead of extending the existing ones