Analyze documentation (or a prompt) and generate an implementation plan with task breakdown, TDD steps, and progress tracking.
Generates structured implementation plans with task breakdowns, TDD steps, and progress tracking from feature documents or prompts.
`npx claudepluginhub tercel/code-forge`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Generate an implementation plan from a feature document or a requirement prompt.
ALL OUTPUT GOES INTO {output_dir}/{feature-name}/ AS SEPARATE FILES — overview.md, plan.md, tasks/*.md, state.json.
| Thought | Reality |
|---|---|
| "I'll put everything in one plan.md for simplicity" | Multi-file structure is how impl/status/review find individual tasks. One file breaks all downstream skills. |
| "docs/plan is close enough" | Output dir is {output_dir} (default: planning/). docs/plan, docs/plans are ALL wrong. |
| "I'll create the tasks inline in plan.md" | Tasks go in tasks/{name}.md as separate files. Step 7 sub-agent creates them. |
| "Numeric prefixes help with ordering" | Execution order is in overview.md and state.json. Files are setup.md, not 01-setup.md. |
| "I can skip state.json" | state.json drives impl, status, fixbug. Without it, no downstream skill works. |
| "The overview files are optional" | Both project-level and feature-level overview.md are mandatory outputs. |
Input (Document or Prompt) → Analysis → Planning → Task Breakdown → Status Tracking
Steps 2, 6, and 7 are offloaded to sub-agents via the Task tool to prevent context window exhaustion on large projects. The main context retains only concise summaries returned by each sub-agent, while full document analysis, file generation, and code implementation happen in isolated sub-agent contexts that are discarded after completion.
Actual execution order: 0 → 0.9 (reference docs, if configured) → 0.8 (prompt mode only) → 1 → 2 (sub-agent) → 3 → 4 → 6 (sub-agent) → 7 (sub-agent) → 5 → 8 → 8.5 → 9
Step 5 (overview.md) executes after Steps 6 and 7 because it references task files generated by those steps.
@../shared/configuration.md
Plan-specific additions to Step 0:
- Defaults: `reference_docs.sources = []`, `reference_docs.exclude = []`
- `reference_docs.sources` must be an array of strings (fall back to `[]` on error)
- `reference_docs.sources` entries must NOT contain `..` (security risk)
- `reference_docs.sources` entries must NOT point to system directories (`node_modules/`, `.git/`, `build/`)
- `reference_docs.exclude` must be an array of strings (fall back to `[]` on error)
- Output path: `{output_dir}/{feature_name}/`; `base_dir` empty string means project root; `input_dir` default: `docs/features/`; `output_dir` default: `planning/`

This step only runs when `reference_docs.sources` is non-empty in the merged configuration.
If reference_docs.sources is empty or not configured, skip directly to Step 0.8.
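Under these rules, a minimal `.code-forge.json` might look like the following. This is a sketch: the field names come from this section, but the glob values are illustrative, not defaults.

```json
{
  "base_dir": "",
  "input_dir": "docs/features/",
  "output_dir": "planning/",
  "reference_docs": {
    "sources": ["docs/architecture/*.md", "docs/api/*.md"],
    "exclude": ["docs/architecture/drafts/*.md"]
  }
}
```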
- Resolve `config.reference_docs.sources` against `project_root`
- Apply `config.reference_docs.exclude` patterns to filter results
- Always exclude `{output_dir}/**` to prevent circular references
- If 0 files matched: display `Reference docs: 0 files matched for configured patterns. Continuing without reference context.` → skip to Step 0.8
- If more than 30 files matched: AskUserQuestion: "Found {N} reference docs. This will spawn {N} parallel sub-agents."
If the user declines, stop and let the user update the `sources`/`exclude` config in `.code-forge.json`. Display the matched file list:
Reference docs: {count} files matched
{path_1}
{path_2}
...
Proceed directly — no confirmation needed (unless > 30 files triggered 0.9.1 step 6).
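The discovery steps above can be sketched as follows. This is a minimal sketch, not the skill's actual implementation; the function name and the pair of glob/fnmatch passes are assumptions.

```python
from fnmatch import fnmatch
from pathlib import Path


def discover_reference_docs(project_root, sources, exclude, output_dir):
    """Resolve source globs against project_root, then drop excluded
    paths and anything under output_dir (circular-reference guard)."""
    root = Path(project_root)
    matched = []
    for pattern in sources:
        matched.extend(root.glob(pattern))  # resolve against project_root
    results = []
    for path in sorted(set(matched)):
        rel = path.relative_to(root).as_posix()
        # Always exclude the output dir to prevent circular references.
        if rel.startswith(output_dir.rstrip("/") + "/"):
            continue
        if any(fnmatch(rel, pat) for pat in exclude):
            continue
        results.append(rel)
    return results
```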
Spawn N parallel sub-agents via Task tool, one per matched file:
- `subagent_type: "general-purpose"`
- `description: "Summarize reference doc: (unknown)"`

Each sub-agent prompt:
DOC_PATH: {file_path}
DOC_TYPE: <architecture | api | requirements | conventions | data-model | other>
SUMMARY: <2-3 sentence summary of what this document describes>
KEY_DECISIONS: <bulleted list of important technical decisions, constraints, or patterns>
RELEVANCE_TAGS: <comma-separated keywords for matching against feature docs>
Target summary size: ~300-500 bytes per doc.
Error handling: If a sub-agent fails to summarize a file, log a warning and skip that file:
Warning: Failed to summarize {path} — skipping
Reference docs: {success_count} of {total_count} files summarized successfully
Collect all successful sub-agent results into a reference_summaries list (ordered by file path). Store in memory for use by Steps 2, 6, and 7.
After the input document path is known (after Step 1), remove it from reference_summaries if present — the feature doc is already read directly by Steps 2 and 6. This deduplication happens lazily: the summaries are stored now, deduplication is applied when injecting into sub-agent prompts.
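The lazy deduplication described above can be sketched like this. The `(path, summary)` pair representation and the function name are assumptions made for illustration; the stored list is filtered only at injection time and never mutated.

```python
def inject_reference_context(reference_summaries, input_doc_path):
    """Build the ## Reference Context section for a sub-agent prompt,
    dropping the feature doc itself (it is read directly by Steps 2 and 6)."""
    kept = [(p, s) for p, s in reference_summaries if p != input_doc_path]
    if not kept:
        return ""
    body = "\n\n".join(s for _, s in kept)  # summaries separated by blank lines
    return (
        "## Reference Context\n"
        "The following project documents provide architectural context.\n\n"
        + body
    )
```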
This step only runs when the input is NOT a file path (does NOT start with @).
If the input starts with @, skip directly to Step 1.
When a user provides a text prompt instead of a file path, code-forge:plan delegates feature spec creation to spec-forge:feature. This maintains the separation of concerns: spec-forge owns specification, code-forge owns implementation planning.
Convert the prompt text to a kebab-case slug for the feature name:
(e.g., `user-login-feature`). Use AskUserQuestion to let the user confirm or provide a custom slug, suggesting a reasonable English slug based on the prompt meaning. Then check whether `{input_dir}/{slug}.md` already exists.
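Slug conversion can be sketched as follows. This is a hypothetical helper for illustration; the skill performs the conversion inline.

```python
import re


def to_kebab_slug(prompt: str) -> str:
    """Convert free-form prompt text to a kebab-case feature slug."""
    slug = prompt.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # runs of non-alphanumerics become one dash
    return slug.strip("-")
```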
Invoke spec-forge:feature to generate the feature spec:
Launch Task(subagent_type="general-purpose"):
Verify that `docs/features/{slug}.md` exists after the sub-agent completes. If spec-forge:feature is not available (skill not installed), fall back to generating a minimal feature document directly:
# {Feature Title}
> Feature spec for code-forge implementation planning.
> Source: auto-generated from prompt
> Created: {date}
## Purpose
{user's original prompt text, verbatim}
## Notes
- Generated from prompt by code-forge (spec-forge:feature not available)
- Consider running `/spec-forge:feature {slug}` for a more detailed spec
Set {input_dir}/{slug}.md as the current input document path (prefixed with @), then continue to Step 1.
User should provide an @ path pointing to a file or directory:
# File mode — plan a single feature
/code-forge:plan @docs/features/user-auth.md
# Directory mode — list features and let user pick
/code-forge:plan @docs/features/
/code-forge:plan @../../aipartnerup/apcore
Note: use the configured input path (`{input_dir}/`). spec-forge tech-design files are also accepted directly: /code-forge:plan @docs/user-auth/tech-design.md
If the @ path resolves to a directory (not a file):
Try these patterns in order:
- `<path>/docs/features/*.md`
- `<path>/features/*.md`
- `<path>/*.md`

If no `.md` files are found: display the error "No feature specs found in {path}" with the paths tried, then stop. Otherwise, use AskUserQuestion to let the user select:
Feature specs found in {path}:
1. acl-system.md
2. core-executor.md
3. schema-system.md
...
(option labels are the filenames without `.md`)

Path resolution: Both relative and absolute paths are supported. Relative paths are resolved from the current working directory. External project paths (e.g., @../../other-project) are valid — the feature spec does not need to be inside the current project.
Perform these checks on the provided document:
- If the file does not exist: list files in `{input_dir}/` and suggest corrections (check for typos)
- If the extension is not `.md`: warn and ask whether to continue as plain text
- If no document is provided and Step 0.8 was not triggered: display usage instructions with examples.
On any error: display the issue, suggest a fix, and stop.
Check whether <output_dir>/<feature_name>/ already exists:
Has state.json → Resume mode: show progress summary (task statuses), ask via AskUserQuestion:
Directory exists but no state.json → Conflict mode: warn about existing files, ask:
(one option: move existing files to `.backup/`, then regenerate)

Offload to sub-agent to keep the full document content out of the main context.
Spawn a Task tool call with:
- `subagent_type: "general-purpose"`
- `description: "Analyze feature document"`

Sub-agent prompt must include:
If `reference_summaries` is non-empty (from Step 0.9), include a `## Reference Context` section:
## Reference Context
The following project documents provide architectural context.
Use these to align your analysis with existing project decisions and patterns.
{reference_summaries — all summaries concatenated, separated by blank lines}
Sub-agent must analyze and return:
Main context retains: Only the structured summary returned by the sub-agent (~1-2KB). The full document content stays in the sub-agent's context and is discarded.
Important: Store the returned summary for use in Steps 3 and 6.
If not clearly specified in the document, use a single AskUserQuestion combining up to 3 questions. Skip any question already answered by the document:
Question 1: Technology Stack Confirmation
Question 2: Testing Strategy
Question 3: Task Granularity
Extract feature name from filename or document title (convert to kebab-case).
Output directory: {output_dir} defaults to planning/ — NEVER docs/plan/, docs/plans/, docs/planning/, or any other invented path. If you are about to write to any path other than {output_dir}/{feature_name}/, STOP — you are making a mistake. Always use the resolved output_dir from Step 0 configuration.
Create directory structure and proceed directly — no confirmation needed:
{output_dir}/{feature_name}/
├── overview.md
├── plan.md
├── tasks/
└── state.json
Example with defaults: planning/user-auth/, planning/user-auth/tasks/, etc.
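Creating the skeleton can be sketched as follows; the helper name is hypothetical.

```python
from pathlib import Path


def create_plan_skeleton(output_dir: str, feature_name: str) -> Path:
    """Create {output_dir}/{feature_name}/ with the mandatory tasks/ subdirectory.
    overview.md, plan.md, and state.json are written by later steps."""
    feature_dir = Path(output_dir) / feature_name
    (feature_dir / "tasks").mkdir(parents=True, exist_ok=True)
    return feature_dir
```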
Offload to sub-agent to keep plan generation output out of the main context.
Spawn a Task tool call with:
- `subagent_type: "general-purpose"`
- `description: "Generate implementation plan"`

Sub-agent prompt must include:
- Output path: `{output_dir}/{feature_name}/plan.md`
- If `reference_summaries` is non-empty, include a `## Reference Context` section:
## Reference Context
The following project documents provide architectural context.
Ensure the implementation plan is consistent with existing architecture and conventions.
{reference_summaries — all summaries concatenated, separated by blank lines}
Sub-agent must write plan.md with these required sections:
- Architecture/dependency diagram (Mermaid `graph TD`) + task list with estimated time and dependencies

Task ID naming rules (critical): Task IDs must be descriptive names without numeric prefixes. Use setup, models, api — NOT 01-setup, 02-models, 03-api. Execution order is controlled by overview.md and state.json, not by filename ordering or numeric prefixes.
Sub-agent must return (as response text, separate from the file it writes) a concise task list summary:
TASK_COUNT: <number>
TASKS:
- <task_id>: <task_title> [depends on: <deps or "none">] (~<estimated_time>)
- <task_id>: <task_title> [depends on: <deps or "none">] (~<estimated_time>)
...
EXECUTION_ORDER: <task_id_1>, <task_id_2>, ...
Main context retains: Only the task list summary (~1-2KB). The full plan content is on disk.
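Parsing the returned summary back into structured form might look like this. This is a sketch against the response format specified above; the function name is an assumption.

```python
def parse_task_summary(text: str) -> dict:
    """Parse a Step 6 sub-agent response (TASK_COUNT / TASKS / EXECUTION_ORDER)."""
    result = {"task_count": 0, "tasks": [], "execution_order": []}
    section = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("TASK_COUNT:"):
            result["task_count"] = int(line.split(":", 1)[1])
        elif line == "TASKS:":
            section = "tasks"
        elif line.startswith("EXECUTION_ORDER:"):
            result["execution_order"] = [
                t.strip() for t in line.split(":", 1)[1].split(",") if t.strip()
            ]
        elif section == "tasks" and line.startswith("- "):
            # "- <task_id>: <title> [depends on: ...] (~<time>)"
            result["tasks"].append(line[2:].split(":", 1)[0].strip())
    return result
```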
Offload to sub-agent to keep task file generation out of the main context.
Spawn a Task tool call with:
- `subagent_type: "general-purpose"`
- `description: "Generate task breakdown files"`

Sub-agent prompt must include:
- Plan location: `{output_dir}/{feature_name}/plan.md` (sub-agent reads it from disk)
- Output directory: `{output_dir}/{feature_name}/tasks/`
- If `reference_summaries` is non-empty, include a `## Reference Context` section:
## Reference Context
The following project documents provide architectural context.
Ensure task steps follow project conventions and integrate with existing components.
{reference_summaries — all summaries concatenated, separated by blank lines}
Sub-agent must create tasks/{name}.md for each task, following these principles:
Each task file must include:
Naming (critical): Use descriptive filenames: setup.md, models.md, api.md — NO numeric prefixes (01-setup.md, 02-models.md are WRONG). Execution order is defined in overview.md Task Execution Order table and state.json execution_order array, never in filenames.
Sub-agent must return (as response text) the list of generated files:
GENERATED_FILES:
- tasks/<task_id>.md: <task_title>
- tasks/<task_id>.md: <task_title>
...
Main context retains: Only the file list (~0.5KB). All task file content is on disk.
Execution order: This step executes AFTER Steps 6 and 7. Use the task list summary returned by the Step 6 sub-agent and the file list returned by the Step 7 sub-agent to populate task-related sections.
Generate feature overview with these required sections:
- Task Execution Order table: task (linked into `./tasks/`), Description, Status

Create state.json with these required fields:
| Field | Description |
|---|---|
| `feature` | Feature name (string) |
| `created`, `updated` | ISO timestamps |
| `status` | `"pending"` initially |
| `execution_order` | Array of task IDs in execution order |
| `progress` | `{ total_tasks, completed, in_progress, pending }` |
| `tasks` | Array of task objects (see below) |
| `metadata` | `{ source_doc, created_by: "code-forge", version: "1.0" }` |
Each task object in the tasks array:
| Field | Description |
|---|---|
| `id` | Task identifier (matches filename without `.md`) |
| `file` | Relative path: `tasks/{id}.md` |
| `title` | Human-readable task title |
| `status` | `"pending"` initially |
| `started_at`, `completed_at` | ISO timestamps or `null` |
| `assignee` | `null` initially |
| `commits` | Empty array `[]` initially |
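An initial state.json following the fields above might look like this. The feature name, timestamps, and task entries are illustrative.

```json
{
  "feature": "user-auth",
  "created": "2025-01-15T09:00:00Z",
  "updated": "2025-01-15T09:00:00Z",
  "status": "pending",
  "execution_order": ["setup", "models", "api"],
  "progress": { "total_tasks": 3, "completed": 0, "in_progress": 0, "pending": 3 },
  "tasks": [
    {
      "id": "setup",
      "file": "tasks/setup.md",
      "title": "Project setup",
      "status": "pending",
      "started_at": null,
      "completed_at": null,
      "assignee": null,
      "commits": []
    }
  ],
  "metadata": {
    "source_doc": "docs/features/user-auth.md",
    "created_by": "code-forge",
    "version": "1.0"
  }
}
```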
After initializing state.json, generate or update {output_dir}/overview.md — a bird's-eye view of all features.
@../shared/overview-generation.md
Display: Project overview updated: {output_dir}/overview.md
Mandatory — do NOT proceed to Step 9 until all checks pass. Fix failures before continuing.
- `{output_dir}/{feature_name}/` exists
- `plan.md` exists and is non-empty
- `tasks/` contains `.md` files with descriptive names (no numeric prefixes)
- `overview.md` exists and is non-empty
- `state.json` is valid JSON with fields: `feature`, `status`, `execution_order`, `progress`, `tasks`
- `state.json` matches the files in `tasks/`
- `{output_dir}/overview.md` (project-level) exists
- No output in `docs/plan/`, `docs/plans/`, `docs/planning/` — move if found

Output plan summary:
Implementation plan generated
Location: {output_dir}/{feature_name}/
Total Tasks: {count}
Estimated Total Time: {estimate}
Task Overview:
{id} - {title} [{status}]
...
Next steps:
/code-forge:impl {feature_name} Execute tasks
/code-forge:status {feature_name} View progress
cat {output_dir}/{feature_name}/plan.md View detailed plan
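The Step 8 validation checks can be sketched as follows. This is a minimal sketch; a real check would also verify file contents and the project-level overview.

```python
import json
from pathlib import Path


def validate_plan_output(output_dir: str, feature_name: str) -> list:
    """Return a list of Step 8 validation failures (empty list means pass)."""
    errors = []
    feature_dir = Path(output_dir) / feature_name
    for name in ("overview.md", "plan.md"):
        f = feature_dir / name
        if not f.is_file() or f.stat().st_size == 0:
            errors.append(f"{name} missing or empty")
    task_files = sorted(p.stem for p in (feature_dir / "tasks").glob("*.md"))
    if any(stem[:1].isdigit() for stem in task_files):
        errors.append("task files must not use numeric prefixes")
    try:
        state = json.loads((feature_dir / "state.json").read_text())
        for field in ("feature", "status", "execution_order", "progress", "tasks"):
            if field not in state:
                errors.append(f"state.json missing field: {field}")
        ids = sorted(t["id"] for t in state.get("tasks", []))
        if ids != task_files:
            errors.append("state.json tasks do not match files in tasks/")
    except (OSError, json.JSONDecodeError):
        errors.append("state.json missing or invalid JSON")
    return errors
```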
Optionally synchronize tasks to Claude Code's Task system:
For each task in `execution_order`, call TaskCreate with:
- `subject: "<task_id>: <task_title>"`
- `description`: contents of the task file
- `activeForm: "Implementing <task_title>"`

Typical invocations:
- `/code-forge:plan @docs/features/{feature}.md`
- `/code-forge:plan @docs/{feature}/tech-design.md`

Workflow: run `/code-forge:plan @docs/features/{feature}.md`, then `/code-forge:impl {feature}` to execute, then `/code-forge:review {feature}` to review.

Team usage:
- Commit `.code-forge.json` to Git for team visibility
- `state.json` can be optionally committed or added to `.gitignore`
- `overview.md` in `{output_dir}/` is auto-generated and shows all features, dependencies, and recommended implementation order
- `.code-forge.json` contains a `_tool` section with the plugin URL — new team members can find and install the tool from there

Task statuses: `pending`, `in_progress`, `completed`, `blocked`, `skipped`

docs/
└── features/ # Input: feature specs (owned by spec-forge)
└── user-auth.md # Generated by /spec-forge:feature or extracted from tech-design
planning/ # Output: implementation plans (owned by code-forge)
├── overview.md # Project-level overview (auto-generated)
└── {feature}/ # Per-feature directory
├── overview.md # Feature overview + task execution order
├── plan.md # Implementation plan
├── tasks/ # Task breakdown files
└── state.json # Status tracking
This structure is mandatory, not a suggestion. Every file listed above must exist after plan generation completes.

Naming: feature directories use kebab-case slugs (e.g., `user-auth`). Task files use descriptive names (`setup.md`). No "claude-" or tool prefixes. Suitable for Git commits.

Reference docs: configure `reference_docs.sources` in `.code-forge.json` to auto-discover project documentation. Each doc is summarized by a parallel sub-agent and injected as context into Steps 2, 6, and 7. Reference context is baked into generated plan.md and task files — downstream skills do not re-read reference docs.

Common mistakes to avoid:
- Writing one file instead of the multi-file structure (`overview.md` + `plan.md` + `tasks/*.md` + `state.json`)
- Writing to `docs/plan/`, `docs/plans/`, or `docs/planning/` instead of `{output_dir}`
- Creating tasks inline in `plan.md` instead of separate `tasks/{name}.md` files
- Using numeric prefixes (`01-setup.md` instead of `setup.md`)
- Skipping `state.json` — downstream skills (impl, status, fixbug, finish) cannot operate without it
- Skipping the project-level `overview.md` (Step 8.5)
- Skipping sub-agent offloading via the Task tool