# Forge
Generates concrete spec files from ideas, codebases, or docs with R-numbered requirements, testable acceptance criteria, complexity detection, and approval gates.
```shell
npx claudepluginhub lucasduys/forge --plugin forge
```

This skill uses the workspace's default tool permissions.
You are running the Forge brainstorming workflow. Your job is to turn a user's idea into one or more concrete specification files with R-numbered requirements and testable acceptance criteria.
The user provides one of:
- An idea described in plain text (the default)
- --from-code flag (analyze the existing codebase to generate specs)
- --from-docs PATH flag (read documents from PATH, extract requirements)

Before asking questions, determine the scope of the user's request.
Analyze the topic (or codebase/documents) and classify complexity:
| Level | Signals | Question Count |
|---|---|---|
| Simple (single feature, few files, clear scope) | Bug fix, small enhancement, single endpoint, UI tweak | 3-5 questions |
| Medium (multi-component, new feature with defined scope) | Multiple files across directories, needs tests, some cross-component work | 8-12 questions |
| Complex (multi-domain, architectural decisions, cross-repo) | New system/subsystem, security-sensitive, unfamiliar tech, multi-repo | Decompose into sub-projects first, then 8-12 questions per sub-project |
Scoring heuristics (each signal adds 1 point):
Score 0-3 = Simple, 4-7 = Medium, 8+ = Complex.
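The thresholds above can be sketched as a tiny helper. This is a hypothetical illustration of the scoring rule, not code shipped with the plugin:

```python
# Hypothetical helper; the thresholds come from the scoring rule above.
def classify_complexity(signal_count: int) -> str:
    """Map the number of detected complexity signals to a level."""
    if signal_count <= 3:
        return "simple"
    if signal_count <= 7:
        return "medium"
    return "complex"

assert classify_complexity(2) == "simple"    # e.g., a small UI tweak
assert classify_complexity(5) == "medium"    # multi-component feature
assert classify_complexity(9) == "complex"   # cross-repo, new subsystem
```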
Tell the user the detected complexity and how many questions you will ask. Example: "This looks like a medium complexity project. I'll ask about 8-10 questions to nail down the spec."
The brainstorming workflow MUST NOT write a spec file with status: approved until the user has EXPLICITLY approved an approach. This is the primary enforcement mechanism that prevents the Forge pipeline from being bypassed.
The approval gate works as follows: the spec file may carry status: approved in its frontmatter only after the user signs off on a specific approach.

What counts as explicit approval:
What does NOT count as approval:
If the user tries to skip brainstorming (e.g., "just do it", "skip the questions"), respond:
The Forge workflow requires a spec with approved requirements before implementation. This prevents wasted work and scope creep. I'll keep the questions brief -- let me ask the most critical ones.
Then ask at least 3 questions before proposing approaches.
Rules — follow these strictly:
Example question format:

How should authentication work?
A) JWT tokens (stateless, good for APIs)
B) Session cookies (simpler, good for web apps)
C) OAuth2 with an external provider (Google, GitHub, etc.)
D) Something else (describe)
Question progression for a typical project:
For projects with UI components, add a design system question:
Does this project have a design system or brand guidelines?
A) Yes, I have a DESIGN.md file (specify path)
B) I want to base it on an existing brand (e.g., Stripe, Linear, Claude)
C) No specific design requirements -- use sensible defaults
D) I'll provide design specs later
If option B, generate a DESIGN.md using the template from skills/design-system/SKILL.md and write it to the project root. Reference the awesome-design-md catalog (github.com/VoltAgent/awesome-design-md) for established brand design systems.
If graphify-out/graph.json exists, load it before proposing approaches.
After gathering requirements, propose 2-3 approaches with clear trade-offs:
## Approach A: [Name]
**Summary:** [1-2 sentences]
**Pros:** [bullet list]
**Cons:** [bullet list]
**Best when:** [scenario]
**Estimated complexity:** [simple/medium/complex]
## Approach B: [Name]
...
## Recommendation
I recommend **Approach [X]** because [reasoning].
Wait for the user to pick an approach (or ask for a hybrid). Do NOT proceed until they explicitly approve.
Once the user approves an approach, write the spec file.
Output location: .forge/specs/spec-{domain}.md
The domain name should be a short, lowercase, hyphenated slug derived from the project topic (e.g., auth, task-api, billing, notification-system).
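One plausible way to derive such a slug, shown as a sketch (the exact normalization rules are not specified by Forge):

```python
import re

def domain_slug(topic: str) -> str:
    """Illustrative slugifier: lowercase, runs of non-alphanumerics
    collapsed to single hyphens, leading/trailing hyphens trimmed."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower())
    return slug.strip("-")

assert domain_slug("Notification System") == "notification-system"
assert domain_slug("Task API") == "task-api"
```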
Output format — use this template exactly:
---
domain: {domain}
status: approved
created: {YYYY-MM-DD}
complexity: {simple|medium|complex}
linked_repos: [{repo names if multi-repo, otherwise empty}]
design: {path to DESIGN.md if UI project, otherwise omit}
---
# {Domain Title} Spec
## Overview
{Brief description of this domain, its purpose, and the chosen approach.}
## Requirements
### R001: {Requirement Name}
{Description of the requirement.}
**Acceptance Criteria:**
- [ ] {Specific, testable criterion 1}
- [ ] {Specific, testable criterion 2}
- [ ] {Specific, testable criterion 3}
### R002: {Next Requirement}
...
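For example, a filled-in frontmatter for a single-repo auth spec might read as follows (all values illustrative):

```yaml
---
domain: auth
status: approved
created: 2025-01-15
complexity: medium
linked_repos: []
---
```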
Rules for writing requirements:
If the project is complex and needs multiple specs, split by domain (e.g., auth, api, frontend, infra).

If .forge/config.json does not exist or needs updating:
- Set repos if multi-repo was discussed
- Set depth based on detected complexity (simple->quick, medium->standard, complex->thorough)

After writing all spec files, tell the user:
Spec written to .forge/specs/spec-{domain}.md with {N} requirements.

Next step: Run /forge plan to decompose these requirements into an ordered task frontier.
If multiple specs were written, list them all.
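As an illustration, a .forge/config.json for a medium-complexity, multi-repo project might look like this (the repos and depth fields follow the text above; the repo names are hypothetical, and any additional fields are not documented here):

```json
{
  "repos": ["task-api", "task-frontend"],
  "depth": "standard"
}
```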