From arcforge
Converts a structured spec.xml into an executable dag.yaml and an epics/ directory for sprint planning, epic/feature breakdown, and implementation structure.
```
npx claudepluginhub gregoryho/arcforge --plugin arcforge
```

This skill uses the workspace's default tool permissions.
**PLANNER IS A PURE FUNCTION. DAG IS DISPOSABLE.**
No state preservation. No archive. No gate. No reading the design doc. Overwrite dag.yaml every sprint — git history is the only retroactive trace. If you find yourself wanting to add state, an archive file, or a completion check, stop and surface the underlying need to the user instead.
REQUIRED BACKGROUND: Read ${ARCFORGE_ROOT}/scripts/lib/sdd-schemas/spec.md before building any dag.yaml — you need to know the <delta> element structure (multi-delta accumulation, four child types with epic semantics) to correctly extract sprint scope from the current spec_version's delta.
Convert a spec into an executable DAG with epic/feature breakdown. The DAG is a derived view, rebuilt from scratch each sprint, never archived:
(spec + delta) → (dag.yaml + epics/)
The DAG is disposable per sprint — historical traceability lives in the spec's accumulated <delta> elements and in docs/plans/<spec-id>/<iteration>/design.md folders, not in archived DAGs.
R2 Unidirectional: Planner MUST NOT write to specs/<spec-id>/spec.xml or specs/<spec-id>/details/. Its only output paths are specs/<spec-id>/dag.yaml and specs/<spec-id>/epics/.
Three-Layer Rule: Planner MUST NOT read the design doc. It works from the spec only. The spec's <delta> metadata provides planning scope, making design doc access unnecessary (three-layer model: design doc → spec → DAG).
No gate here. The DAG completion gate that prevents iterating on an incomplete sprint lives in arc-refining, not here. By the time the planner runs, the refiner has already certified the prior sprint is complete (or this is v1). Planner trusts that and overwrites.
If no spec exists yet, run /arc-refining first. If the user has not provided a spec-id, scan specs/ to present available targets and ask the user to choose.
Once you have the spec-id, all inputs come from specs/<spec-id>/spec.xml and the specs/<spec-id>/details/ directory.
Validate the spec programmatically using sdd-utils, and extract the current sprint's scope from the latest <delta>:
```shell
node -e "
const fs = require('fs');
const { parseSpecHeader, validateSpecHeader } = require('${ARCFORGE_ROOT}/scripts/lib/sdd-utils');
const xml = fs.readFileSync('specs/<spec-id>/spec.xml', 'utf-8');
const parsed = parseSpecHeader(xml);
const result = validateSpecHeader(parsed);
console.log(JSON.stringify(result, null, 2));
if (parsed && parsed.latest_delta) {
  const d = parsed.latest_delta;
  console.log('Sprint version:', d.version, 'iteration:', d.iteration);
  console.log('Added (implement epics):', d.added.map(x => x.ref));
  console.log('Modified (update epics):', d.modified.map(x => x.ref));
  console.log('Removed (teardown epics):', d.removed.map(x => x.ref));
  console.log('Renamed (mechanical refactor epics):', d.renamed.map(x => x.ref_old + '→' + x.ref_new));
} else if (parsed) {
  console.log('No delta — v1 spec. Plan all requirements in detail files.');
}
"
```
Interpret the result:

- valid is false and any issue has level: 'ERROR' — BLOCK. Remediation: "Run refiner to produce a spec first." Do not proceed.
- valid is false with only WARNINGs (e.g., broken design_path) — proceed but surface the warnings.
- valid is true — proceed.

The scope-extraction snippet uses parsed.latest_delta (the highest-version delta — equivalent to the last child of <overview>). Earlier <delta> elements are the historical record of prior sprints; the planner ignores them.
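The three outcomes can be sketched as a small guard. This is a sketch, assuming the { valid, issues: [{ level, message }] } result shape printed by the validation snippet; gate is a hypothetical helper name, not part of sdd-utils:

```javascript
// Hypothetical guard over validateSpecHeader's result.
// Assumes issues carry { level: 'ERROR' | 'WARNING', message }.
function gate(result) {
  const issues = result.issues || [];
  if (!result.valid && issues.some((i) => i.level === 'ERROR')) {
    // BLOCK: do not proceed to planning.
    throw new Error('BLOCK: Run refiner to produce a spec first.');
  }
  for (const w of issues.filter((i) => i.level === 'WARNING')) {
    console.warn('WARNING:', w.message); // surface, but proceed
  }
  return true; // proceed
}
```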
The DAG is rebuilt from scratch each sprint. Scope depends on whether a <delta> element exists in spec.xml:
**No <delta> (v1 spec, only <overview>):** Plan all requirements from all detail files in specs/<spec-id>/details/. Every <requirement> becomes a feature.

**One or more <delta> elements:** Read parsed.latest_delta — the delta whose version equals the current spec_version. Every child of that delta generates exactly one epic:
| Delta child | Epic semantics | source_requirement |
|---|---|---|
| <added ref="X"> | Implement new requirement X | X (new in current detail files) |
| <modified ref="X"> | Update existing implementation of X to match changed behavior | X (still in current detail files, definition changed) |
| <removed ref="X"><reason>...</reason></removed> | Teardown epic. Implementer LLM greps the codebase for X and removes tied code. The <reason> and optional <migration> from the delta inform teardown approach (security removal → strict; deprecation with consumers → leave shim). X no longer exists in current detail files; the epic references it as a removed id. | X (removed — flag the epic as a teardown epic so implementer skips spec lookup and works from delta context) |
| <renamed ref_old="X" ref_new="Y"> | Mechanical refactor epic. Grep + replace refs from X to Y across the codebase. Body unchanged — semantic changes are forbidden in <renamed>. | Y (the new id; Y exists in current detail files) |
A <delta> containing only <removed> children — a deprecation sprint, compliance teardown, or legacy cleanup — is a legitimate sprint. The planner does NOT inspect the shape of a delta (no "must contain at least one <added>" check). It enforces per-child correctness only. Emit teardown epics and proceed.
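The table above amounts to a simple transformation. A sketch, assuming the { added, modified, removed, renamed } arrays printed by the scope-extraction snippet; the field names on the output objects are illustrative, not the planner's actual epic schema:

```javascript
// One epic per delta child, per the table above (epic fields illustrative).
function epicsFromDelta(delta) {
  return [
    ...delta.added.map((x) => ({ kind: 'implement', source_requirement: x.ref })),
    ...delta.modified.map((x) => ({ kind: 'update', source_requirement: x.ref })),
    // Teardown epics are flagged so the implementer skips spec lookup.
    ...delta.removed.map((x) => ({ kind: 'teardown', source_requirement: x.ref, teardown: true })),
    // Renamed: source_requirement is the NEW id, which exists in detail files.
    ...delta.renamed.map((x) => ({ kind: 'rename', source_requirement: x.ref_new, rename_from: x.ref_old })),
  ];
}
```

A removed-only delta simply yields a list of teardown epics, which is exactly the deprecation-sprint case described above.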
| Spec Level | Planner Level | Ratio |
|---|---|---|
| <detail> | Epic | 1:1 (large detail may split into multiple epics) |
| <requirement> | Feature | 1:1 strict |
| <dependency ref> | depends_on | Auto-derive |
Each <requirement> maps to exactly one feature. The feature's source_requirement field MUST reference the spec requirement ID (or, for <removed> epics, the removed-id from the delta).
Build the complete dag.yaml and all epics/ in memory before writing any file to disk. Build → validate → write only if valid.
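The flow can be sketched as follows; plan, validate, and write here are illustrative stand-ins, not the planner's actual API:

```javascript
// Build → validate → write only if valid: nothing touches disk
// unless validation passes with zero ERRORs.
function plan(dag, validate, write) {
  const issues = validate(dag);
  if (issues.some((i) => i.level === 'ERROR')) {
    return { written: false, issues }; // report findings, write no files
  }
  write(dag); // dag.yaml + epics/ reach disk only here
  return { written: true, issues };
}
```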
```
specs/<spec-id>/
├── dag.yaml                 # Epic/Feature DAG
└── epics/
    ├── epic-auth/
    │   ├── epic.md          # Epic overview: title, description, feature list
    │   └── features/
    │       ├── auth-login.md
    │       └── auth-register.md
    └── epic-api/
        └── ...
```
```markdown
# Feature: auth-login

## Source
- Requirement: FR-AUTH-001
- Detail: authentication.xml

## Dependencies
- auth-schema (must complete first)

## Acceptance Criteria
- [ ] POST /login accepts {email, password}
- [ ] Returns 200 + JWT on valid credentials
- [ ] Returns 401 on invalid credentials
```
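Generating this template is mechanical. A sketch; renderFeature and its input field names are illustrative, not part of planner.js:

```javascript
// Renders one feature file in the shape shown above.
function renderFeature(f) {
  return [
    `# Feature: ${f.id}`,
    '',
    '## Source',
    `- Requirement: ${f.source_requirement}`,
    `- Detail: ${f.detail}`,
    '',
    '## Dependencies',
    ...(f.depends_on || []).map((d) => `- ${d} (must complete first)`),
    '',
    '## Acceptance Criteria',
    ...f.acceptance.map((a) => `- [ ] ${a}`),
  ].join('\n');
}
```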
If specs/<spec-id>/dag.yaml already exists, planner MUST overwrite it. Planner MUST NOT write any archive sibling file (no date-suffixed copy, no .bak, no archive/ subdirectory) and MUST NOT move the previous dag.yaml to a backup location with mv. Previous epic statuses MUST NOT carry over — every epic in the new DAG starts in "pending". The git history of dag.yaml is the only retroactive trace of prior DAGs; arcforge does not treat git as part of its contract but does not prevent inspection.
Set SKILL_ROOT from skill loader header (# SKILL_ROOT: ...):
```shell
: "${SKILL_ROOT:=${ARCFORGE_ROOT:-}/skills/arc-planning}"
if [ ! -d "$SKILL_ROOT" ]; then
  echo "ERROR: SKILL_ROOT=$SKILL_ROOT does not exist. Set ARCFORGE_ROOT or SKILL_ROOT manually." >&2
  exit 1
fi
```
To view the full schema and example, run:
```shell
# View schema with field descriptions
node "${SKILL_ROOT}/scripts/planner.js" schema

# View complete example
node "${SKILL_ROOT}/scripts/planner.js" schema --example

# View as JSON (for programmatic use)
node "${SKILL_ROOT}/scripts/planner.js" schema --json
```
Example dag.yaml:
```yaml
epics:
  - id: "epic-auth"
    name: "Authentication System"
    status: "pending"
    spec_path: "specs/<spec-id>/epics/epic-auth/epic.md"
    worktree: null
    depends_on: []
    features:
      - id: "auth-login"
        name: "User Login"
        status: "pending"
        source_requirement: "FR-AUTH-001"
        depends_on: []
      - id: "auth-logout"
        name: "User Logout"
        status: "pending"
        source_requirement: "FR-AUTH-002"
        depends_on: ["auth-login"]
```
All epics start in "pending" status. Previous statuses MUST NOT carry over — the DAG is always built fresh.
Before writing to disk, validate the in-memory DAG:
- Every <detail> covered by the sprint scope maps to ≥1 epic
- Every feature carries id, status, and source_requirement
- depends_on references point to existing epic/feature IDs within the DAG
- source_requirement values either correspond to real requirement IDs in specs/<spec-id>/details/ (added/modified/renamed cases) or reference an id from the delta's <removed> (teardown case)

If validation finds ERRORs, report all findings with remediation and do not write any files.
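A minimal sketch of the ID and required-field checks over the in-memory DAG shape shown in the dag.yaml example (the detail-coverage check is omitted since it needs spec access; validateDag and the issue shape are illustrative):

```javascript
// Checks depends_on references and required feature fields.
function validateDag(dag) {
  const issues = [];
  const ids = new Set();
  // First pass: collect every epic and feature id in the DAG.
  for (const e of dag.epics) {
    ids.add(e.id);
    for (const f of e.features || []) ids.add(f.id);
  }
  // Second pass: verify fields and dependency references.
  for (const e of dag.epics) {
    for (const dep of e.depends_on || []) {
      if (!ids.has(dep)) issues.push({ level: 'ERROR', message: `${e.id}: unknown dependency ${dep}` });
    }
    for (const f of e.features || []) {
      for (const field of ['id', 'status', 'source_requirement']) {
        if (!f[field]) issues.push({ level: 'ERROR', message: `feature missing ${field}` });
      }
      for (const dep of f.depends_on || []) {
        if (!ids.has(dep)) issues.push({ level: 'ERROR', message: `${f.id}: unknown dependency ${dep}` });
      }
    }
  }
  return issues;
}
```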
A planning round is done when all epics in specs/<spec-id>/dag.yaml are in "completed" status. This means the current sprint is fully implemented. The next refiner run will see all epics completed and unblock the next iteration. The next planner run will overwrite this DAG without preserving any prior state.
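Over the DAG shape shown earlier, the done condition is a one-liner (sprintDone is an illustrative name, not a planner API):

```javascript
// True once every epic in the current DAG has been completed.
function sprintDone(dag) {
  return dag.epics.every((e) => e.status === 'completed');
}
```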
After writing files:
```shell
git add specs/<spec-id>/dag.yaml specs/<spec-id>/epics/
git commit -m "docs: plan epics and features for <spec-id>"
```
Circular dependency = STOP, ask user. Cycles must be resolved by the user, not guessed.
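Detecting (not resolving) a cycle is a standard DFS over depends_on edges. A sketch; hasCycle is an illustrative helper, and the planner's job on a positive result is only to stop and ask:

```javascript
// Returns true if any depends_on chain among epics loops back on itself.
function hasCycle(epics) {
  const deps = new Map(epics.map((e) => [e.id, e.depends_on || []]));
  const state = new Map(); // 1 = on current DFS stack, 2 = fully explored
  const visit = (id) => {
    if (state.get(id) === 1) return true; // back edge → cycle
    if (state.get(id) === 2) return false;
    state.set(id, 1);
    for (const d of deps.get(d => d) || deps.get(id) || []) { /* placeholder */ }
    for (const d of deps.get(id) || []) if (visit(d)) return true;
    state.set(id, 2);
    return false;
  };
  return [...deps.keys()].some(visit);
}
```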
Hand off to /arc-coordinating (multi-epic projects requiring worktree isolation) or /arc-implementing (single-epic or straightforward implementation).
✅ Planner complete
- Spec: <spec-id>
- Sprint: <N> (added: N, modified: N, removed: N, renamed: N) | all requirements (v1)
- Output: specs/<spec-id>/dag.yaml + epics/ (committed)
- Next: /arc-coordinating or /arc-implementing

⚠️ Planner blocked
- Spec: <spec-id>

Note: planner does not block on incomplete prior sprints. That gate lives in arc-refining (per fr-rf-012). If you find yourself wanting to add a completion gate here, instead fix the refiner — it should never have allowed iteration to v(N+1) while v(N)'s sprint was still running.
Cycles must be resolved by user, not guessed. Planner reads spec only. No archive. No gate.