Execute the plan task by task with Forge. Sage assists with tests when TDD is enabled.
From rpi-kit. Install: `npx claudepluginhub dmend3z/rpi-kit --plugin rpi-kit`. Usage: `/rpi:implement <feature-name> [--resume] [--force]`
Execute PLAN.md task by task. Forge implements each task with strict CONTEXT_READ discipline. If TDD is enabled, Sage writes failing tests before Forge implements.
Read `.rpi.yaml` for config. Apply defaults if missing:

- `folder: rpi/features`
- `context_file: rpi/context.md`
- `tdd: false`
- `commit_style: conventional`

Parse `$ARGUMENTS` to extract `{slug}` and optional flags:

- `--resume`: continue from the last completed task (default behavior when IMPLEMENT.md exists)
- `--force`: restart implementation from scratch even if IMPLEMENT.md exists

Verify `rpi/features/{slug}/plan/PLAN.md` exists. If not, output:
PLAN.md not found for '{slug}'. Run /rpi:plan {slug} first.
Stop.

Then read the inputs:

- Read `rpi/features/{slug}/plan/PLAN.md` — store as $PLAN.
- Read `rpi/features/{slug}/plan/eng.md` if it exists — store as $ENG.
- Read `rpi/features/{slug}/DESIGN.md` if it exists — store as $DESIGN.
- Read `rpi/context.md` (project context) if it exists — store as $CONTEXT.

Parse $PLAN to extract the ordered task list. Each task should have:
- `task_id`: task number/identifier
- `description`: what to implement
- `files`: target files to create or modify
- `deps`: dependencies on other tasks (must be completed first)

If `--force` was NOT passed:

- Read `rpi/features/{slug}/implement/IMPLEMENT.md` if it exists.
- Tasks marked `- [x]` are done; tasks marked `- [ ]` are pending.
- Resume from the first pending task and output:

Resuming '{slug}' from task {next_task_id}. ({completed}/{total} tasks done)
If `--force` was passed: skip the resume check and proceed to the next step, recreating IMPLEMENT.md from scratch.
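As an illustration only (not part of the command spec), the resume check could be sketched in Python, assuming IMPLEMENT.md uses the `- [ ] Task N: …` checkbox lines from the template in this document; `find_resume_point` is a hypothetical helper name:

```python
import re

# Matches "- [x] Task 3: ..." / "- [ ] Task 3: ..." lines in IMPLEMENT.md
TASK_RE = re.compile(r"^- \[(x| )\] Task (\d+):", re.MULTILINE)

def find_resume_point(implement_md: str):
    """Return (next_task_id, completed, total) from IMPLEMENT.md checkboxes.

    next_task_id is None when every task is already checked off.
    """
    tasks = TASK_RE.findall(implement_md)
    completed = sum(1 for mark, _ in tasks if mark == "x")
    pending = [int(tid) for mark, tid in tasks if mark == " "]
    next_task_id = min(pending) if pending else None
    return next_task_id, completed, len(tasks)
```

The resulting tuple feeds the "Resuming '{slug}' from task {next_task_id}" message directly.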
Create `rpi/features/{slug}/implement/` if needed, then create `rpi/features/{slug}/implement/IMPLEMENT.md` with all tasks unchecked:

# Implementation: {Feature Title}
Started: {YYYY-MM-DD}
Plan: rpi/features/{slug}/plan/PLAN.md
## Tasks
- [ ] Task {1}: {description}
- [ ] Task {2}: {description}
- ...
## Execution Log
For each task in PLAN.md order, respecting dependency ordering (a task's deps must all be [x] before it runs):
If TDD is enabled (`tdd: true` in config), launch the Sage agent with this prompt:
You are Sage. Write failing tests for task {task_id} of feature: {slug}
## Task
{task description from PLAN.md}
## Target Files
{files listed for this task}
## Engineering Spec
{$ENG}
## Design Context
{$DESIGN}
## Project Context
{$CONTEXT}
Your task:
1. Read existing test files and test patterns in the project
2. Write tests that verify the expected behavior for this task
3. Tests MUST fail right now (the implementation doesn't exist yet)
4. Cover: happy path, error path, at least one edge case
5. Run the tests and confirm they fail
6. Output: test file path, test code, and the failing test output
After writing tests, append your activity to rpi/features/{slug}/ACTIVITY.md:
### {current_date} — Sage (Implement — TDD for Task {task_id})
- **Action:** Wrote failing tests for task {task_id}
- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
- **Tests written:** {count}
- **Edge cases covered:** {count}
- **Quality:** {your quality gate result}
Wait for Sage to complete. Store the test output as $SAGE_TESTS. Verify the tests actually fail — if they pass, something is wrong (the behavior may already exist). Inform the user and ask how to proceed.
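Mechanically, that red-state check can be sketched as follows; the `pytest` command is an assumption about the project's test runner, not something this spec mandates:

```python
import subprocess

def tests_are_failing(test_cmd=("pytest", "-q")) -> bool:
    """True when the suite currently fails, i.e. the expected 'red' state."""
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    return result.returncode != 0
```

If this returns False immediately after Sage writes the tests, the behavior may already exist, which is exactly the situation the step above says to surface to the user.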
Launch Forge agent with this prompt:
You are Forge. Implement task {task_id} for feature: {slug}
## Task
{task description from PLAN.md}
## Target Files
{files listed for this task}
## Dependencies Completed
{list of completed task IDs and their descriptions}
## Engineering Spec
{$ENG}
## Design Context
{$DESIGN}
## Project Context
{$CONTEXT}
## Tests to Pass
{$SAGE_TESTS if TDD enabled, otherwise "No TDD tests — follow the plan."}
CRITICAL RULES:
1. CONTEXT_READ: You MUST read ALL target files before writing ANY code
2. Match existing patterns — naming, error handling, imports, style
3. Only touch files listed in the task unless absolutely necessary
4. If TDD: make the failing tests pass
5. Commit your changes with a conventional commit message
6. Report: DONE | BLOCKED | DEVIATED
After completing the task, append your activity to rpi/features/{slug}/ACTIVITY.md:
### {current_date} — Forge (Implement — Task {task_id})
- **Action:** Implemented task {task_id} for {slug}
- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
- **Files changed:** {list}
- **Status:** {DONE|BLOCKED|DEVIATED}
- **Quality:** {your quality gate result}
Forge will respond with one of three statuses:
If DONE: change `- [ ] Task {id}` to `- [x] Task {id}` in IMPLEMENT.md and append to the Execution Log:
### Task {id}: {description}
- Status: DONE
- Commit: {hash}
- Files: {list of files changed}
If BLOCKED: leave the task unchecked and append to the Execution Log:

### Task {id}: {description}
- Status: BLOCKED
- Reason: {blocker description from Forge}
Then stop and output:

Implementation blocked at task {id}: {description}
Blocker: {reason}
Options:
- Fix the blocker and run: /rpi:implement {slug} --resume
- Skip this task and continue: /rpi:implement {slug} (after manually marking task as skipped)
- Re-plan: /rpi:plan {slug} --force
If DEVIATED: mark the task `- [x]`, note the deviation, and append to the Execution Log:

### Task {id}: {description}
- Status: DONE (with deviation)
- Commit: {hash}
- Deviation: {severity} — {description}
- Files: {list of files changed}
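Taken together, the per-task loop and its three statuses could be sketched like this (a simplified illustration, not the spec itself; `run_task` stands in for the Sage/Forge dispatch and is assumed to mark the task done as a side effect):

```python
def run_tasks(tasks, is_done, run_task):
    """Drive the per-task loop: run each task whose deps are all [x];
    stop the whole run as soon as a task reports BLOCKED."""
    while True:
        ready = [t for t in tasks
                 if not is_done(t["task_id"])
                 and all(is_done(d) for d in t["deps"])]
        if not ready:
            # Either every task is [x], or the rest is stuck on unmet deps.
            done = all(is_done(t["task_id"]) for t in tasks)
            return "DONE" if done else "BLOCKED"
        status = run_task(ready[0])  # Sage (if TDD) then Forge for this task
        if status == "BLOCKED":
            return "BLOCKED"
        # DONE or DEVIATED: the task was marked [x]; continue the loop
```

Note that `ready[0]` keeps PLAN.md order among runnable tasks, matching the "respecting dependency ordering" rule above.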
After all tasks are completed, output:
Implementation complete: {slug}
Tasks: {completed}/{total}
Commits:
- {hash1}: {task 1 description}
- {hash2}: {task 2 description}
- ...
{If any deviations: list them here}
Next: /rpi {slug}
Or explicitly: /rpi:simplify {slug}
Update IMPLEMENT.md with a final section:
## Summary
- Total tasks: {N}
- Completed: {N}
- Blocked: {N}
- Deviations: {N} ({list severities})
- Completed: {YYYY-MM-DD}
Finally, harvest decisions:

- Read `rpi/features/{slug}/ACTIVITY.md`.
- Extract `<decision>` tags from entries belonging to the Implement phase (the Sage and Forge entries from this run).
- Read `rpi/features/{slug}/DECISIONS.md` if it exists (to get the last decision number for sequential numbering).
- Append to `rpi/features/{slug}/DECISIONS.md`:

## Implement Phase
_Generated: {current_date}_
| # | Type | Decision | Alternatives | Rationale | Impact |
|---|------|----------|-------------|-----------|--------|
| {N} | {type} | {summary} | {alternatives} | {rationale} | {impact} |
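For illustration, extracting `<decision>` tags and numbering them sequentially could look like this; the flat `<decision>…</decision>` body format is an assumption, since this spec does not define the tag's internal structure:

```python
import re

# Assumes decisions appear as flat <decision>...</decision> spans in ACTIVITY.md
DECISION_RE = re.compile(r"<decision>(.*?)</decision>", re.DOTALL)

def extract_decisions(activity_md: str, start_number: int = 1):
    """Pair each decision body with a sequential number, continuing from
    the last number already present in DECISIONS.md (start_number)."""
    bodies = [m.strip() for m in DECISION_RE.findall(activity_md)]
    return list(enumerate(bodies, start=start_number))
```

Each numbered pair then becomes one row of the table above, with the remaining columns filled from the decision's content.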