Interview the developer, generate specs with Mestre/Clara/Pixel, then run an adversarial review with Nexus.
From rpi-kit. Install:

npx claudepluginhub dmend3z/rpi-kit --plugin rpi-kit

Usage: /rpi:plan <feature-name> [--force]

Restates requirements, assesses risks and dependencies, generates a phased step-by-step implementation plan with complexity estimates, and waits for user confirmation before coding.
Nexus interviews the developer, then Mestre (architecture), Clara (product), and Pixel (UX, conditional) generate specs informed by the interview. Nexus performs adversarial review, surfacing contradictions for developer resolution.
Read .rpi.yaml for config. Apply defaults if missing:

- folder: rpi/features
- specs_dir: rpi/specs
- context_file: rpi/context.md
- ux_agent: auto

Parse $ARGUMENTS to extract {slug} and the optional --force flag.

Check that rpi/features/{slug}/research/RESEARCH.md exists. If not:
RESEARCH.md not found for '{slug}'. Run /rpi:research {slug} first.
Stop.

Otherwise, read rpi/features/{slug}/research/RESEARCH.md and check its ## Verdict section. If the verdict is NO-GO and --force was NOT passed:
Research verdict is NO-GO for '{slug}'.
Review RESEARCH.md for details and alternatives.
To override: /rpi:plan {slug} --force
Stop.

If --force was passed: proceed despite the NO-GO verdict.

Check whether rpi/features/{slug}/plan/PLAN.md already exists. If it does and --force was NOT passed: ask the user whether to overwrite it.

If --force was passed or the user confirms: proceed (will overwrite).

Gather context:

- Read rpi/features/{slug}/REQUEST.md — store as $REQUEST.
- Read rpi/features/{slug}/research/RESEARCH.md — store as $RESEARCH.
- Read rpi/features/{slug}/DESIGN.md if it exists — store as $DESIGN.
- Read rpi/context.md (project context) if it exists — store as $CONTEXT.
- Scan rpi/specs/ for specs relevant to the feature — store as $RELEVANT_SPECS.

Check the project root for frontend framework config files:
- next.config.* → Next.js
- vite.config.* → Vite (React/Vue/Svelte)
- angular.json → Angular
- svelte.config.* → Svelte/SvelteKit
- nuxt.config.* → Nuxt
- package.json containing react, vue, angular, or svelte in dependencies

Set $HAS_FRONTEND to true if any of these are detected.
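A rough shell sketch of this detection (file patterns as listed above; note the `package.json` grep is a simplification that can also match devDependencies):

```shell
#!/bin/sh
# Sketch: detect a frontend framework in a project root.
# Prints "true" or "false"; exit status mirrors the result.
detect_frontend() {
  root="${1:-.}"
  # Any framework config file counts as a hit.
  # (Unmatched globs stay literal in POSIX sh, so -e simply fails.)
  for f in "$root"/next.config.* "$root"/vite.config.* \
           "$root"/angular.json "$root"/svelte.config.* \
           "$root"/nuxt.config.*; do
    [ -e "$f" ] && { echo true; return 0; }
  done
  # Fall back to scanning package.json for known framework names.
  # Simplification: matches the name anywhere, including devDependencies.
  if [ -f "$root/package.json" ] && \
     grep -Eq '"(react|vue|angular|svelte)"' "$root/package.json"; then
    echo true; return 0
  fi
  echo false; return 1
}
```

Usage would be along the lines of `HAS_FRONTEND=$(detect_frontend .)`.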
Read ux_agent from .rpi.yaml:
- always: set $RUN_PIXEL to true regardless of frontend detection.
- never: set $RUN_PIXEL to false regardless.
- auto (default): set $RUN_PIXEL to $HAS_FRONTEND.

Analyze $REQUEST and $RESEARCH to determine interview depth.
| Complexity | Files affected | Layers | Interview depth |
|---|---|---|---|
| S | 1-3 | single | 3-4 questions |
| M | 4-8 | 1-2 | 4-5 questions |
| L | 9-15 | multiple | 5-6 questions |
| XL | 16+ | cross-cutting | 6-8 questions |
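The table's thresholds can be read mechanically; a hypothetical helper that maps an estimated file count to a complexity bucket and interview-depth range (the "Layers" column is ignored here for simplicity):

```shell
#!/bin/sh
# Sketch: map estimated files affected to complexity + interview depth,
# using only the "Files affected" column of the table above.
complexity_for() {
  files=$1
  if [ "$files" -le 3 ]; then echo "S 3-4"
  elif [ "$files" -le 8 ]; then echo "M 4-5"
  elif [ "$files" -le 15 ]; then echo "L 5-6"
  else echo "XL 6-8"
  fi
}
```

For example, `set -- $(complexity_for 10)` yields `$1` as $COMPLEXITY and `$2` as the $INTERVIEW_DEPTH range.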
Set $COMPLEXITY and $INTERVIEW_DEPTH, then display:

Complexity: {$COMPLEXITY} — Interview depth: {$INTERVIEW_DEPTH} questions
Launch Nexus agent to interview the developer before spec generation:
You are Nexus. You are interviewing the developer about feature: {slug}
before the planning agents (Mestre, Clara, Pixel) generate their specs.
Your goal: surface decisions, constraints, and preferences that will
shape the plan. You are a FACILITATOR — you don't make decisions,
you help the developer make informed ones.
## Context
### REQUEST.md
{$REQUEST}
### RESEARCH.md
{$RESEARCH}
### DESIGN.md
{$DESIGN}
### Project Context
{$CONTEXT}
### Complexity Assessment
Complexity: {$COMPLEXITY}
Interview depth: {$INTERVIEW_DEPTH} questions
## Interview Protocol
### Phase 1: Analyze Context (internal, no output)
1. Read REQUEST.md and identify:
- Ambiguous requirements (multiple valid interpretations)
- Unstated assumptions
- Missing technical decisions
2. Read RESEARCH.md and identify:
- Open questions flagged by Atlas/Scout
- Risks without clear mitigations
- Alternative approaches not yet chosen
- Contradictions between research findings
3. Prioritize: rank discovered gaps by impact on plan quality
4. Select top {$INTERVIEW_DEPTH} questions across categories
### Phase 2: Interview (interactive)
Ask questions ONE AT A TIME using AskUserQuestion tool.
Rules:
- Each question MUST reference specific content from REQUEST or RESEARCH
- Provide 2-4 concrete options when possible (not vague open-ended)
- Include your recommendation as first option with "(Recommended)"
- After each answer, acknowledge briefly and ask the next question
- If an answer reveals NEW ambiguity, add a follow-up (within limit)
- Categories to cover (pick based on what's most impactful):
TECHNICAL APPROACH (at least 1 question):
- Architecture pattern choice
- Technology/library selection
- Integration strategy
- Error handling philosophy
SCOPE BOUNDARIES (at least 1 question):
- Must-have vs nice-to-have features
- Edge cases: in or out?
- MVP definition
TRADE-OFFS (if complexity >= L):
- Speed vs quality
- Simplicity vs flexibility
- Convention vs optimal
RISKS & CONSTRAINTS (if RESEARCH flags risks):
- Risk mitigation preference
- Deadline/dependency impacts
- Performance requirements
### Phase 3: Compile
After all questions answered, compile the interview results using your
[Nexus — Developer Interview] output format.
Return the compiled interview content.
After the interview, append your activity to rpi/features/{slug}/ACTIVITY.md:
### {current_date} — Nexus (Plan Interview)
- **Action:** Developer interview for {slug}
- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
- **Questions asked:** {count}
- **Quality:** {your quality gate result}
Store the output as $INTERVIEW.
Create rpi/features/{slug}/plan/ if needed. Write rpi/features/{slug}/plan/INTERVIEW.md with the $INTERVIEW content, using this format:

# Interview: {Feature Name}
Date: {current date}
Complexity: {$COMPLEXITY}
Questions: {N asked} / {$INTERVIEW_DEPTH planned}
{$INTERVIEW content organized by category:
- Technical Decisions (Q&A pairs with impact notes)
- Scope Boundaries (Q&A pairs with impact notes)
- Trade-offs (Q&A pairs with impact notes)
- Key Constraints Identified
- Open Items (flagged for agents)}
## Resolved Contradictions
(Populated by Steps 14-15)
Interview saved: rpi/features/{slug}/plan/INTERVIEW.md ({N} questions)
Launch Mestre agent with this prompt:
You are Mestre. Generate the engineering specification for feature: {slug}
## Request
{$REQUEST}
## Research
{$RESEARCH}
## Design Context
{$DESIGN}
## Project Context
{$CONTEXT}
## Relevant Specs
{$RELEVANT_SPECS}
## Developer Interview
{$INTERVIEW}
IMPORTANT: Your output MUST align with the developer's stated preferences
in the interview. If the developer chose approach X, use approach X.
If they marked something as out-of-scope, exclude it.
If an item is listed under "Open Items", use your best judgment but note your assumption.
Your task:
1. Read the request and research findings carefully
2. Make technical decisions: approach, architecture, patterns to follow
3. Identify files to create, modify, and remove
4. List architectural risks with mitigations
5. Output using your eng.md format: [Mestre -- Engineering Specification]
Be pragmatic. Follow existing codebase patterns from context.md and research findings. No over-engineering.
After generating eng.md, append your activity to rpi/features/{slug}/ACTIVITY.md:
### {current_date} — Mestre (Plan — eng.md)
- **Action:** Engineering specification for {slug}
- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
- **Architecture decisions:** {count}
- **Files planned:** {count create + modify}
- **Quality:** {your quality gate result}
Store the output as $ENG_OUTPUT.
Launch Clara agent with this prompt:
You are Clara. Generate the product specification for feature: {slug}
## Request
{$REQUEST}
## Research
{$RESEARCH}
## Design Context
{$DESIGN}
## Project Context
{$CONTEXT}
## Developer Interview
{$INTERVIEW}
IMPORTANT: Your output MUST align with the developer's stated preferences
in the interview. If the developer chose approach X, use approach X.
If they marked something as out-of-scope, exclude it.
If an item is listed under "Open Items", use your best judgment but note your assumption.
Your task:
1. Define user stories with concrete acceptance criteria (Given/When/Then)
2. Classify requirements: must-have, nice-to-have, out-of-scope
3. Cut anything that doesn't map to the core problem in REQUEST.md
4. Define success metrics
5. Output using your pm.md format: [Clara -- Product Specification]
Be ruthless with scope. Every requirement must have acceptance criteria.
After generating pm.md, append your activity to rpi/features/{slug}/ACTIVITY.md:
### {current_date} — Clara (Plan — pm.md)
- **Action:** Product specification for {slug}
- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
- **User stories:** {count}
- **Acceptance criteria:** {count}
- **Scope cuts:** {count of out-of-scope items}
- **Quality:** {your quality gate result}
Store the output as $PM_OUTPUT.
Only if $RUN_PIXEL is true:
Launch Pixel agent with this prompt:
You are Pixel. Generate the UX specification for feature: {slug}
## Request
{$REQUEST}
## Research
{$RESEARCH}
## Design Context
{$DESIGN}
## Project Context
{$CONTEXT}
## Engineering Specification
{$ENG_OUTPUT}
## Developer Interview
{$INTERVIEW}
IMPORTANT: Your output MUST align with the developer's stated preferences
in the interview. If the developer chose approach X, use approach X.
If they marked something as out-of-scope, exclude it.
If an item is listed under "Open Items", use your best judgment but note your assumption.
Your task:
1. Map the complete user flow from entry to completion
2. Define all states: empty, loading, error, success, edge cases
3. Identify accessibility requirements
4. Consider responsive behavior
5. Output using your ux.md format: [Pixel -- UX Specification]
Think from the user's perspective. If a flow needs a tooltip, the design failed.
Store the output as $UX_OUTPUT.
If $RUN_PIXEL is false: set $UX_OUTPUT to "No UX specification — no frontend detected.".
Launch Mestre agent to synthesize all specs into a concrete plan:
You are Mestre. Generate the implementation plan (PLAN.md) for feature: {slug}
## Engineering Specification
{$ENG_OUTPUT}
## Product Specification
{$PM_OUTPUT}
## UX Specification
{$UX_OUTPUT}
## Request
{$REQUEST}
## Research
{$RESEARCH}
## Design Context
{$DESIGN}
## Project Context
{$CONTEXT}
## Developer Interview
{$INTERVIEW}
IMPORTANT: Your output MUST align with the developer's stated preferences
in the interview. If the developer chose approach X, use approach X.
If they marked something as out-of-scope, exclude it.
If an item is listed under "Open Items", use your best judgment but note your assumption.
Your task:
1. Read all specifications and synthesize into numbered tasks
2. Each task must have: effort estimate, file list, dependencies, test criteria
3. Tasks must be small enough for one commit each
4. Group tasks into phases where logical
5. Include metadata: total tasks, total files, overall complexity
6. Output using your PLAN.md format: [Mestre -- Implementation Plan]
Rules:
- Tasks are numbered (1.1, 1.2, 2.1, etc.)
- Every task lists exact files it touches
- Dependencies reference task IDs
- If Clara marked something as out-of-scope, don't create tasks for it
- If the developer interview decided on approach X, all tasks must use approach X
- If the developer marked something as out-of-scope, don't create tasks for it
After generating PLAN.md, append your activity to rpi/features/{slug}/ACTIVITY.md:
### {current_date} — Mestre (Plan — PLAN.md)
- **Action:** Implementation plan for {slug}
- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
- **Tasks:** {count}
- **Complexity:** {S|M|L|XL}
- **Quality:** {your quality gate result}
Store the output as $PLAN_OUTPUT.
Launch Mestre agent to create delta specifications:
You are Mestre. Generate delta specs for feature: {slug}
## Implementation Plan
{$PLAN_OUTPUT}
## Engineering Specification
{$ENG_OUTPUT}
## Relevant Current Specs
{$RELEVANT_SPECS}
## Developer Interview
{$INTERVIEW}
IMPORTANT: Your output MUST align with the developer's stated preferences
in the interview. If the developer chose approach X, use approach X.
If they marked something as out-of-scope, exclude it.
If an item is listed under "Open Items", use your best judgment but note your assumption.
Your task:
1. Based on the plan, determine what specs need to change
2. For each new system component: create a spec in delta/ADDED/
3. For each existing spec that changes: create the updated version in delta/MODIFIED/
4. For any spec that becomes obsolete: create a marker in delta/REMOVED/
5. Delta specs capture ONLY what changes — not the entire system
Output the list of delta specs you will create, with their paths:
- delta/ADDED/{name}.md — {description}
- delta/MODIFIED/{name}.md — {description}
- delta/REMOVED/{name}.md — {description}
Then write each spec file.
Launch Nexus agent to perform adversarial review of all plan artifacts:
You are Nexus. You are performing ADVERSARIAL REVIEW of the plan
artifacts for feature: {slug}
Your mandate: You MUST find problems. "Looks good" is NOT acceptable.
If you cannot find real issues, you must document WHY the plan is
unusually solid — but never rubber-stamp.
## Artifacts to Review
### Engineering Specification (Mestre)
{$ENG_OUTPUT}
### Product Specification (Clara)
{$PM_OUTPUT}
### UX Specification (Pixel)
{$UX_OUTPUT}
### Implementation Plan (Mestre)
{$PLAN_OUTPUT}
### Developer Interview
{$INTERVIEW}
### Original Request
{$REQUEST}
### Research Findings
{$RESEARCH}
## Adversarial Analysis Protocol
### Pass 1: Cross-Artifact Contradictions
Check every pair of artifacts for conflicts:
- eng.md vs pm.md: Do technical decisions satisfy all acceptance criteria?
- eng.md vs ux.md: Does the architecture support all UI states/flows?
- pm.md vs PLAN.md: Does every must-have requirement have tasks?
- pm.md scope vs PLAN.md tasks: Are out-of-scope items sneaking in?
- PLAN.md vs INTERVIEW.md: Do tasks reflect developer's stated preferences?
### Pass 2: Assumption Challenges
For each major decision in eng.md, ask:
- "What if this assumption is wrong?"
- "What's the blast radius if this fails?"
- "Is there a simpler approach nobody considered?"
### Pass 3: Coverage Gaps
- Requirements without tasks
- Tasks without test criteria
- Files mentioned but not in any task
- UI states without error handling
- Happy path only (missing edge cases)
### Pass 4: Hidden Complexity
- Tasks estimated as S that touch >3 files
- Dependencies that create serial bottlenecks
- Integration points without error handling
- Data migrations without rollback plan
### Pass 5: REQUEST Drift
- Compare final PLAN.md against original REQUEST.md
- Has scope crept? Has the core problem shifted?
- Would the developer recognize this as what they asked for?
## Output Format
For each issue found, output using your [Nexus — Adversarial Review] format.
## Developer Resolution Protocol
After completing all passes:
1. Count issues by severity
2. CRITICAL issues: present one at a time via AskUserQuestion with suggested resolutions as options
3. HIGH issues: present as batch via AskUserQuestion, let developer pick which to address
4. MEDIUM/LOW issues: present summary, developer can dismiss or address
5. For each resolved issue: note the chosen resolution and which artifacts need patching
6. Return the full adversarial review with all resolutions noted
After adversarial review, append your activity to rpi/features/{slug}/ACTIVITY.md:
### {current_date} — Nexus (Plan Adversarial Review)
- **Action:** Adversarial review for {slug}
- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
- **Issues found:** {count by severity}
- **Contradictions resolved:** {count}
- **Coherence status:** {PASS|PASS with notes|NEEDS re-plan}
- **Quality:** {your quality gate result}
Store the output as $ADVERSARIAL_REVIEW.
If Nexus found CRITICAL issues that the developer could not resolve:
Adversarial review found unresolvable issues. Consider re-running:
/rpi:plan {slug} --force
Stop.
If $ADVERSARIAL_REVIEW contains resolved issues:
For each resolved issue in $ADVERSARIAL_REVIEW:

- Patch $ENG_OUTPUT, $PM_OUTPUT, $UX_OUTPUT, or $PLAN_OUTPUT as needed.
- Add <!-- Patched: {issue title} — {resolution chosen} --> as a comment near the change.
- In the $INTERVIEW content: append resolved contradictions to the ## Resolved Contradictions section:
### C{N}: {issue title}
**Severity:** {severity}
**Resolution:** {developer's chosen option}
**Artifacts patched:** {list of affected artifacts and sections}
Write rpi/features/{slug}/plan/INTERVIEW.md with the patched version of $INTERVIEW.

Write the remaining artifacts to rpi/features/{slug}/plan/ (rpi/features/{slug}/plan/INTERVIEW.md was already written in Step 8 and updated in Step 15):

- rpi/features/{slug}/plan/eng.md with $ENG_OUTPUT
- rpi/features/{slug}/plan/pm.md with $PM_OUTPUT
- If $RUN_PIXEL is true: write rpi/features/{slug}/plan/ux.md with $UX_OUTPUT
- rpi/features/{slug}/plan/PLAN.md with $PLAN_OUTPUT

Create the delta directories:

mkdir -p rpi/features/{slug}/delta/ADDED
mkdir -p rpi/features/{slug}/delta/MODIFIED
mkdir -p rpi/features/{slug}/delta/REMOVED
Read rpi/features/{slug}/ACTIVITY.md. Extract the <decision> tags from entries belonging to the Plan phase (Nexus interview, Mestre eng.md, Clara, Mestre PLAN.md, Nexus adversarial entries).

Read rpi/features/{slug}/DECISIONS.md if it exists (to get the last decision number for sequential numbering).

Append to rpi/features/{slug}/DECISIONS.md:

## Plan Phase
_Generated: {current_date}_
| # | Type | Decision | Alternatives | Rationale | Impact |
|---|------|----------|-------------|-----------|--------|
| {N} | {type} | {summary} | {alternatives} | {rationale} | {impact} |
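Sequential numbering across phases can be derived from the existing file; a sketch, assuming rows follow the table layout above with the decision number in the first cell:

```shell
#!/bin/sh
# Sketch: find the next decision number for DECISIONS.md so Plan-phase
# rows continue the sequence. Assumes rows look like "| 7 | type | ... |".
next_decision_number() {
  file=$1
  last=0
  if [ -f "$file" ]; then
    # Pull the leading number from each table row; empty if none found.
    last=$(grep -Eo '^\| [0-9]+ \|' "$file" | grep -Eo '[0-9]+' | sort -n | tail -1)
    [ -n "$last" ] || last=0
  fi
  echo $((last + 1))
}
```

A missing file yields 1, so the same helper covers the first phase that writes decisions.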
Plan complete: rpi/features/{slug}/plan/
Artifacts:
- plan/INTERVIEW.md (Nexus — developer interview)
- plan/eng.md (Mestre — engineering spec)
- plan/pm.md (Clara — product spec)
- plan/ux.md (Pixel — UX spec) ← only if frontend
- plan/PLAN.md (Mestre — implementation tasks)
- delta/ADDED/ ({N} new specs)
- delta/MODIFIED/ ({N} updated specs)
- delta/REMOVED/ ({N} removed specs)
Tasks: {N} | Files: {N} | Complexity: {$COMPLEXITY}
Interview: {N} questions asked, {N} contradictions resolved
Coherence: {Nexus adversarial verdict}
Next: /rpi {slug}
Or explicitly: /rpi:implement {slug}