From ralph-specum
Runs 3-phase interactive interviews for requirements gathering, clarification, and brainstorming before specs or Ralph phases using adaptive questions with recommendations.
npx claudepluginhub tzachbon/smart-ralph --plugin ralph-specum
This skill uses the workspace's default tool permissions.
Adaptive brainstorming dialogue algorithm for all spec phases. Each phase command provides its own exploration territory (phase-specific areas to probe).
Each question must have 2-4 options (max 4). Keep the most relevant options and combine similar ones.
Every question asked via AskUserQuestion in Phase 1 leads with the recommended option (except when options are symmetric, in which case [Recommended] may be omitted):
AskUserQuestion:
question: "[Context-aware question referencing prior answers]. [One sentence rationale for the recommendation.]"
options:
- "[Recommended] [Option text -- the AI's suggested answer]"
- "[Alternative 1]"
- "[Alternative 2 if needed]"
- "Other"
Rules:
Rules: [Recommended] is a label prefix on the first option only. Always reorder the options so the recommended answer comes first with the [Recommended] label, rather than placing the label arbitrarily.
Example:
AskUserQuestion:
question: "Where should the spec live? You only have one specs directory configured, so the default is fine unless you want to reorganize."
options:
- "[Recommended] ./specs/ (default)"
- "Let me configure a different path"
- "Other"
Before asking any question, determine whether the answer is a codebase fact or a user decision:
Only ask what you cannot discover yourself.
After each response, check for early completion signals using token-based matching:
completionSignals = ["done", "proceed", "skip", "enough", "that's all", "continue", "next"]
tokens = tokenize(userResponse.lower())  # split on whitespace/punctuation
for signal in completionSignals:
    if signal in tokens:  # exact token match, not substring
        -> SKIP remaining questions, move to PROPOSE APPROACHES
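The token-based check can be made runnable as below. One assumption is worth labeling: "that's all" is multi-word, so it can never equal a single token; this sketch matches multi-word signals as whole phrases with word boundaries instead:

```python
import re

COMPLETION_SIGNALS = ["done", "proceed", "skip", "enough", "that's all", "continue", "next"]

def signals_completion(user_response: str) -> bool:
    """True if the response contains a completion signal as a whole token
    (or, for multi-word signals like "that's all", as a whole phrase).
    Sketch only -- the skill's actual tokenizer may differ."""
    text = user_response.lower()
    # Split on whitespace/punctuation, keeping apostrophes inside words
    tokens = re.findall(r"[a-z']+", text)
    for signal in COMPLETION_SIGNALS:
        if " " in signal:
            # Phrase match with word boundaries, not a raw substring check
            if re.search(rf"\b{re.escape(signal)}\b", text):
                return True
        elif signal in tokens:
            return True
    return False
```

Exact token matching is what keeps "skipping ahead" from triggering on "skip", and the boundary check keeps "that's allowed" from triggering on "that's all".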
Read all available context (.progress.md, prior artifacts, goal text). Build a question tree from the exploration territory with dependency ordering. Traverse the tree: auto-resolve codebase facts via exploration, ask user only about decisions. Each question leads with [Recommended] answer. No fixed question caps. Exit when all nodes resolved or user signals completion.
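The traversal above can be sketched as follows. All names here (QuestionNode, the explore and ask_user callbacks, the completion sentinel) are hypothetical stand-ins for illustration, assuming a simple dependency-ordered resolution loop:

```python
from dataclasses import dataclass, field

@dataclass
class QuestionNode:
    question: str
    kind: str                                        # "codebase_fact" or "user_decision"
    depends_on: list = field(default_factory=list)   # questions that must resolve first
    answer: str = None

def traverse(nodes, explore, ask_user):
    """Resolve nodes in dependency order: codebase facts via exploration,
    user decisions via AskUserQuestion. Stops early on a completion signal."""
    resolved = {}
    pending = list(nodes)
    while pending:
        # Pick a node whose dependencies are all resolved
        node = next(n for n in pending if all(d in resolved for d in n.depends_on))
        if node.kind == "codebase_fact":
            node.answer = explore(node.question)     # auto-resolve, never ask the user
        else:
            node.answer = ask_user(node.question)    # [Recommended]-first question
            if node.answer == "__completion_signal__":
                break                                # user said "done" etc.
        resolved[node.question] = node.answer
        pending.remove(node)
    return resolved
```

The key property is the fact/decision split: exploration answers everything it can before the user sees a single question, and there is no fixed cap on how many nodes the tree holds.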
See references/algorithm.md for full pseudocode.
Synthesize dialogue into 2-3 distinct approaches. Each includes: name, description, trade-offs. Lead with recommendation. Present via AskUserQuestion. Maximum 3 approaches (more causes decision fatigue). Trade-offs must be honest. No straw-man alternatives.
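A minimal sketch of that synthesis step, assuming a simple Approach record and an option formatter (both hypothetical names, not the skill's actual API):

```python
from dataclasses import dataclass

@dataclass
class Approach:
    name: str
    description: str
    trade_offs: str      # honest trade-offs, no straw-man alternatives

def approaches_to_options(approaches, recommended_index=0):
    """Format 2-3 approaches as AskUserQuestion options, recommendation first."""
    assert 2 <= len(approaches) <= 3, "max 3 approaches to avoid decision fatigue"
    ordered = [approaches[recommended_index]] + [
        a for i, a in enumerate(approaches) if i != recommended_index
    ]
    options = [f"[Recommended] {ordered[0].name} -- {ordered[0].trade_offs}"]
    options += [f"{a.name} -- {a.trade_offs}" for a in ordered[1:]]
    return options + ["Other"]
```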
See references/algorithm.md for full pseudocode.
Brief recap to user of key decisions and chosen approach. If user corrects something, update before storing. Store in .progress.md under Context Accumulator pattern.
See references/algorithm.md for full pseudocode.
When user selects "Other": ask a context-specific follow-up (never generic "elaborate"). Reference what the user typed. Continue until clarity or 5 rounds. Do not increment askedCount for follow-ups.
See references/examples.md for example follow-up patterns.
After each interview, update .progress.md: read existing content, append new section under "## Interview Responses" with descriptive keys reflecting what was discussed. Include the chosen approach.
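A minimal sketch of that update, assuming the section heading may not yet exist and that responses arrive as a dict with descriptive keys (function name and key format are illustrative, not the skill's actual code):

```python
from pathlib import Path

def store_interview(progress_path, responses, chosen_approach):
    """Append interview results under '## Interview Responses' in .progress.md,
    preserving existing content (Context Accumulator pattern)."""
    path = Path(progress_path)
    existing = path.read_text() if path.exists() else ""
    lines = ["", "## Interview Responses", ""]
    for key, value in responses.items():   # descriptive keys, e.g. "spec-location"
        lines.append(f"- **{key}**: {value}")
    lines.append(f"- **chosen-approach**: {chosen_approach}")
    path.write_text(existing + "\n".join(lines) + "\n")
```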
See references/examples.md for storage format.
references/algorithm.md -- Full 3-phase pseudocode (UNDERSTAND decision-tree, PROPOSE APPROACHES, CONFIRM & STORE)
references/examples.md -- Example interview questions, "Other" response handling, context storage format