# /plan-interview:plan-interview

Stress-test a plan through a structured conversational interview before implementation begins.

## Installation

```shell
npx claudepluginhub shawn-sandy/agentics --plugin plan-interview
```

## Usage

```
/plan-interview:plan-interview                                # auto-detects from IDE or latest in ~/.claude/plans/
/plan-interview:plan-interview ~/.claude/plans/my-feature.md  # use a specific plan file
```

## Instructions

### Step 0 — Create progress todos

Before doing any other work, use `TodoWrite` to create todos for each step of this interview. This gives the user visibility into progress and ensures no step is skipped.

Create the following todos (all starting with `status: "pending"`):

- Step 2: Read, validate plan name, and analyze the plan
- Step 2.5: Skill tool analysis (skill-review mode only)
- Step 3a: Round 1 — Technical & Trade-offs
- S...

Mark each todo `status: "completed"` as you finish that step.
### Step 1 — Resolve the file to review

Use the first match from this priority order:

1. **Explicit argument:** If `$ARGUMENTS` is provided, treat it as the file path and read it directly.
2. **IDE context:** If the IDE exposes an open `.md` file and its content looks like a plan (contains headings like `## Implementation`, `## Plan`, `## Steps`, `## Instructions`, or similar structural markers), use it.
3. **Project config:** Check `.claude/settings.json` in the current project directory. If a `"plansDirectory"` key exists, glob `*.md` files from that path and use the most recently modified file. This takes precedence over the global config in step 4.
4. **Global default:** If none of the above applies, use Glob on `~/.claude/plans/*.md`, sort by modification time, and select the most recently modified file.

Once resolved, detect the review mode before proceeding:
Skill detection — the resolved file is a skill if:

- its filename is `SKILL.md`, or
- its frontmatter has a `name:` and `description:` field but the body has no plan-style headings (`## Implementation`, `## Plan`, `## Steps`, `## Context`)

Set `mode = skill-review` if detected, otherwise `mode = plan-review`.
Announce the file and mode:

- "Interviewing plan: ~/.claude/plans/my-feature.md"
- "Reviewing skill: path/to/SKILL.md"

If no file can be found via any of these methods, tell the user and stop.
In skill-review mode: skip the plan name validation section entirely and
proceed directly to Step 2.5 after reading the file.
### Step 2 — Read, validate plan name, and analyze the plan

Read the resolved file.

**Plan name validation:** Before extracting plan details, check whether the plan's filename and H1 heading accurately describe the plan's content.
- **Extract identifiers:** Get the filename (without path or `.md` extension) and the H1 heading (first line matching `# ...`).
- **Determine the plan's purpose:** Read enough of the plan to form a one-sentence summary of what it intends to accomplish.
Evaluate the filename against these criteria:

- It should describe the plan's content. Good: `create-skill-reviewer-plugin`, `fix-marketplace-json-location`. Bad: `fuzzy-swimming-pearl`, `hidden-popping-moonbeam`.
- `add-dark-mode-toggle` is descriptive even though it contains adjectives — the key test is whether the words relate to the plan content.
- Flag generic placeholder names such as `plan.md`, `untitled.md`, `draft.md`, `temp.md`, or `new-plan.md`.

Evaluate the H1 heading:

- It should describe the plan (Good: `# Plan: Create 'skill-reviewer' Plugin`. Bad: `# Plan` alone, or missing entirely.)
- Check that the filename and heading are aligned (e.g., `fix-auth-bug` and `# Plan: Refactor Authentication Module` are aligned because both concern authentication).

Record the result as one of: passes, filename needs attention, or heading needs attention (missing, or not in the `# Plan: [Description]` format).

If the name needs attention, present the finding immediately before continuing:
### Plan Name Review
| Element | Current | Issue | Suggested |
|---------|---------|-------|-----------|
| Filename | `fuzzy-swimming-pearl.md` | Random — unrelated to content | `create-skill-reviewer-plugin.md` |
| H1 Heading | _(missing)_ | No H1 heading found | `# Plan: Create 'skill-reviewer' Plugin` |
Then ask the user via AskUserQuestion: "Would you like me to rename this plan
file to [suggested-name].md?" (and if the H1 heading was also flagged,
include it in the offer: "…and update the heading to # Plan: [Description]?").
If the user confirms:

- Rename the file with `mv`.
- Update the H1 heading with `Edit` (if it was flagged).

If the user declines, proceed without changes.
If the name passes validation, skip this section silently.
Extract the plan's key details to guide question generation, along with complexity signals that indicate its scope.

Use the scope assessment to determine how many interview rounds to conduct: Round 1 always runs, Round 2 runs for medium and complex plans (or any plan with UI involvement; see below), and Round 3 runs for complex plans only.
After scope assessment, also check for UI involvement. Look for any of the following signals in the plan:

- Styling and component terms: `className`, `style`, CSS, Tailwind, styled-components, or similar.
- UI file extensions: `.tsx`, `.jsx`, `.css`, `.scss`, `.html`.

If any UI signals are detected, always include Round 2 — even for plans classified as short/focused. When triggering Round 2 on a short plan, briefly note what was detected (e.g., "Running Round 2 — plan references React components and `.tsx` files") so the user understands why.
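Detection of these signals can be sketched as a simple scan (illustrative only; the term list mirrors the bullets above and is not exhaustive, and `detect_ui_signals` is a hypothetical helper):

```python
import re

# Styling/component terms and UI file extensions from the checklist above.
UI_TERMS = re.compile(r"\b(className|style|CSS|Tailwind|styled-components)\b")
UI_EXTENSIONS = re.compile(r"\.(tsx|jsx|css|scss|html)\b")

def detect_ui_signals(plan_text: str) -> list[str]:
    """Return the UI signals found, so the Round 2 trigger can be explained."""
    found = set(UI_TERMS.findall(plan_text)) | \
            {f".{ext}" for ext in UI_EXTENSIONS.findall(plan_text)}
    return sorted(found)
```

Reporting the matched signals (rather than a bare boolean) is what lets the command explain why Round 2 ran on a short plan.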
### Step 2.5 — Skill tool analysis (skill-review mode only)

Skip this step entirely when `mode = plan-review`.
Analyze the skill file to detect tool usage and recommend allowed-tools for
any paired command file.
Parse existing allowed-tools: Extract the current allowed-tools value
from the YAML frontmatter. If absent, treat as empty.
Scan for tool references: Search the skill body for any of the following known Claude tool names (match as whole words or within backticks):
Read, Write, Edit, MultiEdit, Glob, Grep, Bash, AskUserQuestion,
TodoWrite, Agent, WebFetch, WebSearch, NotebookRead, NotebookEdit
Also detect filtered patterns such as Bash(git *) or Bash(gh *).
Classify each tool as one of:

- **Declared**: detected in the body and present in `allowed-tools`
- **Missing**: detected in the body but absent from `allowed-tools`
- **Undeclared**: present in `allowed-tools` but not detected in the body (flag for review, do not auto-remove)

Present the analysis table:
### Skill Tool Analysis
| Status | Tool | Detected In |
|------------|----------------|---------------------------------------|
| Declared | Read | Step 1 — reading skill file |
| Missing | Grep | Step 4.5 — deep grill codebase search |
| Undeclared | Write | In allowed-tools but not detected |
Output a suggested allowed-tools line, listing all detected tools in
alphabetical order:
**Suggested frontmatter** (applies to paired command file, not SKILL.md):
```yaml
allowed-tools: AskUserQuestion, Bash, Edit, Glob, Grep, Read, TodoWrite
```
> Note: `allowed-tools` is only valid in command files (`.md` files in
> `commands/`). If the reviewed skill has a paired command file, the
> recommendation applies there.
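The scan, classification, and suggestion steps can be sketched as follows (an illustrative sketch: the filtered `Bash(git *)` patterns are omitted, and the helper names are hypothetical):

```python
import re

KNOWN_TOOLS = ["Read", "Write", "Edit", "MultiEdit", "Glob", "Grep", "Bash",
               "AskUserQuestion", "TodoWrite", "Agent", "WebFetch", "WebSearch",
               "NotebookRead", "NotebookEdit"]

def detect_tools(body: str) -> set[str]:
    # Backticks are non-word characters, so \b matches both `Read` and bare Read.
    return {t for t in KNOWN_TOOLS if re.search(rf"\b{t}\b", body)}

def classify_tools(body: str, declared: set[str]) -> dict[str, str]:
    """Label each tool Declared / Missing / Undeclared as in the table above."""
    detected = detect_tools(body)
    labels = {}
    for tool in detected | declared:
        if tool in detected and tool in declared:
            labels[tool] = "Declared"
        elif tool in detected:
            labels[tool] = "Missing"      # used in the body, absent from allowed-tools
        else:
            labels[tool] = "Undeclared"   # declared but never detected; flag, don't remove
    return labels

def suggested_allowed_tools(body: str) -> str:
    """Alphabetical allowed-tools line for the paired command file."""
    return "allowed-tools: " + ", ".join(sorted(detect_tools(body)))
```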
### Step 3 — Interview rounds

Generate questions dynamically from the plan content — do not use generic or hardcoded questions. Each `AskUserQuestion` call may include up to 4 questions.
Round 1 — Technical & Trade-offs (always run):
Ask up to 4 questions covering:
Use multiSelect: true for questions where the user may want to flag multiple concerns (e.g., "Which of these areas need more investigation?").
Round 2a — UI/UX & Flows (run for medium and complex plans, or any plan with UI involvement — see Step 2):
Ask up to 4 questions covering:
- `prefers-reduced-motion`, transitions, focus indicators after animation

Round 2b — Accessibility & Semantic Structure (run immediately after Round 2a when Round 2 is triggered):
Ask up to 4 questions covering:
- `aria-describedby` for errors, live regions

Round 3 — Edge Cases & Best Practices (run for complex plans only):
Ask up to 4 questions covering:
After the structured rounds, review the full plan one more time and identify any issues that were not covered by the interview questions. These are concerns you observed independently — not topics already raised by the user. Look for:
If any out-of-scope concerns exist, present them as a clearly labelled section in the chat before the summary:
### Additional Concerns (Outside Structured Rounds)
- [Concern 1]: [Brief explanation of why this matters]
- [Concern 2]: [Brief explanation of why this matters]
If no additional concerns exist, skip this section silently.
Complexity Check (always run):
After the out-of-scope scan, evaluate the proposed approach against what the simplest working solution would look like. For each element that appears over-engineered, ask: Could a built-in, a single function, or a native API replace this abstraction? Only surface real issues — do not flag complexity for its own sake on genuinely complex plans. Only name a simpler alternative when one is clearly apparent; omit concerns where no obvious alternative exists.
If any complexity concerns are found, present them under a clearly labelled section:
### Complexity Concerns
- [Over-engineered element]: [Why it's unnecessary] — Simpler alternative: [specific suggestion]
Skip this section silently if no complexity concerns are found.
After all rounds and the out-of-scope check are complete, output a structured summary in the chat:
## Plan Interview Summary
### Key Decisions Confirmed
[List decisions the user confirmed or clarified during the interview]
### Plan Naming
[Include only if name validation found issues in Step 2. Reproduce the table
showing current name(s), the issue, and suggested replacement(s). Note whether
the user accepted or declined the rename offer. Omit this section entirely if
the name passed validation.]
### Open Risks & Concerns
[List risks, unknowns, or concerns surfaced — with brief context]
### Recommended Next Steps
[Amendments to the plan, additional spikes, or clarifications needed before implementation]
### Simplification Opportunities
[Concise list of areas where the plan can be reduced in scope or abstraction, with specific simpler alternatives — omit this section if no complexity concerns were found]
### Allowed Tools Recommendation
[Include only in `skill-review` mode. Reproduce the tool analysis table from
Step 2.5, plus the suggested `allowed-tools` line for any paired command file.
Omit this section entirely when reviewing a plan file.]
After presenting the summary, ask the user:
"Would you like me to append this interview summary to the plan file?"
Do not write to the plan file unless the user explicitly confirms. If they confirm, append the summary as a new ## Interview Summary section at the end of the plan file using the Edit tool.
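A minimal sketch of the confirm-then-append behavior (illustrative; `append_summary` is a hypothetical helper, and the real command uses the Edit tool rather than direct file writes):

```python
from pathlib import Path

def append_summary(plan_path: Path, summary_md: str, confirmed: bool) -> bool:
    """Append an '## Interview Summary' section only after explicit confirmation."""
    if not confirmed:
        return False  # never write without the user's explicit yes
    with plan_path.open("a") as f:
        f.write("\n\n## Interview Summary\n\n" + summary_md.strip() + "\n")
    return True
```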
In skill-review mode: if Step 2.5 identified missing tools, also ask:
"Would you like me to apply the `allowed-tools` recommendation to the paired command file?"
If confirmed, use Edit to add or update the allowed-tools line in the YAML
frontmatter of the corresponding command file (look for a .md file in
commands/ with a matching name). If no paired command file exists, note that
allowed-tools is only applicable to command files and skip.
Arguments: $ARGUMENTS