From plan-interview
Use when the user asks to stress-test, validate, critique, or find gaps and risks in an implementation plan. Does not execute the plan or apply fixes.
Install:

```shell
npx claudepluginhub shawn-sandy/agentics --plugin plan-interview
```
Stress-test a plan through a structured conversational interview before implementation begins.
Before doing any other work, use TodoWrite to create todos for each step of
this interview. This gives the user visibility into progress and ensures no step
is skipped.
Create the following todos (all starting with status: "pending"):
Mark each todo status: "completed" as you finish that step.
Use the first match from this priority order:

1. The `.md` file currently open or selected in the IDE (provided via context). If it exists and its content looks like a plan (contains headings like `## Implementation`, `## Plan`, `## Steps`, `## Instructions`, or similar structural markers), use it.
2. `.claude/settings.json` in the current project directory. If a `plansDirectory` key exists, glob `*.md` files from that path and use the most recently modified file. This takes precedence over the global config.
3. `~/.claude/settings.json`. If a `plansDirectory` key exists, glob `*.md` files from that path and use the most recently modified file.
4. Glob `~/.claude/plans/*.md`, sort by modification time, and select the most recently modified file.

Once resolved, detect the review mode before proceeding:
Skill detection — the resolved file is a skill if:

- its filename is `SKILL.md`, or
- its frontmatter has `name:` and `description:` fields but the body has no plan-style headings (`## Implementation`, `## Plan`, `## Steps`, `## Context`)

Set mode = `skill-review` if detected, otherwise mode = `plan-review`.
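As a rough illustration, this detection heuristic can be sketched in Python. The function name and exact matching rules are illustrative assumptions, not part of the skill:

```python
import re

# Plan-style headings whose presence marks the file as a plan, not a skill
PLAN_HEADINGS = ("## Implementation", "## Plan", "## Steps", "## Context")

def detect_mode(path: str, text: str) -> str:
    """Classify the resolved file as 'skill-review' or 'plan-review'."""
    if path.endswith("SKILL.md"):
        return "skill-review"
    # Frontmatter with name:/description: but no plan headings => skill
    has_fields = re.search(r"^name:", text, re.M) and re.search(r"^description:", text, re.M)
    if has_fields and not any(h in text for h in PLAN_HEADINGS):
        return "skill-review"
    return "plan-review"
```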
Announce the file and mode:

- "Interviewing plan: `~/.claude/plans/my-feature.md`"
- "Reviewing skill: `path/to/SKILL.md`"

If no file can be found via any of these methods, tell the user and stop.
In skill-review mode: skip the plan name validation section entirely and
proceed directly to Step 2.5 after reading the file.
Read the resolved file.
Plan name validation: Before extracting plan details, check whether the plan's filename and H1 heading accurately describe the plan's content.
Extract identifiers: Get the filename (without path or .md extension)
and the H1 heading (first line matching # ...).
Determine the plan's purpose: Read enough of the plan to form a one-sentence summary of what it intends to accomplish.
Evaluate the filename against these criteria:

- Descriptive of the plan's content. Good: `create-skill-reviewer-plugin`, `fix-marketplace-json-location`. Bad: `fuzzy-swimming-pearl`, `hidden-popping-moonbeam`.
- Judged on relevance, not form: `add-dark-mode-toggle` is descriptive even though it contains adjectives; the key test is whether the words relate to the plan content.
- Not a generic placeholder such as `plan.md`, `untitled.md`, `draft.md`, `temp.md`, or `new-plan.md`.

Evaluate the H1 heading:

- Present and descriptive. Good: `# Plan: Create 'skill-reviewer' Plugin`. Bad: `# Plan` alone, or missing entirely.
- Aligned with the filename (e.g. `fix-auth-bug` and `# Plan: Refactor Authentication Module` are aligned because both concern authentication).

Record the result as one of: passing, filename needs attention, or heading missing/not in `# Plan: [Description]` format.

If the name needs attention, present the finding immediately before continuing:
### Plan Name Review
| Element | Current | Issue | Suggested |
| ---------- | ------------------------- | ----------------------------- | ---------------------------------------- |
| Filename | `fuzzy-swimming-pearl.md` | Random — unrelated to content | `create-skill-reviewer-plugin.md` |
| H1 Heading | _(missing)_ | No H1 heading found | `# Plan: Create 'skill-reviewer' Plugin` |
Then ask the user via AskUserQuestion: "Would you like me to rename this plan
file to [suggested-name].md?" (and if the H1 heading was also flagged,
include it in the offer: "…and update the heading to
# Plan: [Description]?").
If the user confirms:

- Rename the file with `mv`.
- Update the H1 heading with `Edit` (if it was flagged).

If the user declines, proceed without changes.
If the name passes validation, skip this section silently.
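A minimal sketch of the mechanical parts of this check. The placeholder list comes from the criteria above; the regex is an assumption about the `# Plan: [Description]` format, and the descriptiveness test itself requires judgment, so it is left to the interviewer:

```python
import re

# Generic placeholder stems flagged by the filename criteria
PLACEHOLDERS = {"plan", "untitled", "draft", "temp", "new-plan"}

def check_plan_name(filename_stem, h1):
    """Return naming issues for the plan's filename stem and H1 heading."""
    issues = []
    if filename_stem.lower() in PLACEHOLDERS:
        issues.append("filename is a generic placeholder")
    if h1 is None:
        issues.append("no H1 heading found")
    elif not re.match(r"#\s*Plan:\s*\S", h1):
        issues.append("H1 heading is not in '# Plan: [Description]' format")
    return issues
```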
Extract the following to guide question generation:
Also extract complexity signals from the plan:
Use the scope assessment to determine how many interview rounds to conduct:
After scope assessment, also check for UI involvement: look for any of the following signals in the plan:

- Mentions of `className`, `style`, CSS, Tailwind, styled-components, or similar.
- References to `.tsx`, `.jsx`, `.css`, `.scss`, or `.html` files.

If any UI signals are detected, always include Round 2 — even for plans classified as short/focused. When triggering Round 2 on a short plan, briefly note what was detected (e.g., "Running Round 2 — plan references React components and .tsx files") so the user understands why.
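The UI-involvement check reduces to a keyword scan. A sketch, with the signal list taken from above and the exact patterns as illustrative assumptions:

```python
import re

UI_PATTERNS = [
    r"\bclassName\b", r"\bstyle\b", r"\bCSS\b", r"\bTailwind\b",
    r"styled-components",                                # styling signals
    r"\.tsx\b", r"\.jsx\b", r"\.s?css\b", r"\.html\b",   # file extensions
]

def has_ui_signals(plan_text):
    """True if any UI signal from Step 2 appears in the plan text."""
    return any(re.search(p, plan_text, re.IGNORECASE) for p in UI_PATTERNS)
```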
Skip this step entirely when mode = plan-review.
Analyze the skill file to detect tool usage and recommend allowed-tools for
the skill's frontmatter.
Parse existing allowed-tools: Extract the current allowed-tools value
from the YAML frontmatter. If absent, treat as empty.
Scan for tool references: Search the skill body for any of the following known Claude tool names (match as whole words or within backticks):
Read, Write, Edit, MultiEdit, Glob, Grep, Bash, AskUserQuestion,
TodoWrite, Agent, WebFetch, WebSearch, NotebookRead, NotebookEdit
Also detect filtered patterns such as Bash(git *) or Bash(gh *).
Classify each tool as one of:

- Declared: detected in the body and listed in `allowed-tools`
- Missing: detected in the body but absent from `allowed-tools`
- Undeclared: listed in `allowed-tools` but not detected in the body (flag for review, do not auto-remove)

Present the analysis table:
### Skill Tool Analysis
| Status | Tool | Detected In |
| ---------- | ----- | ----------------------------------- |
| Declared | Read | Step 1 — reading skill file |
| Missing | Grep | Step 3 — interview codebase references |
| Undeclared | Write | In allowed-tools but not detected |
Output a suggested allowed-tools line, listing all detected tools in
alphabetical order:
**Suggested frontmatter** (commands only — `allowed-tools` is not supported
in SKILL.md files):
```yaml
allowed-tools: AskUserQuestion, Bash, Edit, Glob, Grep, Read, TodoWrite
```
> Note: `allowed-tools` is only valid in command files (`.md` files in
> `commands/`). If the reviewed skill has a paired command file, the
> recommendation applies there.
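The Declared/Missing/Undeclared split from this step is a straightforward set comparison. A sketch, with the tool list copied from above and the matching rule simplified to whole-word search:

```python
import re

KNOWN_TOOLS = [
    "Read", "Write", "Edit", "MultiEdit", "Glob", "Grep", "Bash",
    "AskUserQuestion", "TodoWrite", "Agent", "WebFetch", "WebSearch",
    "NotebookRead", "NotebookEdit",
]

def scan_tools(body):
    """Tools referenced in the skill body as whole words (covers backticked uses)."""
    return {t for t in KNOWN_TOOLS if re.search(rf"\b{t}\b", body)}

def classify(declared, detected):
    """Split tools into the three statuses shown in the analysis table."""
    return {
        "declared": declared & detected,   # in both frontmatter and body
        "missing": detected - declared,    # used in body but not declared
        "undeclared": declared - detected, # declared but never detected
    }
```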
Skip this step entirely when mode = plan-review.
Read reference/skill-checklist.md and evaluate the reviewed skill against every item in the checklist. For each category, mark items as passing or failing based on what you can observe in the skill file and its directory.
Present the results as a scored table:
### Skill Quality Checklist
| Category | Passing | Failing | N/A |
| --------------- | ------- | ------- | --- |
| Core quality | 8 | 2 | 0 |
| Code & scripts | — | — | 8 |
| Testing | 0 | 3 | 1 |
Then list only the failing items with a brief note on each:
**Failing items:**
- [ ] Description does not include when to use the Skill
- [ ] No concrete examples provided
- [ ] Not tested with multiple models
Mark items as N/A (not applicable) when the skill has no scripts or code (entire "Code and scripts" category may be N/A for instruction-only skills).
Generate questions dynamically from the plan content — do not use generic or
hardcoded questions. Each AskUserQuestion call may include up to 4 questions.
Round 1 — Technical & Trade-offs (always run):
Ask up to 4 questions covering:
Use multiSelect: true for questions where the user may want to flag multiple
concerns (e.g., "Which of these areas need more investigation?").
Round 2a — UI/UX & Flows (run for medium and complex plans, or any plan with UI involvement — see Step 2):
Ask up to 4 questions covering:

- `prefers-reduced-motion`, transitions, focus indicators after animation

Round 2b — Accessibility & Semantic Structure (run immediately after Round 2a when Round 2 is triggered):

Ask up to 4 questions covering:

- `aria-describedby` for errors, live regions

Round 3 — Edge Cases & Best Practices (run for complex plans only):
Ask up to 4 questions covering:
Deep Grill: To walk every decision branch in depth after this interview, run the standalone `deep-grill` skill. Say "deep grill this plan" or invoke it directly with `/plan-interview:deep-grill [plan-file-path]`.
After the structured rounds, review the full plan one more time and identify any issues that were not covered by the interview questions. These are concerns you observed independently — not topics already raised by the user. Look for:
If any out-of-scope concerns exist, present them as a clearly labelled section in the chat before the summary:
### Additional Concerns (Outside Structured Rounds)
- [Concern 1]: [Brief explanation of why this matters]
- [Concern 2]: [Brief explanation of why this matters]
If no additional concerns exist, skip this section silently.
Complexity Check (always run):
After the out-of-scope scan, evaluate the proposed approach against what the simplest working solution would look like. For each element that appears over-engineered, ask: Could a built-in, a single function, or a native API replace this abstraction? Only surface real issues — do not flag complexity for its own sake on genuinely complex plans. Only name a simpler alternative when one is clearly apparent; omit concerns where no obvious alternative exists.
If any complexity concerns are found, present them under a clearly labelled section:
### Complexity Concerns
- [Over-engineered element]: [Why it's unnecessary] — Simpler alternative:
[specific suggestion]
Skip this section silently if no complexity concerns are found.
After all rounds and the out-of-scope check are complete, output a structured summary in the chat:
## Plan Interview Summary
### Key Decisions Confirmed
[List decisions the user confirmed or clarified during the interview]
### Plan Naming
[Include only if name validation found issues in Step 2. Reproduce the table
showing current name(s), the issue, and suggested replacement(s). Note whether
the user accepted or declined the rename offer. Omit this section entirely if
the name passed validation.]
### Open Risks & Concerns
[List risks, unknowns, or concerns surfaced — with brief context]
### Recommended Next Steps
[Amendments to the plan, additional spikes, or clarifications needed before
implementation]
### Simplification Opportunities
[Concise list of areas where the plan can be reduced in scope or abstraction,
with specific simpler alternatives — omit this section if no complexity concerns
were found]
### Skill Quality Checklist Results
[Include only in `skill-review` mode. Reproduce the scored table and failing
items from Step 2.6. Omit this section entirely when reviewing a plan file.]
### Allowed Tools Recommendation
[Include only in `skill-review` mode. Reproduce the tool analysis table from
Step 2.5, plus the suggested `allowed-tools` line for any paired command file.
Omit this section entirely when reviewing a plan file.]
After presenting the Step 5 summary, ask the user:
"Would you like me to update the plan with suggested changes and append this interview summary to the plan file?"
Do not write to the plan file unless the user explicitly confirms. If they
confirm, update the plan with suggested changes and append the summary as a new
## Interview Summary section at the end of the plan file using the Edit
tool. If they decline, do not modify the file.
In skill-review mode: if Step 2.5 identified missing tools, also ask:
"Would you like me to apply the `allowed-tools` recommendation to the paired command file?"
If confirmed, use Edit to add or update the allowed-tools line in the YAML
frontmatter of the corresponding command file (look for a .md file in
commands/ with a matching name). If no paired command file exists, note that
allowed-tools is only applicable to command files and skip.
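Applying the recommendation amounts to a small frontmatter edit. A sketch of the equivalent text transformation (the helper name is an assumption; the skill itself performs this via the `Edit` tool):

```python
import re

def set_allowed_tools(command_md, tools):
    """Add or replace the allowed-tools line in a command file's YAML frontmatter."""
    line = "allowed-tools: " + ", ".join(sorted(tools))
    if re.search(r"^allowed-tools:.*$", command_md, flags=re.M):
        # Replace the existing declaration in place.
        return re.sub(r"^allowed-tools:.*$", line, command_md, count=1, flags=re.M)
    # No existing line: insert just after the opening '---' of the frontmatter.
    return re.sub(r"\A---\n", "---\n" + line + "\n", command_md, count=1)
```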