From hoyeon
Turns vague goals into structured requirements.md via systematic interview across business/user/tech axes, extraction, and cross-check. Outputs for /blueprint in greenfield/feature/refactor/bugfix formats.
npx claudepluginhub team-attention/hoyeon --plugin hoyeon

This skill uses the workspace's default tool permissions.
Transform a vague goal into structured, traceable requirements through:

0. WHERE Grounding — establish project type, situation, ambition, and risk modifiers
0.5. Context Research (brownfield only) — scan existing codebase before asking the user
requirements.md committed in cli format

The final deliverable is `<spec_dir>/requirements.md` in the format that /blueprint consumes:
- Frontmatter: `type` (greenfield|feature|refactor|bugfix), `goal`, `non_goals[]`
- `## R-X<num>:` parent requirements, each with nested `#### R-X<num>.Y:` sub-requirements carrying given/when/then
- X in the ID is the axis code: B=Business, U=Interaction (user), T=Tech
- `## Open Decisions` section with `### OD-N:` blocks

All intermediate files (qa-log.md, reqs-business.md, reqs-interaction.md, reqs-tech.md) stay in `<spec_dir>/` for traceability but are NOT read by /blueprint.
The WHERE is the combination of current situation and intended scope. It calibrates how deep the interview goes on each axis — without it, every project gets the same heavyweight treatment, which over-engineers toys and under-specs production systems.
Before asking for goal/non-goals, present your understanding of the user's request using this template:
**Mirror — Here's what I understood**
**Understanding:**
<1–2 sentences paraphrasing the user's request in your own words. Not a verbatim echo.>
**Goal:**
- <bullet 1: concrete outcome>
**Non-Goal (explicitly out of scope):**
- <bullet 1: exclusion — at least one must be inferred by you, not stated by user>
**Ambiguous (scope-level unknowns):**
- <ambiguity about what "done" means, what's included, or who the user is>
Then confirm via AskUserQuestion:
AskUserQuestion(
question: "Does this match your intent?",
options: [
{ label: "Approve", description: "Proceed to WHERE grounding" },
{ label: "Revise", description: "Fix goal/non-goal/scope" }
]
)
Rules:
- `goal` and `non_goals` carry over from the mirror (no need to re-ask in free-text)

Use one AskUserQuestion call with 3 questions batched:
questions: [
{
question: "What kind of thing are you building?",
header: "Project type",
options: [
{ label: "User-facing app", description: "Web, mobile, or desktop app with end-user UI" },
{ label: "API / Service", description: "Backend API, data pipeline, or background service" },
{ label: "Dev tool / Library", description: "CLI tool, SDK, library, automation script" },
{ label: "Infrastructure", description: "Infra change, deployment config, platform work" }
]
},
{
question: "What's the current codebase situation?",
header: "Situation",
options: [
{ label: "Greenfield", description: "Brand new project, no existing code" },
{ label: "Brownfield extension", description: "Adding to an existing codebase, minimal changes to what's there" },
{ label: "Brownfield refactor", description: "Reworking existing code; structural changes expected" },
{ label: "Hybrid", description: "New module inside existing project, both new and integration work" }
]
},
{
question: "What's the ambition level?",
header: "Ambition",
options: [
{ label: "Toy / Experiment", description: "Days of work, personal/internal, failure acceptable" },
{ label: "Feature / MVP", description: "1-2 weeks, real users, core functionality only" },
{ label: "Product", description: "Long-term, external customers, reliability and security matter" }
]
}
]
Some projects are "small but dangerous" — a toy that handles real money, a refactor that touches a public API. Risk modifiers catch these cases by forcing relevant axes to deep regardless of Ambition.
questions: [
{
question: "Select any that apply to this project (pick none if none apply):",
header: "Risk factors",
multiSelect: true,
options: [
{ label: "Sensitive data", description: "Handles PII, payments, health, secrets, or regulated data" },
{ label: "External exposure", description: "Accessible from public internet or external customers" },
{ label: "Irreversible ops", description: "Migrations, destructive actions, public contract changes" },
{ label: "High scale", description: "High traffic, large data volumes, or strict latency targets" }
]
}
]
If the user picks none, proceed with base calibration. Otherwise, modifiers will escalate specific nodes to deep in Step 0.4.
- spec-name: derived from the goal (e.g. user-dashboard)
- spec_dir: default `.hoyeon/specs/{spec-name}/`

Create a requirements.md stub with the correct frontmatter so /blueprint can read it later:
hoyeon-cli req init <spec_dir> --type <greenfield|feature|refactor|bugfix> --goal "<one-line goal>"
Map WHERE.SITUATION → --type:
- greenfield → greenfield
- brownfield-extension → feature
- brownfield-refactor → refactor
- hybrid → feature (or refactor if structural churn dominates)
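The mapping above can be written as a simple lookup table. This is a hypothetical sketch of the list, not skill code; the dict name is illustrative:

```python
# WHERE.SITUATION → --type mapping for `hoyeon-cli req init` (illustrative sketch).
SITUATION_TO_TYPE = {
    "greenfield": "greenfield",
    "brownfield-extension": "feature",
    "brownfield-refactor": "refactor",
    "hybrid": "feature",  # use "refactor" instead when structural churn dominates
}

print(SITUATION_TO_TYPE["brownfield-refactor"])  # refactor
```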
The stub is overwritten at Phase 4.3 once the interview is complete. If `<spec_dir>/requirements.md` already exists from a prior run, skip req init and proceed (the user is re-running specify on the same spec).

Using the `${baseDir}/templates/qa-log.md` template, create `<spec_dir>/qa-log.md` with spec name, goal, non-goals, and the WHERE context filled in.

Combine SITUATION × AMBITION × RISK_MODIFIERS to assign each taxonomy node a depth level (light, standard, or deep). Apply rules in order — later rules escalate, never downgrade.
Step A — SITUATION base:
Step B — AMBITION modulation:
Step C — RISK_MODIFIERS escalation (override Step B downgrades):
Examples:
Project-type notes — PROJECT_TYPE doesn't change calibration numbers, but it changes what each INTERACTION node means (the interaction-extractor reads project_type for lens selection).
Write the derived calibration into qa-log.md frontmatter as depth_calibration: so Phase 1 and the gap-auditor can read it.
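The escalate-only ordering of Steps A–C can be sketched as follows. This is a minimal sketch: the node names and input values are illustrative, not the skill's actual calibration tables.

```python
# Hypothetical sketch of the Step A → Step C depth calibration.
# Ordering matters: later rules escalate, never downgrade.
DEPTHS = ["light", "standard", "deep"]

def escalate(current: str, proposed: str) -> str:
    """Keep the deeper of the two levels (escalate-only)."""
    return max(current, proposed, key=DEPTHS.index)

def calibrate(base: dict, ambition_depth: str, risk_nodes: set) -> dict:
    depth = dict(base)                          # Step A: SITUATION base
    for node in depth:                          # Step B: AMBITION modulation
        depth[node] = escalate(depth[node], ambition_depth)
    for node in risk_nodes & depth.keys():      # Step C: risk modifiers force deep
        depth[node] = "deep"
    return depth

print(calibrate({"SECURITY": "light", "ARCH": "standard"},
                "standard", {"SECURITY"}))
# {'SECURITY': 'deep', 'ARCH': 'standard'}
```

This captures the "small but dangerous" case from Step 0.35: a risk modifier pushes SECURITY to deep even though ambition alone would leave it at standard.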
Skip this phase entirely if where.situation == greenfield. Run it for brownfield-extension, brownfield-refactor, and hybrid.
Why: brownfield work depends on existing code that the user may not fully remember. Asking the user "what's the architecture?" when the codebase is right there is wasteful and unreliable. Scan the code first, then interview them on decisions — not facts.
Task(subagent_type="code-explorer",
prompt="Goal: {goal}. Find: existing patterns, modules, or files relevant to this change. Report as file:line format with brief summary.")
Task(subagent_type="code-explorer",
prompt="Find project structure and toolchain: package manifests, build/test/lint commands, entry points, deployment config. Report as file:line format.")
Task(subagent_type="docs-researcher",
prompt="Goal: {goal}. Search ADRs, READMEs, docs/, CLAUDE.md, config files for conventions, architecture decisions, and constraints relevant to this work. Report as file:line format.")
For brownfield-refactor specifically, add:
Task(subagent_type="code-explorer",
prompt="Find all call sites and dependents of {the area being refactored}. Report impact surface as file:line format.")
Output: qa-log.md `research:` section.

Write findings into qa-log.md under a new top-level heading `## Research` (before the axis sections). Include:
Also add research_done: true to the where: frontmatter block so later phases can rely on it.
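A hypothetical excerpt of qa-log.md after Phase 0.5. The file paths and finding labels are illustrative; only `research_done: true` under `where:` and the `## Research` heading are prescribed above:

```markdown
---
where:
  situation: brownfield-extension
  research_done: true
---

## Research
- Relevant patterns: src/auth/session.ts:42 — existing session handling
- Toolchain: package.json:1 — npm test / npm run lint / npm run build
- Conventions: docs/adr/007-auth.md:1 — SSO decision, applies to this change
```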
During Phase 1, when asking Tech axis questions, ground them in the `## Research` findings instead of re-asking the user for facts the codebase scan already answered:
You are the interviewer. Ask questions one axis at a time, following this taxonomy:
Axis 1: BUSINESS — WHO, WHY, WHAT, SUCCESS, SCOPE, RISK
Axis 2: INTERACTION — JOURNEY, HAPPY, EDGE, STATE, FEEDBACK, ACCESS
Axis 3: TECH — ARCH, DATA, INFRA, DEPEND, COMPAT, SECURITY
The INTERACTION axis is consumer-generic. Reinterpret nodes based on where.project_type:
| project_type | JOURNEY | HAPPY | FEEDBACK | ACCESS |
|---|---|---|---|---|
| user-facing | User entry → outcome | Core UI flow | Visual/audio reactions | Permissions, a11y |
| api-service | Consumer integration flow | Canonical API call | HTTP responses, errors | Auth, rate limits |
| dev-tool | Install → invoke → result | --help / canonical use | stdout+exit codes | Install, platform |
| infrastructure | Operator procedure | Green deploy path | Dashboards, alerts | RBAC, IAM |
EDGE/STATE are universal: failures & conditional behavior apply everywhere.
PRIMARY: Use AskUserQuestion tool for all interview questions. Free-text prompting should only be a fallback when options genuinely cannot be enumerated.
- Use the description field to show consequences/trade-offs per choice
- Use multiSelect: true for non-exclusive choices

Batch when:
Do NOT batch when:
Max 4 per call (tool limit). Default: batch 2-3 related questions per turn.
Each AskUserQuestion option must have:
- label: 1-5 words (what the user picks)
- description: the consequence/implication of this choice

Example (batched):
questions: [
{
question: "Who is the primary user?",
header: "Primary user",
options: [
{ label: "Senior developers", description: "Power users; expect depth + customization" },
{ label: "Junior developers", description: "Learning users; expect guidance + safe defaults" },
{ label: "Both equally", description: "Dual-mode UX; complexity to serve both" }
]
},
{
question: "What's the success signal?",
header: "Success metric",
options: [
{ label: "Team-wide adoption", description: "Qualitative; hard to measure" },
{ label: "Daily active use", description: "Quantitative DAU; needs tracking" },
{ label: "Time saved per task", description: "Efficiency metric; baseline needed" }
]
}
]
Drills happen at two distinct moments, with different judges. Both are required.
When: Immediately after an AskUserQuestion answer arrives.
How: Scan the selected option + any "Other" free-text for these signals. If present, the NEXT AskUserQuestion is a drill on the same node.
| Signal | Example answer | Drill question |
|---|---|---|
| Vague qualifier | "fast", "easy", "simple", "good UX" | AskUserQuestion with concrete thresholds as options (e.g., "<1s", "<3s", "<10s") |
| Hidden assumption | "obviously X", "of course Y" | AskUserQuestion surfacing the assumption ("does X always hold? What if not?") |
| Multiple interpretations | A term that could mean 2+ things (e.g., "admin") | AskUserQuestion listing each interpretation as an option |
| New stakeholder | Mentions a role not yet covered | Add a new node under the current axis, AskUserQuestion about their perspective |
Inline drills are fast and subjective — you catch the obvious ones on the spot.
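The inline scan could be sketched like this. It is a naive substring check under assumed signal lists, not the skill's actual detector; real detection is the interviewer's judgment:

```python
# Type-A inline drill trigger (sketch). Signal lists are illustrative and
# matching is a naive substring check, so e.g. "breakfast" would match "fast".
VAGUE_QUALIFIERS = {"fast", "easy", "simple", "good ux"}
ASSUMPTION_MARKERS = {"obviously", "of course"}

def inline_drill_signal(answer: str):
    """Return the drill signal (if any) that the answer triggers."""
    text = answer.lower()
    if any(q in text for q in VAGUE_QUALIFIERS):
        return "vague-qualifier"    # next AskUserQuestion offers concrete thresholds
    if any(m in text for m in ASSUMPTION_MARKERS):
        return "hidden-assumption"  # next AskUserQuestion surfaces the assumption
    return None                     # no inline drill; the Type-B audit is the net

print(inline_drill_signal("it should feel fast"))  # vague-qualifier
```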
When: After an axis ends, gap-auditor returns verdict=CONTINUE with an AMBIGUOUS list.
How: The auditor's AMBIGUOUS list tells you exactly which nodes still need drilling. Convert each AMBIGUOUS item into an AskUserQuestion targeting that specific ambiguity, then continue until gap-auditor returns SUFFICIENT.
Post-audit drills are systematic — they catch what inline judgment missed.
Type A is a fast first-pass filter; Type B is the safety net. Relying on only Type A means subjective blind spots slip through. Relying on only Type B means needlessly long axis rounds because trivially fixable ambiguities aren't caught early.
Only use free-text Q&A (no AskUserQuestion) when:
Users will sometimes not know the answer (especially on Tech axis, or when the PM doesn't know implementation details). Don't let the interview stall.
When the user's answer is "I don't know / not sure / up to you / whatever works" (either by Other free-text or by tone):
- Make a tentative judgment and log it to qa-log.md with `status: assumption`, including the reasoning in a `>` blockquote.
- Add the item to `## Open Items` in qa-log.md.
Don't re-ask the same question. Move on. The Phase 4 Confirmation will let the user review and override any tentative judgment.
Example:
Q: What authentication method?
User: "Dunno, whatever works"
→ Tentative: "Given brownfield-extension + sensitive-data, I'll assume existing SSO integration"
→ Log as assumption with status: assumption
→ Add to Open Items: "OD: auth method (tentative: SSO based on existing system)"
→ Continue to next question
Update qa-log.md after each exchange using the template format:
- `#### Q:` for the question, `>` blockquote for the answer (include the selected option label + any free-text)
- `##### Drill:` for depth follow-ups
- `status: resolved | ambiguous | assumption`

Dispatch the gap-auditor agent at these specific moments:
Do NOT call gap-auditor after every AskUserQuestion turn — that's wasteful. Call it at boundaries.
Each call:
- update qa-log.md first
- pass it the current qa-log.md content

All 3 axes must receive a SUFFICIENT verdict, AND the final audit must also return SUFFICIENT.
Update qa-log.md frontmatter: status: complete with final coverage scores.
Run 3 agents in parallel:
Each uses the `${baseDir}/templates/reqs-axis.md` template and writes to:

- .hoyeon/specs/{spec-name}/reqs-business.md
- .hoyeon/specs/{spec-name}/reqs-interaction.md
- .hoyeon/specs/{spec-name}/reqs-tech.md

Collect `confidence: low` and `open_questions` items from the extractor outputs.

Before writing the final requirements.md, surface everything to the user for explicit acceptance. This prevents assumptions from silently becoming "requirements."
Show the user a concise summary grouped into:
## Final Confirmation
### Confirmed Requirements
{count by axis: Business N, Interaction N, Tech N}
### Conflicts to Resolve ({count})
- {ID pair}: {conflict description}
→ Options to resolve
### Open Questions ({count})
- {ID}: {question} (axis: {axis})
### Assumptions to Accept ({count})
- {ID}: {assumption the extractor made} — accept / reject / replace
### Out of Scope (Non-Goals)
- {items from where.non_goals}
For each CONFLICT and ASSUMPTION, use AskUserQuestion with options (typically: accept / reject / modify / defer).
For OPEN QUESTIONS: either answer them now (free-text or AskUserQuestion) or explicitly defer them to the open_decisions list.
After all conflicts and assumptions are resolved, show the full requirements list before writing to disk:
[specify] Final Requirements Preview
Type: greenfield | Goal: "<goal>"
Non-goals: <list>
## R-B1: <title>
- R-B1.1: <sub title>
given: ... | when: ... | then: ...
- R-B1.2: ...
## R-U1: <title>
- R-U1.1: ...
## R-T1: <title>
- R-T1.1: ...
Summary: {N} parent reqs, {M} sub-reqs (B:{b} U:{u} T:{t})
Open Decisions: {count or "none"}
Then ask:
AskUserQuestion(
question: "Finalize these requirements?",
options: [
{ label: "Approve", description: "Write requirements.md and finish" },
{ label: "Edit", description: "Modify specific requirements before writing" },
{ label: "Re-interview", description: "Go back to interview for missing coverage" }
]
)
If Edit: ask which requirements to change, apply edits, re-show preview. Max 3 rounds. If Re-interview: return to Phase 1 with the gap identified.
Write requirements.md only after the user has explicitly approved the preview.

Using the `${baseDir}/templates/requirements.md` template (cli format), write `<spec_dir>/requirements.md` (replacing the stub created by `hoyeon-cli req init` at Phase 0.3). Final shape:
---
type: greenfield | feature | refactor | bugfix
goal: "<one-line goal>"
non_goals:
- "<item>"
---
# Requirements
## R-B1: <parent title>
- behavior: <one-sentence system behavior>
#### R-B1.1: <sub title>
- given: <precondition>
- when: <trigger>
- then: <expected outcome>
#### R-B1.2: ...
## R-U1: <Interaction requirement parent>
...
## R-T1: <Tech requirement parent>
...
## Pre-work
- [ ] <action> (blocking)
- [ ] <action> (non-blocking)
## Open Decisions
### OD-1: <title>
- context: <why undecided>
- options: [<A>, <B>]
- impact: <what is blocked>
Format rules (these match /blueprint's expectations):

- `## R-X<num>:` at H2, where X is the axis code (B=Business, U=Interaction, T=Tech)
- `#### R-X<num>.Y:` at H4 with given/when/then lines
- Frontmatter keys: `type`, `goal`, `non_goals[]` only. Do NOT add extra keys like `spec`, `phase`, `date`, `total_requirements` — those broke with cli's frontmatter format.
- Mark Pre-work items `(blocking)` or `(non-blocking)`; execute will gate on blocking items.
- Hand off with `/blueprint <spec_dir>/`.

All outputs go to `<spec_dir>/` (default .hoyeon/specs/{spec-name}/):
| File | Phase | Description | Consumed by |
|---|---|---|---|
| requirements.md | 0.3 (stub) / 4.3 (final) | Requirements in cli format (frontmatter + flat `## R-X` / `#### R-X.Y` with GWT) | /blueprint |
| qa-log.md | 1 | Full interview transcript | audit/traceability only |
| reqs-business.md | 2 | Axis extraction scratch | merged into requirements.md |
| reqs-interaction.md | 2 | Axis extraction scratch | merged into requirements.md |
| reqs-tech.md | 2 | Axis extraction scratch | merged into requirements.md |
Only requirements.md is load-bearing for downstream skills. The other files are internal scratch/audit — /blueprint does not read them.
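Assuming the spec name `user-dashboard` from the earlier example, a finished spec_dir looks like:

```
.hoyeon/specs/user-dashboard/
├── requirements.md      # consumed by /blueprint
├── qa-log.md            # interview transcript (audit only)
├── reqs-business.md     # extraction scratch
├── reqs-interaction.md  # extraction scratch
└── reqs-tech.md         # extraction scratch
```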
- `hoyeon-cli req init <spec_dir> --type <t> --goal "<g>"` (Phase 0.3) — creates dir + requirements.md stub
- The final requirements.md (Phase 4.3) is written directly via the Write tool.

| Agent | Phase | Purpose |
|---|---|---|
| gap-auditor | 1 | Interview coverage validation |
| business-extractor | 2 | Business req extraction |
| interaction-extractor | 2 | Interaction req extraction (project-type-aware) |
| tech-extractor | 2 | Tech req extraction |