Conducts a short adaptive interview (2-6 questions) to clarify feature intent and business context, then launches /team-feature with the compiled brief. Use for vague requests or explicit pre-build discussions.
Install: `npx claudepluginhub izmailovilya/ilia-izmailov-plugins --plugin agent-teams`

This skill uses the workspace's default tool permissions.
Conduct a short adaptive interview to understand the user's intent, then compile a brief and hand off to /team-feature.
/team-feature is fully autonomous — it scans the codebase, determines tech stack, finds patterns, classifies complexity, identifies risks, and plans tasks. It does NOT need help with technical details.
The interview exists for ONE reason: to capture intent and business context that no amount of code scanning can reveal. Every question must pass this test: "Can the AI figure this out from the codebase?" If yes — don't ask.
What AI can infer (DO NOT ASK): tech stack, affected files, architecture, risks, complexity, testing strategy, implementation approach, existing patterns.
What AI cannot infer (ASK): what to build, why it matters, who it's for, what success looks like, what's out of scope.
Primary tool: AskUserQuestion.

HARD GATE: No implementation until the brief is compiled and approved by the user.
Phase 0: launch researchers to understand the project. Use findings to generate smart button options.
// Launch in parallel:
Task(subagent_type="agent-teams:codebase-researcher",
prompt="Quick scan: tech stack, project structure, existing features (brief list), conventions.")
Task(subagent_type="agent-teams:codebase-researcher",
prompt="Find user-facing features: pages/screens, user roles, key user flows, settings.")
Task(subagent_type="agent-teams:codebase-researcher",
prompt="Architecture layers: data sources, main modules/domains, most complex areas, tests.")
While waiting, greet the user:
I'll ask a few quick questions to make sure I build exactly what you need. Takes 2-3 minutes.
When researchers return, show a brief project summary and use findings to populate button options in subsequent questions.
"What will change when this works — and for whom?"
AskUserQuestion(
questions=[{
"question": "What should change when this feature works? Who will notice the difference?",
"header": "Feature",
"options": [
// Generate 2-3 options from researcher findings if relevant, e.g.:
// {"label": "Improve [existing feature]", "description": "Make [X] work better for [users]"},
// {"label": "Add new [capability]", "description": "Users will be able to [Y]"},
],
"multiSelect": false
}]
)
This question combines WHAT, WHY, and WHO in one. Situational framing forces concrete thinking — "reporting system" becomes "clients can export last month's invoices as PDF."
If the answer is vague (< 15 words or abstract like "improve the dashboard"): Apply Branch B — propose 2-3 hypotheses as buttons:
AskUserQuestion(
questions=[{
"question": "I want to make sure I understand. Which of these is closest?",
"header": "Clarify",
"options": [
{"label": "[Hypothesis A]", "description": "[What this would mean concretely]"},
{"label": "[Hypothesis B]", "description": "[What this would mean concretely]"},
{"label": "[Hypothesis C]", "description": "[What this would mean concretely]"}
],
"multiSelect": false
}]
)
Generate hypotheses from: the user's words + researcher findings about the project.
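For example, if the user said "improve the dashboard" and Phase 0 found an analytics dashboard with slow queries, the hypotheses might look like this (labels and descriptions are illustrative, not a fixed list):

AskUserQuestion(
  questions=[{
    "question": "I want to make sure I understand. Which of these is closest?",
    "header": "Clarify",
    "options": [
      {"label": "Faster dashboard", "description": "Analytics pages load noticeably quicker"},
      {"label": "More useful charts", "description": "Rework visualizations around the questions users actually ask"},
      {"label": "Simpler layout", "description": "Reduce clutter so key numbers are easier to find"}
    ],
    "multiSelect": false
  }]
)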
"Who will use this most?"
Build options from researcher findings (existing user roles, pages, flows):
AskUserQuestion(
questions=[{
"question": "Who will use this most?",
"header": "Audience",
"options": [
// Dynamic from researcher findings, e.g.:
{"label": "All users", "description": "Everyone who uses the product"},
{"label": "[Role from project]", "description": "[Description]"},
{"label": "[Role from project]", "description": "[Description]"}
],
"multiSelect": true
}]
)
"How will you know it's working?"
If Q1 was specific ("clients can export invoices as PDF"), success criteria are implicit — SKIP. If Q1 was abstract ("improve the dashboard"), this question is essential.
AskUserQuestion(
questions=[{
"question": "How will you know this feature is working? What's the clearest sign of success?",
"header": "Success",
"options": [
{"label": "Users can do [X]", "description": "A specific action becomes possible"},
{"label": "Something gets faster/easier", "description": "An existing process improves"},
{"label": "Complaints stop", "description": "A known pain point goes away"},
{"label": "I'll describe it", "description": "I have specific criteria in mind"}
],
"multiSelect": false
}]
)
"Anything that must NOT be included?"
Without explicit exclusions, /team-feature builds the maximum possible scope. Ask this when the feature description sounds broad or could be interpreted expansively.
AskUserQuestion(
questions=[{
"question": "Anything that must NOT be part of this? (Things someone might assume are included but shouldn't be)",
"header": "Exclude",
"options": [
{"label": "Nothing specific", "description": "Build what makes sense"},
{"label": "Don't touch [X]", "description": "Leave [existing area] as is"},
{"label": "I'll list exclusions", "description": "I have specific things to exclude"}
],
"multiSelect": true
}]
)
After core questions, analyze all answers collected so far. If gaps or ambiguities remain, generate additional questions dynamically. These are NOT from a fixed list — generate them based on context.
Analyze answers and researcher findings, and ask a follow-up only when a remaining gap or ambiguity would actually change what gets built.
Frame every follow-up as a choice with buttons, not an open question. Use AskUserQuestion with 2-4 options that represent the concrete alternatives the AI is weighing.
AskUserQuestion(
questions=[{
"question": "[Generated question about the specific gap]",
"header": "[Short label]",
"options": [
{"label": "[Option A]", "description": "[What this means for the build]"},
{"label": "[Option B]", "description": "[What this means for the build]"},
{"label": "Your call", "description": "Let the team decide"}
],
"multiSelect": false
}]
)
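As a hypothetical instance: if Q2's audience included both admins and clients but the Q1 answer implied client-only data, a follow-up could be:

AskUserQuestion(
  questions=[{
    "question": "Should admins see exports for all clients, or only their own data?",
    "header": "Access",
    "options": [
      {"label": "All clients", "description": "Admins get a global view across accounts"},
      {"label": "Own data only", "description": "Admins are scoped like regular clients"},
      {"label": "Your call", "description": "Let the team decide"}
    ],
    "multiSelect": false
  }]
)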
Maximum 2 follow-ups. If still unclear after 6 total questions — compile what is known and note uncertainties in the brief.
# Feature Brief: [descriptive name]
## Intent
[What should change, for whom, why — from Q1 and any clarifications]
## Audience
[Who will use this — from Q2 or inferred from Q1]
## Success Criteria
[Concrete, observable criteria — from Q3 or inferred from Q1.
Each criterion should be verifiable: "User can do X", "Y is visible on screen", "Z happens when...".
These criteria flow into VERIFICATION_PLAN.md as "Business Criteria" during team-feature planning.]
## Exclusions
[What NOT to build — from Q4, or "None specified"]
## Additional Context
[Anything from AI-generated follow-ups]
## Project Context
[Condensed summary from researchers: stack, relevant features, key patterns]
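A filled-in brief might look like this (content is hypothetical, shown for shape only):

# Feature Brief: Invoice PDF Export

## Intent
Clients can export last month's invoices as PDF from the billing page, so they stop emailing support for copies.

## Audience
Client-role users of the billing dashboard.

## Success Criteria
- User can download a PDF of any past month's invoices from the billing page.
- PDF totals match the on-screen invoice totals.

## Exclusions
Invoice generation logic stays untouched; this is export only.

## Additional Context
None.

## Project Context
Next.js + Postgres; the billing module already renders invoices server-side.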
Present the compiled brief to the user, then ask:
AskUserQuestion(
questions=[{
"question": "Here's the plan I'll send to the build team. All correct?",
"header": "Launch?",
"options": [
{"label": "Looks good, launch!", "description": "Save the plan and start building"},
{"label": "I want to adjust", "description": "Let me change something first"}
],
"multiSelect": false
}]
)
If adjustments are needed, update the brief and show it again.
When the user approves, save the brief under .briefs/ in the project root:
Write(file_path=".briefs/[feature-name-kebab-case].md", content="{compiled brief}")
The file serves as the source of truth for user intent. /team-feature reads it to understand what to build, and its Success Criteria flow into VERIFICATION_PLAN.md as verifiable business checks.
The brief already contains Project Context from Phase 0 researchers — no need for team-feature to re-research the codebase.
Skill("team-feature", args=".briefs/[feature-name].md --no-research")
This skips codebase-researcher (brief has project context) and reference-researcher (team-feature will check .conventions/ itself). If .conventions/ doesn't exist, team-feature will spawn only reference-researcher as needed.
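End to end, for the hypothetical invoice-export brief above, the approval-to-handoff sequence is just two calls:

// After the user approves the brief:
Write(file_path=".briefs/invoice-pdf-export.md", content="{compiled brief}")
Skill("team-feature", args=".briefs/invoice-pdf-export.md --no-research")

Typical interview paths by request clarity: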
| User's initial message | Interview path | Total questions |
|---|---|---|
| Detailed description with audience and success criteria | Confirm understanding → Q4 exclusions → Launch | 1-2 |
| Clear feature but missing context | Q1 (refine) → Q3 (success) → Launch | 2-3 |
| Vague idea ("improve the dashboard") | Q1 → Branch B (hypotheses) → Q2 → Q3 → Q4 → Launch | 4-5 |
| Very vague ("make it better") | Q1 → Branch B → Follow-ups → Q3 → Q4 → Launch | 5-6 |
The interview should feel like a 2-minute conversation, not a form.
references/interview-principles.md — Expert principles from Cagan, JTBD, Shape Up, Torres, and rationale for question design