```
npx claudepluginhub xoai/sage
```

This skill uses the workspace's default tool permissions.
<!-- sage-metadata
Generates structured user discovery interview guides with screener questions, discussion guides, and synthesis frameworks for user interviews, customer discovery, JTBD research, or problem validation.
-->
Design complete interview research packages that help PMs learn what they need to learn, efficiently and without bias. The skill plans the research and provides supporting materials — the PM conducts the interviews.
Deliverable type: document
FIX (update): Adjust an existing research brief or interview guide — add/remove questions based on pilot interview feedback, update screener criteria after recruitment difficulty, revise analysis framework after early findings. Don't redesign the study. Takes minutes.
BUILD (light): Quick guide for rapid validation. One research objective, 5-8 core questions, basic screener (2-3 questions), simplified analysis framework (themes to watch for + completion criteria). Skip detailed logistics and the full synthesis template. Produces a guide the PM can use for 3-5 quick interviews. 10-15 minutes.
ARCHITECT (full): Complete research package. Full brief with objectives tied to upstream artifacts, detailed screener (4-6 questions), 8-12 core questions with probes and section-to-objective mapping, full analysis framework with synthesis template, logistics timeline, and follow-up study recommendations. 30 minutes.
The skill works best with a clear research need. Accepted inputs:
If the user has no specific research need, help them identify one by reviewing upstream artifacts for low-confidence claims or open questions.
Confirm with the user in ONE message:
Read first: references/interview-methodology.md (Interview Types)
Based on the research need, recommend the appropriate type:
| Research Need | Interview Type |
|---|---|
| "We need to understand how people experience this problem" | Discovery interview |
| "We need to understand why users switched to/from our product" | Switch interview |
| "We need to see how users actually use the product" | Contextual inquiry |
| "We need feedback on this concept/prototype before building" | Evaluative interview |
If the need could be served by multiple types, recommend the one that best answers the primary research question and explain the trade-off.
If the need is better served by quantitative research (survey, A/B test, behavioral analytics), say so: "This question is better answered by [quantitative method] because [reason]. The user-interview skill covers qualitative methods. [Recommendation for next steps]."
Define the study parameters:
Research objectives:
Participant criteria:
Screener design:
Read first: references/interview-guide-patterns.md
Select the appropriate pattern for the chosen interview type and customize it to the specific research objectives:
Map research objectives to guide sections. Each section of the guide should connect to a specific objective. If a section doesn't serve an objective, remove it.
Write 8-12 core questions. Not more. The rest of the time is for follow-up probes that emerge from the conversation. A guide with 25 questions becomes an interrogation.
Sequence questions as a conversation. Start broad and open (context, current behavior), move to specific (pain points, decisions), end with reflection (what would they change, what did we miss).
Write probes for each question. Not a script — prompts for the interviewer when they need to dig deeper. "Tell me more about that," "Can you give a specific example?"
Include the opening and closing scripts. The opening sets the tone (no right answers, learning from them). The closing captures what the guide missed ("anything I should have asked?").
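The structural rules above (every section maps to a research objective; 8-12 core questions total) can be expressed as a quick self-check. A sketch only: all section names, objective IDs, and question counts here are hypothetical examples, not part of the skill.

```python
def check_guide(sections, objectives, min_q=8, max_q=12):
    """Flag guide sections that serve no objective, and a question count
    outside the 8-12 target range."""
    issues = []
    for name, info in sections.items():
        if info["objective"] not in objectives:
            issues.append(f"Section '{name}' maps to no research objective")
    total = sum(info["questions"] for info in sections.values())
    if not (min_q <= total <= max_q):
        issues.append(f"{total} core questions (target {min_q}-{max_q})")
    return issues

objectives = {"O1", "O2"}  # hypothetical objective IDs from the brief
sections = {
    "Context and current behavior": {"objective": "O1", "questions": 4},
    "Pain points and decisions":    {"objective": "O2", "questions": 5},
    "Reflection":                   {"objective": "O3", "questions": 2},  # O3 undefined
}
print(check_guide(sections, objectives))  # flags only the 'Reflection' section
```

Eleven questions passes the count check; the unmapped section gets flagged for removal or remapping, which is exactly the review the step above asks for.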
Read first: references/interview-methodology.md (Synthesis section)
Define how the PM will make sense of the interview data:
Theme categories: Based on research objectives, suggest 3-5 theme categories to watch for. But emphasize: let themes emerge from the data, don't force observations into predetermined buckets.
Synthesis process: Step-by-step instructions for going from raw interview notes to actionable findings.
Completion criteria: How does the PM know when they've learned enough? Typically: primary question answered with evidence from ≥N participants, and thematic saturation reached (last 2 interviews added no new themes).
Connection back to upstream: How findings feed back into the JTBD analysis, opportunity map, or PRD. "If interviews confirm [hypothesis], update opportunity O2 from 'monitor' to 'pursue.' If they disconfirm it, update the JTBD outcome score."
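The completion criteria above can be made mechanical. A hedged sketch, assuming the PM codes each interview's notes into a set of theme labels (all labels hypothetical):

```python
def reached_saturation(theme_sets, window=2):
    """True when the last `window` interviews introduced no theme that was
    not already seen in earlier interviews (thematic saturation)."""
    if len(theme_sets) <= window:
        return False
    earlier = set().union(*theme_sets[:-window])
    recent = set().union(*theme_sets[-window:])
    return recent <= earlier  # recent themes are a subset of earlier ones

# Themes coded per interview, in interview order (hypothetical labels)
interviews = [
    {"pricing confusion", "trust"},
    {"pricing confusion", "shipping cost"},
    {"trust", "shipping cost"},
    {"pricing confusion"},       # no new themes
    {"trust", "shipping cost"},  # no new themes
]
print(reached_saturation(interviews))  # True: last 2 interviews added nothing new
```

Saturation is one half of the stopping rule; the PM still checks that the primary question has supporting evidence from enough participants before calling the study done.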
Before presenting, validate:
Save to .sage/docs/research-brief-<slug>.md using the template from templates/research-brief-template.md.
Append key decisions to .sage/decisions.md recording the research objectives, participant criteria, and what upstream claims this study validates. Update the "Current Artifacts" section.
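The save-and-log step can be sketched in a few lines. Assumptions are flagged inline: the slug, brief content, and decision text are hypothetical placeholders; the real brief comes from templates/research-brief-template.md.

```python
from datetime import date
from pathlib import Path

slug = "checkout-abandonment"  # hypothetical study slug
brief = "# Research Brief: Checkout Abandonment\n(filled in from the template)\n"

# Save the brief under .sage/docs/ using the research-brief-<slug>.md naming scheme
Path(".sage/docs").mkdir(parents=True, exist_ok=True)
Path(f".sage/docs/research-brief-{slug}.md").write_text(brief)

# Append the key decisions to the running log (entry fields are hypothetical)
entry = (
    f"\n## {date.today()} research-brief-{slug}\n"
    "- Objectives: why do users abandon checkout?\n"
    "- Participants: abandoned a checkout in the last 30 days\n"
    "- Validates: opportunity O2 confidence level\n"
)
with open(".sage/decisions.md", "a") as f:
    f.write(entry)
```

Appending (rather than overwriting) .sage/decisions.md keeps the decision history intact across studies.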
Present to user: "Here's the research brief for [study name]. It uses [interview type] with [N] participants targeting [segment]. The primary question is: '[question]'. The guide has [N] core questions covering [sections]. Want to review the questions or adjust the scope?"
MUST: Read references/interview-methodology.md and references/interview-guide-patterns.md before designing any study.
SHOULD:
MAY:
User has no specific research question: Don't produce a generic study. Help the PM identify what they need to learn: "What decision are you trying to make? What's the biggest uncertainty?" If they still can't articulate it, recommend reviewing the JTBD or opportunity map for low-confidence claims.
Research question is better suited to quantitative methods: If the PM needs "how many users do X" or "which of these two options performs better in production," interviews aren't the right tool. Flag it: "This question needs quantitative data — a survey (n≥100) or an A/B test would give you reliable numbers. Interviews tell you why, not how many."
Target segment is undefined: If the PM can't describe who to interview beyond "our users," the segment definition is too vague. Recommend running the JTBD skill to define the job performer first, or at minimum establish behavioral criteria: "users who have done [specific behavior] in the last [timeframe]."
PM wants to test everything in one study: A guide with 4 research objectives and 25 questions will produce shallow answers to everything and deep answers to nothing. Push for focus: "Which of these questions is the highest-stakes decision? Let's design the study around that one, and address the others in a follow-up."
Guide becomes an interrogation: If the question count exceeds 12, the guide is too long. Cut by asking: "Which of these questions could be answered by the probes from other questions?" Often the answer to Q7 naturally emerges from the follow-up to Q4.