From product-management
User research, persona development, jobs-to-be-done mapping, and opportunity scoring for startup and enterprise PMs — guerrilla interviews, formal research plans, proto-personas, data-backed segmentation, JTBD frameworks, and Opportunity Solution Trees. Use when user asks to "run product discovery", "user research plan", "build personas", or mentions customer interviews, jobs-to-be-done, or opportunity scoring.
npx claudepluginhub lauraflorentin/skills-marketplace --plugin product-management

This skill uses the workspace's default tool permissions.
Research outputs are hypotheses until validated with real users. Always test assumptions with actual customer behavior, not just stated preferences.
User research is how you replace opinions with evidence. Every product decision -- what to build, who to build it for, how it should work -- is better when informed by direct contact with real users. The biggest risk in product development is not building the wrong thing; it is building the wrong thing confidently because nobody talked to a customer.
The depth of research should match your stage. A startup with 0 customers needs fast, scrappy signal. An enterprise team with 50,000 users needs rigorous, statistically defensible insights. Both approaches produce valid evidence if executed well.
For early-stage products with limited budget and no dedicated research team, guerrilla research gives you actionable insights in days, not months. The goal is speed and directional accuracy -- you are not publishing a peer-reviewed paper; you are reducing the risk of building something nobody wants.
Intercept Interviews:
Go where your target users already are. Do not wait for them to come to you.
Unmoderated Surveys:
Use Typeform, Google Forms, or Tally to collect structured data at scale. Keep surveys under 10 questions -- completion rates drop sharply after question 10. Rules for effective surveys: ask one thing per question (no double-barreled questions), avoid leading or loaded wording, ask about past behavior rather than hypothetical preferences, and put open-ended questions last so early drop-off does not cost you the structured data.
5-User Tests:
Jakob Nielsen's research demonstrates that 5 users find approximately 85% of usability issues. You do not need 50 users for a usability test. Run a quick test with 5 people, fix the major issues, then test again with 5 more if needed.
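Nielsen's figure comes from a cumulative discovery model: the share of issues found by n users is 1 - (1 - L)^n, where L is the probability that a single user surfaces a given issue (roughly 0.31 in his data). A quick sketch of the curve, using that assumed L:

```python
# Cumulative share of usability issues found by n test users,
# per Nielsen's model: P(n) = 1 - (1 - L)^n, with L ~= 0.31.
L = 0.31  # assumed probability that one user surfaces a given issue

def issues_found(n: int, discovery_rate: float = L) -> float:
    """Expected share of usability issues found by n users."""
    return 1 - (1 - discovery_rate) ** n

for n in (1, 3, 5, 10):
    print(f"{n:>2} users -> {issues_found(n):.0%} of issues")
```

Running this shows why 5 is the sweet spot: the fifth user still adds meaningful coverage, but users 6-10 mostly re-find issues you already know about.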
1-Week Research Sprint Protocol:
Use this when you need to go from zero insight to actionable findings in 5 business days.
| Day | Activity | Output |
|---|---|---|
| Monday | Define research question. Write 5-7 interview questions. Identify 8-10 target participants (recruit more than you need -- expect 30-40% no-shows). | Research brief (1 page), interview guide, participant list |
| Tuesday | Conduct 3-4 intercept or scheduled interviews (30 min each). Record with consent. Take rough notes during, detailed notes immediately after. | Raw interview notes, recordings |
| Wednesday | Conduct 3-4 more interviews. Launch a short unmoderated survey (5-7 questions) to triangulate interview findings with broader data. | Raw interview notes, survey live |
| Thursday | Synthesize findings. Affinity-map key observations (see Synthesis Methods below). Write insight statements. Close survey and analyze responses. | Affinity map, 3-5 insight statements |
| Friday | Create a 1-page findings summary. Present to team (15-30 min). Decide on next actions. | Findings summary, decision log |
Tools on a budget: Google Meet or Zoom for remote interviews (free tier), Google Forms or Tally for surveys (free), Loom for async recording (free for 25 videos), Miro or FigJam for affinity mapping (free tiers available), Notion or Google Docs for research repository (free).
For products with established user bases, dedicated budgets, and stakeholders who require methodological rigor, formal research provides the defensible evidence needed to drive organizational decision-making.
Research Plan Template:
Every formal research initiative should start with a research plan approved by stakeholders before fieldwork begins. This prevents scope creep, misaligned expectations, and wasted budget.
RESEARCH PLAN
==============
Project: _______________
Date: _______________
Researcher: _______________
Sponsor/Stakeholder: _______________
1. OBJECTIVE
What specific question(s) are we trying to answer?
- Primary:
- Secondary:
2. BACKGROUND
What do we already know? What triggered this research?
[2-3 sentences of context]
3. METHODOLOGY
[ ] Moderated 1:1 interviews (N = ___)
[ ] Unmoderated usability testing (N = ___)
[ ] Survey (N = ___)
[ ] Diary study (N = ___, duration: ___ days)
[ ] Contextual inquiry / field observation (N = ___)
[ ] Card sort / tree test (N = ___)
Justification for chosen method:
4. PARTICIPANT CRITERIA
Target profile:
Screener questions:
Recruiting source:
Incentive: $___
5. TIMELINE
| Phase | Dates | Deliverable |
|-------|-------|-------------|
| Planning & recruiting | Week 1-2 | Screener, guide, participants confirmed |
| Fieldwork | Week 3-4 | Raw data collected |
| Analysis | Week 5 | Coded data, themes identified |
| Readout | Week 6 | Findings deck, recommendations |
6. BUDGET
| Item | Cost |
|------|------|
| Participant incentives (N x $___) | $___ |
| Recruiting fees | $___ |
| Tools / software | $___ |
| Total | $___ |
7. DELIVERABLES
- [ ] Executive summary (1 page)
- [ ] Full findings report
- [ ] Video highlight reel (5-10 min)
- [ ] Recommendation matrix (prioritized actions)
8. RISKS AND LIMITATIONS
- [e.g., "Recruiting may skew toward power users"]
- [e.g., "Self-reported data may not reflect actual behavior"]
Recruiting Criteria Template:
Careful recruiting is the difference between useful research and misleading data. Recruit participants who represent your actual or target user base, not just people who are easy to find.
PARTICIPANT SCREENER
=====================
Study: _______________
Target N: ___ participants
DEMOGRAPHICS:
- Age range: ___
- Location: ___
- Job title / role: ___
- Company size: ___
- Industry: ___
BEHAVIORAL CRITERIA:
- Frequency of [relevant behavior]: ___
- Current tools used for [task]: ___
- Experience level with [domain]: ___
- Recency of [relevant experience]: ___
SCREENING QUESTIONS:
1. [Question to verify they match the target profile]
PASS: [acceptable answers]
FAIL: [disqualifying answers]
2. [Question to assess relevant experience]
PASS:
FAIL:
3. [Question to check for bias/conflict]
PASS:
FAIL:
EXCLUSION CRITERIA:
- Employees of competitors
- Participants in a study within the last [6 months]
- Professional research participants ("panelists")
- [Other relevant exclusions]
INCENTIVE: $___ | Format: [gift card / bank transfer / donation]
Moderated 1:1 Interview Protocol (60 Minutes):
This is the gold standard for qualitative research. One researcher, one participant, a structured conversation that uncovers motivations, behaviors, and unmet needs.
| Phase | Duration | Purpose | Activities |
|---|---|---|---|
| Introduction | 5 min | Build rapport, set expectations, get consent | Introduce yourself and the purpose. Explain there are no wrong answers. Ask permission to record. Have them sign consent form if required. |
| Context | 10 min | Understand their world | Ask about their role, responsibilities, typical day. "Tell me about your role and what a typical week looks like." Establish the context before diving into specifics. |
| Exploration | 30 min | Discover behaviors, motivations, pain points | Use open-ended questions (see Interview Guide below). Follow the participant's lead. Probe interesting threads with "Tell me more about that" and "Why?" |
| Deep Dive | 10 min | Explore specific scenarios in detail | Ask them to walk through a specific recent experience step by step. "Can you show me how you did that?" Ask about workarounds, frustrations, and what they wish existed. |
| Wrap-up | 5 min | Close gracefully, capture final thoughts | "Is there anything I should have asked but didn't?" "What's the one thing you'd want us to know?" Thank them, explain next steps, provide incentive. |
Research Repository Structure:
As research accumulates, it must be organized, searchable, and accessible to the entire product team. A research repository prevents redundant studies and enables teams to build on past findings.
/research-repository
/[year]
/[project-name]
/plan.md -- Research plan
/screener.md -- Recruiting criteria
/guide.md -- Interview guide
/raw-data/ -- Transcripts, recordings, survey exports
/analysis/ -- Affinity maps, coded data
/findings.md -- Final report
/highlight-reel/ -- Video clips (2-3 min each)
/tags/ -- Cross-project tag index
onboarding.md -- Links to all studies touching onboarding
pricing.md -- Links to all studies touching pricing
enterprise.md -- Links to all studies with enterprise users
/insights-library.md -- Running log of validated insight statements
Tag taxonomy: Create a shared set of tags (15-25 tags is usually sufficient) that span product areas, user types, and themes. Apply tags consistently across projects. Review and prune the taxonomy quarterly.
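The layout above is easy to scaffold with a short script. This is a sketch, not a prescribed tool: the folder and file names mirror the structure shown, and the year and project names passed in at the bottom are placeholders.

```python
from pathlib import Path

def scaffold_study(root: str, year: str, project: str) -> Path:
    """Create the per-study folders and stub files described above."""
    study = Path(root) / year / project
    for sub in ("raw-data", "analysis", "highlight-reel"):
        (study / sub).mkdir(parents=True, exist_ok=True)
    for doc in ("plan.md", "screener.md", "guide.md", "findings.md"):
        (study / doc).touch(exist_ok=True)
    # Cross-project tag index and insights library live at the repo root.
    (Path(root) / "tags").mkdir(parents=True, exist_ok=True)
    (Path(root) / "insights-library.md").touch(exist_ok=True)
    return study

# Placeholder values -- substitute your own year and project name.
scaffold_study("research-repository", "2025", "onboarding-study")
```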
Stakeholder Readout Format:
Research findings must be communicated in a format that drives action. A 60-page report that no one reads is worse than no research at all.
RESEARCH READOUT
=================
Study: _______________
Date: _______________
Researcher: _______________
Audience: _______________
EXECUTIVE SUMMARY (1 page max):
[3-4 sentences: What we studied, what we found, what it means]
KEY FINDINGS (3-5 findings):
Finding 1: [Statement]
Evidence: [What we observed / data points]
Frequency: [How many participants / what % of survey]
Confidence: [High / Medium / Low]
Finding 2: [Statement]
Evidence:
Frequency:
Confidence:
[... repeat for each finding]
IMPLICATIONS:
| Finding | Implication for Product | Implication for Design | Implication for GTM |
|---------|------------------------|----------------------|---------------------|
| | | | |
RECOMMENDED ACTIONS:
| Action | Priority (H/M/L) | Owner | Timeline |
|--------|-------------------|-------|----------|
| | | | |
OPEN QUESTIONS:
- [Questions this research raised but did not answer]
- [Suggested follow-up studies]
APPENDIX:
- Methodology details
- Participant demographics
- Full data tables
A well-structured interview guide ensures consistency across sessions while leaving room for natural conversation. Never read questions verbatim -- use them as a compass, not a script.
Opening (5 minutes):
Build rapport, set expectations, and get consent.
Exploration (30 minutes):
Use open-ended questions that invite stories, not yes/no answers.
10 Sample Questions:
Deep Dive (10 minutes):
When a participant mentions something interesting, go deeper.
Closing (5 minutes):
Close gracefully: ask "Is there anything I should have asked but didn't?", thank the participant, explain next steps, and deliver the incentive.
Raw research data is not insight. Synthesis is the process of transforming observations into actionable understanding. Do not skip this step -- it is where the real value is created.
Affinity Mapping:
Affinity mapping is the most widely used synthesis method. It works for any qualitative data: interview notes, survey responses, support tickets, usability observations.
Thematic Coding:
For rigorous, repeatable analysis -- especially when working with transcripts from 10+ interviews.
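As a minimal sketch of the counting step, the snippet below tallies how many participants' notes carry each code, assuming notes have already been labeled during coding (the codes and data here are illustrative, not from a real study):

```python
from collections import Counter

# Hypothetical coded interview notes: participant id -> set of codes
# applied during thematic analysis.
coded_notes = {
    "P1": {"manual-tracking", "trust-gap"},
    "P2": {"manual-tracking"},
    "P3": {"trust-gap", "tool-switching"},
    "P4": {"manual-tracking", "trust-gap"},
}

# Count participants per code (not total mentions) -- this is the
# "Frequency" figure used in the stakeholder readout template.
code_counts = Counter(code for codes in coded_notes.values() for code in codes)

for code, n in code_counts.most_common():
    print(f"{code}: {n}/{len(coded_notes)} participants")
```

Counting participants rather than mentions matters: one participant who repeats a complaint five times is weaker evidence than five participants who each mention it once.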
Insight Statement Formula:
Transform observations into actionable insight statements using this structure:
"We observed [behavior] because [motivation], which means [implication]."
Worked Example:
Observations from 8 interviews with mid-market product managers:
Cluster: "Users build parallel tracking systems outside the product."
Insight statement:
"We observed that 6 out of 8 product managers maintain a separate system (spreadsheets, docs, or internal tools) to track customer feedback, because their primary PM tool doesn't connect feedback to roadmap items in a way they trust for prioritization decisions, which means there is a significant opportunity to build integrated feedback-to-roadmap workflows that eliminate the need for manual aggregation."
This insight is specific (6 of 8), explains the motivation (trust gap), and points to a clear product opportunity (integrated workflows).
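The formula can even be captured as a tiny template helper, useful when logging many insights in a consistent shape (the function name and fields are my own, not part of any framework):

```python
def insight(behavior: str, motivation: str, implication: str) -> str:
    """Render the observation -> motivation -> implication formula."""
    return (f"We observed {behavior} because {motivation}, "
            f"which means {implication}.")

print(insight(
    "that 6 of 8 PMs keep a separate feedback spreadsheet",
    "their PM tool doesn't link feedback to roadmap items",
    "an integrated feedback-to-roadmap workflow is a clear opportunity",
))
```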
These are the most common ways research goes wrong. Watch for them in your own work and when reviewing others' research.
For detailed templates, frameworks, and field-level guidance, read:
references/discovery-reference.md — Complete framework details, templates, and examples. Read this file when the task requires detailed templates, frameworks, or field-level guidance beyond this overview.