Conduct user research and validation. Use when discovering user needs, validating assumptions, creating personas, or understanding pain points. Covers interviews, surveys, analysis, and synthesis.
Understand user needs through systematic research before building products.
Users are not you. Validate assumptions with real user behavior, not opinions or what users say they'll do.
**Goal:** Define what you need to learn and how
Activities:
Research Questions Examples:
Validation:
**Goal:** Find and schedule representative participants
Recruitment Sources:
Screening Criteria:
Compensation:
Sample Size:
Validation:
**Goal:** Gather rich user insights through chosen methods
User Interviews (Primary method):
Interview Structure (30-60 minutes):
Good Interview Questions:
✅ Open-ended:
- "Tell me about the last time you [task]."
- "Walk me through your process for [activity]."
- "What's the most frustrating part of [workflow]?"
- "How do you currently solve [problem]?"
❌ Leading questions (avoid):
- "Would you use a feature that...?" (Everyone says yes)
- "Don't you think it would be better if...?" (Confirming bias)
- "How much would you pay for this?" (Hypothetical)
Ask "Why" Five Times:
User: "I use Excel for tracking leads."
You: "Why Excel specifically?"
User: "It's what I know."
You: "Why is familiarity important?"
User: "Learning new tools takes time."
You: "Why is time a concern?"
User: "I'm measured on closed deals, not tool expertise."
→ Root insight: Avoid tools with steep learning curves
Contextual Inquiry:
Surveys (for quantitative validation):
Validation:
**Goal:** Identify patterns, themes, and insights from raw data
Affinity Diagramming:
Common Themes to Look For:
Jobs-to-be-Done (JTBD) Framework:
When [situation],
I want to [motivation],
So I can [expected outcome].
Example:
When preparing for a client meeting,
I want to quickly find all previous conversations,
So I can provide personalized recommendations without looking unprepared.
Analysis:
- Functional job: Find information quickly
- Emotional job: Appear competent
- Social job: Demonstrate attentiveness
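
The JTBD template above can also be captured as a small data structure so that statements stay consistent across research artifacts. This is a minimal sketch in Python; the class and field names are illustrative, not part of this skill:

```python
from dataclasses import dataclass

@dataclass
class JobToBeDone:
    """One Jobs-to-be-Done statement: situation, motivation, expected outcome."""
    situation: str   # fills "When [situation]"
    motivation: str  # fills "I want to [motivation]"
    outcome: str     # fills "So I can [expected outcome]"

    def statement(self) -> str:
        # Render the standard JTBD template as a single sentence.
        return (f"When {self.situation}, "
                f"I want to {self.motivation}, "
                f"so I can {self.outcome}.")

# The example from this section:
job = JobToBeDone(
    situation="preparing for a client meeting",
    motivation="quickly find all previous conversations",
    outcome="provide personalized recommendations without looking unprepared",
)
print(job.statement())
```

Keeping the three parts as separate fields makes it easy to later tag each job as functional, emotional, or social during analysis.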
User Segmentation (by behavior, not demographics):
Validation:
**Goal:** Communicate findings in actionable formats
1. User Personas (3-5 evidence-based profiles):
persona_name: 'Sarah the Sales Manager'
role: 'Regional Sales Manager'
demographics:
  experience_level: 'Intermediate (5 years)'
  team_size: '12 sales reps'
goals:
  - Track team performance in real-time
  - Coach underperforming reps effectively
pain_points:
  - Data scattered across 3 systems
  - Can't see at-risk deals until too late
current_tools:
  - 'Salesforce: CRM tracking'
  - 'Excel: Custom reports (2 hrs/week)'
behaviors:
  - Checks dashboard first thing every morning
  - Spends 2 hours weekly compiling reports manually
quote: "I feel like I'm flying blind until the end of the quarter"
opportunity: 'Unified dashboard with predictive risk scoring'
2. Journey Maps (current-state experience):
Stages: Awareness → Research → Purchase → Onboarding → Usage → Support
For each stage:
- Actions: What users do
- Pain points: Frustrations and blockers
- Emotions: How users feel (frustrated, confident, confused)
- Opportunities: Where to improve
3. Research Report:
4. Opportunity Areas (prioritized problems):
| Opportunity | Impact | Effort | Priority |
|-------------|--------|--------|----------|
| Unified dashboard | High | Medium | P0 |
| Predictive alerts | High | High | P1 |
| Mobile access | Medium | Low | P1 |
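
One way to derive the Priority column is a simple impact/effort matrix. The mapping below is a hypothetical convention (high impact with at most medium effort is P0, and so on), shown here only to make the table's scoring reproducible; it is not a rule defined by this skill:

```python
def priority(impact: str, effort: str) -> str:
    """Map impact/effort ratings to a priority bucket (illustrative convention)."""
    score = {"low": 1, "medium": 2, "high": 3}
    gap = score[impact.lower()] - score[effort.lower()]
    if impact.lower() == "high" and gap >= 1:
        return "P0"  # high impact and comparatively cheap: clear quick win
    if gap >= 0 and impact.lower() != "low":
        return "P1"  # worthwhile, but not a quick win
    return "P2"      # low impact or effort outweighs impact

# Reproducing the table above:
print(priority("High", "Medium"))  # Unified dashboard -> P0
print(priority("High", "High"))    # Predictive alerts -> P1
print(priority("Medium", "Low"))   # Mobile access     -> P1
```

Any such scoring is a starting point for discussion; teams normally adjust the cutoffs to their own capacity.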
Validation:
- What users do > what they say they do > what they say they'll do
- Surface root causes and motivations, not symptoms
- Include edge cases, power users, and struggling users, not just ideal customers
- Ask "Tell me about..." not "Would you like..."
- Research is not a one-time phase; continue it throughout the product lifecycle
- Test the riskiest assumptions first with minimal investment
❌ Talking to friends and family → They'll tell you what you want to hear
❌ Asking hypothetical questions → "Would you use...?" is not predictive
❌ Leading questions → "Don't you think...?" confirms your bias
❌ Only talking to early adopters → They're not representative
❌ Skipping synthesis → Raw data isn't insights
❌ Ignoring negative feedback → Pay extra attention to criticism
❌ One-time research → User needs change, research continuously
research_summary:
  objectives:
    - '<key question 1>'
    - '<key question 2>'
  participants:
    total: <number>
    segments:
      - name: '<segment>'
        count: <number>
  methods:
    - 'User interviews (12 participants)'
    - 'Survey (87 responses)'
  key_insights:
    - insight: '<finding>'
      evidence: '<quote or data>'
      impact: 'high/medium/low'
  personas:
    - name: '<persona name>'
      goals: ['<goal>']
      pain_points: ['<pain>']
  opportunities:
    - opportunity: '<problem to solve>'
      impact: 'high'
      effort: 'medium'
      priority: 'P0'
  recommendations:
    - '<action item 1>'
    - '<action item 2>'
Related Skills:
- product-strategist - For validating product-market fit
- ux-designer - For creating designs based on research
- mvp-builder - For prioritizing features from research

Related Patterns:
- META/DECISION-FRAMEWORK.md - Research method selection
- STANDARDS/best-practices/user-research-ethics.md - Research ethics (when created)

Related Playbooks:
- PLAYBOOKS/conduct-user-interviews.md - Interview procedure (when created)
- PLAYBOOKS/synthesize-research-findings.md - Analysis workflow (when created)