From team-of-agents
Use when conducting user research, creating personas or journey maps, planning usability testing, synthesising user feedback, evaluating information architecture, analysing user behaviour, or any task that requires understanding what users need, think, or feel.
```
npx claudepluginhub pranav8494/team-of-agents
```
This skill uses the workspace's default tool permissions.
Research questions before methods. Never start with "let's do some user interviews."
Start with "what decision will this research inform?" The question determines the method.
What users say and what users do are both data — and they frequently contradict each other.
Use this table to determine what to produce for each task type:
| User asks for | What to produce |
|---|---|
| Research plan / brief | Research brief: decision to inform, researchable question(s), selected method with rationale, participant criteria, timeline, expected deliverables |
| Discussion guide | Full guide using the structure below: intro, warm-up, core tasks/questions with probes, debrief; task scenarios written as realistic situations, not instructions |
| Usability test plan | Moderated test plan with 5 participants per segment, realistic task scenarios, an observation rubric, and a severity rating scale |
| Synthesis / findings report | Themes from affinity diagramming, each theme supported by observation counts and direct quotes; finding → implication → recommendation structure; observations kept separate from interpretations |
| Persona | Behaviour-based persona (not demographic): goals, context, pain points, and representative quotes — grounded in research data, not assumptions |
| Journey map | Current-state journey: steps, channel, emotion curve, pain points, and opportunity labels at each stage |
| Survey design | Survey questions mapped to research questions; mix of Likert, multiple-choice, and open text; SUS or NPS items where applicable (SUS scoring is sketched after this table); sample size guidance |
| IA evaluation | Card sort or tree test plan with task scenarios; output includes an agreement matrix (sketched after this table), dendrograms (card sort) or task success rates (tree test) with interpretation |
| Research repository entry | Structured finding card: question answered, method, date, key themes, raw evidence links, implications |
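Two of the quantitative outputs above can be computed mechanically. The sketch below is illustrative, not part of any survey or card-sort tool; the function names and data shapes are assumptions. SUS scoring follows the standard 0-100 formula, and the agreement matrix is the fraction of participants who grouped each card pair together.

```python
from itertools import combinations

def sus_score(responses: list[int]) -> float:
    """Standard SUS score for one participant (10 items, each rated 1-5).

    Odd-numbered items are positively worded and contribute (rating - 1);
    even-numbered items are negatively worded and contribute (5 - rating).
    The sum is multiplied by 2.5, giving a 0-100 score.
    """
    assert len(responses) == 10, "SUS has exactly 10 items"
    raw = sum((r - 1) if i % 2 == 0 else (5 - r)  # i=0 is item 1 (odd-numbered)
              for i, r in enumerate(responses))
    return raw * 2.5

def agreement_matrix(sorts: list[list[list[str]]]) -> dict[tuple[str, str], float]:
    """Card-sort agreement: for each card pair, the share of participants
    who placed both cards in the same group.

    `sorts` holds one sort per participant; each sort is a list of groups,
    each group a list of card names. Pairs no one grouped together are
    omitted (agreement 0).
    """
    together: dict[tuple[str, str], int] = {}
    for sort in sorts:
        for group in sort:
            for pair in combinations(sorted(group), 2):
                together[pair] = together.get(pair, 0) + 1
    n = len(sorts)
    return {pair: count / n for pair, count in together.items()}
```

For example, `sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])` returns 100.0, and agreement values near 1.0 mark card pairs that nearly everyone groups together: strong candidates for the same IA category.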
To choose a method, match the research question to a study stage and sample size:

| Research question | Stage | Method | Sample size |
|---|---|---|---|
| What are users trying to accomplish and why? | Generative / Discovery | User interviews (semi-structured) | 8–12 |
| What do users actually do in their environment? | Generative | Contextual inquiry / field study | 5–8 |
| Can users complete this task without help? | Evaluative | Moderated usability test | 5 per segment (reveals ~85% of issues) |
| How do users think about the information structure? | Evaluative / IA | Card sorting (open or closed) | 15–30 |
| Does this navigation structure work? | Evaluative / IA | Tree testing | 50–100 |
| How do users experience the product over time? | Generative / Longitudinal | Diary study | 10–20 |
| What do users think about [topic] at scale? | Quantitative | Survey (CSAT, SUS, NPS, custom) | 100+ for significance |
| Is version A or B better on a measurable metric? | Evaluative | A/B test | Statistically powered; consult data analyst |
Five users find roughly 85% of usability issues (Nielsen, 1993). For usability testing, five participants per distinct user segment is sufficient; recruit more only when you add segments, not to raise the count within one.
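The 85% figure falls out of a simple discovery model: if each user independently surfaces a given issue with probability L (about 0.31 in Nielsen and Landauer's data), then n users surface 1 - (1 - L)^n of the issues. A quick sketch:

```python
# Cumulative issue discovery under Nielsen & Landauer's model:
# each user finds a given issue with probability L (~0.31 in their data).
L = 0.31

for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} user(s) -> {found:.0%} of issues found")
# 5 users -> ~84%; doubling to 10 users only reaches ~98%,
# which is why extra participants buy little within one segment.
```

The curve flattens quickly, which is the argument for running several small iterative studies rather than one large one.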
Translate product/business questions into researchable questions before choosing a method:
| Product question | Researchable question |
|---|---|
| "Why are users dropping off at step 3?" | "What obstacles do users encounter when trying to complete the payment flow?" |
| "Should we add X feature?" | "What jobs are users trying to do that X might help with? What are they doing today instead?" |
| "Is our onboarding working?" | "Can new users complete [key task] within [time] without help? Where do they get stuck?" |
| "What do users think of the redesign?" | "When shown the new design, which tasks can users complete, and where do they struggle?" |
Never start from "let's validate our idea." Start from "what would change our decision if we found it?"
Structure discovery interviews around the job, not the feature:
When [situation], I want to [motivation], so I can [expected outcome].
Interview with a JTBD lens: probe the underlying job, not the proposed solution. JTBD reveals what users are hiring your product to do. Users' proposed solutions ("add a feature that does X") are often wrong; their underlying job is always real.
Discussion guide template:

## Study: [Name]
## Duration: [Minutes]
## Research questions: [What are we trying to learn?]
### Introduction (5 min)
- Welcome, consent, recording permission
- "There are no right or wrong answers — we're testing the design, not you"
- "Please think aloud as you work through tasks"
### Warm-up (5 min)
- [Background questions to understand the participant's context]
- "Tell me about how you typically [relevant activity]..."
### Core tasks / questions (30–40 min)
Task 1: [Specific, realistic scenario — not "click the button"]
- Observation notes: [what to watch for]
- Follow-up probes: "What did you expect to happen?" / "What would you do next?"
Task 2: ...
### Debrief (5 min)
- "Is there anything about the experience we haven't covered that you'd like to mention?"
- "On a scale of 1–10, how difficult was [core task]? Why?"
Good task scenarios are realistic, not instructional: "You owe a friend $50 for dinner last night and want to pay them back" rather than "Click the Transfer button and enter an amount." When recording sessions, keep four kinds of notes distinct:
| Type | Example |
|---|---|
| Observation (factual) | "The participant paused for 8 seconds before clicking 'Submit'" |
| Interpretation (analytical) | "The participant appeared uncertain about what would happen next" |
| Quote (verbatim) | "I don't know what this button does" |
| Theme (synthesised) | "Users lacked confidence at the payment confirmation step" |
Always keep observations and interpretations separate. Multiple researchers can interpret the same observation differently — separating them makes disagreement visible and productive.
Affinity diagramming process:
1. Put each observation and verbatim quote on its own note, tagged with participant and session.
2. Cluster related notes bottom-up; let categories emerge rather than pre-defining them.
3. Name each stable cluster as a theme.
4. For each theme, count supporting observations and attach representative quotes as evidence.

Produce deliverables at the stage each one serves:
| Deliverable | When to produce | Key content |
|---|---|---|
| Research brief | Before study begins | Research question, method, sample criteria, timeline |
| Discussion guide | Before sessions | Warm-up, tasks, probes, debrief |
| Raw notes / transcript | After each session | Verbatim quotes, observations, timestamps |
| Findings report | After synthesis | Themes, evidence, implications, recommendations |
| Persona | After discovery research | Behaviour-based (not demographic); goals, context, pain points |
| Journey map | After discovery research | Current-state steps, emotions, pain points, opportunities |
| Usability report | After usability testing | Task completion rates with confidence intervals (see the sketch below), errors, severity ratings, prioritised issues |
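Completion rates from five-person samples swing widely, so report them with a confidence interval. A minimal sketch using the adjusted-Wald interval, the small-sample binomial interval recommended in the usability-statistics literature (Sauro & Lewis); the function name is illustrative:

```python
import math

def completion_rate_ci(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% adjusted-Wald confidence interval for a task completion rate.

    Adds z^2/2 successes and z^2 trials before computing a Wald interval,
    which behaves far better than the plain Wald interval at usability-test
    sample sizes (n ~ 5-10).
    """
    p_adj = (successes + z * z / 2) / (trials + z * z)
    half = z * math.sqrt(p_adj * (1 - p_adj) / (trials + z * z))
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

# 4 of 5 participants completed the task:
low, high = completion_rate_ci(4, 5)
print(f"completion rate 80%, 95% CI {low:.0%}-{high:.0%}")
```

For 4 of 5 completions this gives roughly 36% to 98%, which is the honest way to report "probably works, but the sample is tiny."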
Structure every findings document as finding → implication → recommendation:
- Finding: what was observed, with counts and verbatim quotes
- Implication: what the observation means for users and for the product
- Recommendation: what to change, test, or investigate next

Example:
- Finding: most participants hesitated at the payment confirmation step ("I don't know what this button does")
- Implication: users are not confident the payment will complete, which risks abandonment at the final step
- Recommendation: add a plain-language summary of what pressing the button does, then re-test the flow
End every response with a confidence signal on its own line:
CONFIDENCE: [High|Medium|Low] — [one-line reason]
If the task is outside this skill's scope or you lack the information needed to proceed, return this instead of a confidence signal:
BLOCKED: [reason] — [what information would unblock this]
Do not guess or produce low-quality output to avoid returning BLOCKED. A precise BLOCKED is more useful than a low-confidence guess.