From the `ux-researcher` plugin.
Review a product, feature, or flow for usability issues using Nielsen's 10 heuristics. Produces a prioritised findings table with severity ratings and specific fix recommendations. Use for experience audits, friction identification, or pre-launch UX review.
Install: `npx claudepluginhub hpsgd/turtlestack --plugin ux-researcher`
Review $ARGUMENTS against [Nielsen's 10 usability heuristics](https://www.nngroup.com/articles/ten-usability-heuristics/) using the mandatory process below.
## Step 1: Scope and walkthrough

Before evaluating heuristics, walk the flow as a user would:
### Scope
**Feature/flow:** [what is being reviewed]
**Target user:** [who uses this — reference a persona if available]
**Entry point:** [where the user starts]
**Success state:** [what "done" looks like for the user]
**Walkthrough notes:**
1. [First thing the user sees/does — note friction or delight]
2. [Second interaction — note any confusion]
3. [Continue through the complete flow]
Rules for walkthrough:
Output: Scope definition and annotated walkthrough notes.
## Step 2: Evaluate against the 10 heuristics

For each of Nielsen's 10 heuristics, check the flow for violations:
| # | Heuristic | What to check | Common violations |
|---|---|---|---|
| 1 | Visibility of system status | Loading states, progress indicators, confirmation messages, sync status | Missing spinners, no feedback on save, silent failures, stale data indicators |
| 2 | Match between system and real world | User's language, familiar concepts, natural ordering | Internal jargon, technical error codes, unintuitive ordering, developer vocabulary |
| 3 | User control and freedom | Undo, back, escape, cancel, redo | No undo, modal traps, irreversible one-click actions, no way to cancel in-progress operations |
| 4 | Consistency and standards | Same action = same result, same term = same meaning, platform conventions | Different buttons for same action, inconsistent terminology, non-standard icons |
| 5 | Error prevention | Confirmation for destructive actions, smart defaults, input constraints | Easy misclicks, no confirmation on delete, no input masks, ambiguous defaults |
| 6 | Recognition over recall | Visible options, breadcrumbs, recent history, contextual help | Hidden features, memory-dependent workflows, no context clues, buried settings |
| 7 | Flexibility and efficiency | Keyboard shortcuts, bulk operations, customisation, expert paths | Mouse-only interactions, one-at-a-time operations, no shortcuts for power users |
| 8 | Aesthetic and minimalist design | Every element earns its place, information hierarchy, visual clarity | Decorative clutter, unused UI elements, information overload, poor hierarchy |
| 9 | Help users recognise and recover from errors | Clear message, cause, recovery action | "Error 500", "Invalid input" with no guidance, blame language, no recovery path |
| 10 | Help and documentation | Contextual help, searchable docs, tooltips, onboarding | No in-context help, docs buried or outdated, no search, tooltips on wrong elements |
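As a concrete illustration of the kind of fix heuristic 9 calls for, here is a minimal sketch of mapping raw failures to messages that name what happened, the likely cause, and a recovery action. The function name, messages, and status mapping are hypothetical, not part of this skill:

```typescript
// Hypothetical sketch for heuristic 9: translate a raw HTTP status
// into a user-facing message with what / why / recovery, instead of
// surfacing "Error 500" or "Invalid input" with no guidance.

interface FriendlyError {
  what: string;     // what the user experienced
  why: string;      // likely cause, in plain language
  recovery: string; // a concrete next step
}

function toFriendlyError(status: number): FriendlyError {
  switch (status) {
    case 404:
      return {
        what: "We couldn't find that page.",
        why: "It may have been moved or deleted.",
        recovery: "Check the link, or search for the item by name.",
      };
    case 429:
      return {
        what: "Too many requests in a short time.",
        why: "We slow things down to keep the service stable.",
        recovery: "Wait a minute, then try again.",
      };
    default:
      return status >= 500
        ? {
            what: "Something went wrong on our side.",
            why: "The server hit an unexpected error; your data is safe.",
            recovery: "Try again in a moment, or contact support if it persists.",
          }
        : {
            what: "That request couldn't be completed.",
            why: "The input may be invalid or the session may have expired.",
            recovery: "Review the form and try again, or sign in again.",
          };
  }
}
```

A finding's recommendation can then point at the exact message the user should see, rather than "improve error handling".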
For each violation found, document it with this template:
### Finding: [short description]
**Heuristic:** [number and name]
**Location:** [exact screen/component/interaction]
**Severity:** [Critical / Major / Minor / Enhancement]
**What happens:** [what the user experiences]
**Why it's a problem:** [impact on task completion or user confidence]
**Recommendation:** [specific fix — not "improve this" but "add a loading spinner that appears after 200ms"]
Output: Individual findings with severity, location, and specific recommendations.
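A recommendation should be executable by a developer without interpretation. As a sketch of what "add a loading spinner that appears after 200ms" can mean in practice (the helper and the `showSpinner`/`hideSpinner` hooks are hypothetical, stand-ins for whatever the product's UI layer provides):

```typescript
// Hypothetical sketch for heuristic 1 (visibility of system status):
// show a loading indicator only if the operation outlasts a threshold,
// so fast responses stay flicker-free.

async function withDelayedSpinner<T>(
  task: () => Promise<T>,
  delayMs = 200,
  showSpinner: () => void = () => {},
  hideSpinner: () => void = () => {},
): Promise<{ result: T; spinnerShown: boolean }> {
  let spinnerShown = false;
  // Arm the spinner; it fires only if the task is still running.
  const timer = setTimeout(() => {
    spinnerShown = true;
    showSpinner();
  }, delayMs);
  try {
    const result = await task();
    return { result, spinnerShown };
  } finally {
    clearTimeout(timer);
    if (spinnerShown) hideSpinner();
  }
}
```

The 200ms threshold is the tunable detail a specific recommendation should state explicitly, since it decides whether sub-threshold responses feel instant.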
## Step 3: Assign severity

Apply consistent severity ratings:
| Severity | Criteria | Impact | Example |
|---|---|---|---|
| Critical | Users cannot complete the task at all | Task failure | Form submits but nothing happens, no error shown |
| Major | Users struggle significantly or abandon frequently | High friction, potential churn | 5-step process to do a common action, confusing error messages |
| Minor | Users notice but find a workaround | Annoyance, reduced confidence | Inconsistent button placement, missing breadcrumbs |
| Enhancement | Users would benefit but aren't blocked | Opportunity for delight | No keyboard shortcuts for frequent actions, no bulk operations |
Rules for severity:
Output: All findings with consistent severity ratings.
## Step 4: Summarise and prioritise

### Findings summary
| # | Heuristic | Severity | Finding | Location | Fix |
|---|---|---|---|---|---|
| 1 | [heuristic name] | Critical | [what's wrong] | [where] | [specific recommendation] |
| 2 | [heuristic] | Major | [finding] | [location] | [fix] |
| ... | | | | | |
### Severity distribution
- **Critical:** [count] — must fix before launch/release
- **Major:** [count] — fix in current cycle
- **Minor:** [count] — fix when touching the area
- **Enhancement:** [count] — backlog
### Heuristic coverage
| Heuristic | Findings | Worst severity |
|---|---|---|
| 1. Visibility of system status | [count] | [severity or "No issues"] |
| 2. Match real world | [count] | [severity] |
| ... | | |
### Top 3 Recommendations (by impact)
1. **[Highest impact fix]** — [which findings it addresses, expected improvement]
2. **[Second]** — [detail]
3. **[Third]** — [detail]
Output: Prioritised findings table, severity distribution, heuristic coverage, and top 3 recommendations.
Assemble the final report in this format:

## Usability Review: [scope]
### Scope and Walkthrough
[From Step 1]
### Findings
[Prioritised table from Step 4]
### Severity Distribution
[Counts from Step 4]
### Heuristic Coverage
[Coverage table from Step 4]
### Top 3 Recommendations
[Prioritised fixes from Step 4]
### What Works Well
- [1–2 positive observations]
Related skills:

- `/ux-researcher:persona-definition` — define the target user before reviewing. A usability review is more valuable when done from a specific persona's perspective.
- `/ux-researcher:journey-map` — for mapping the end-to-end experience across multiple touchpoints, not just a single flow.
- `/ui-designer:accessibility-audit` — for accessibility-specific evaluation (WCAG compliance), which complements heuristic review.