Conduct heuristic evaluations - Nielsen's 10 heuristics, severity ratings, expert review methodology, cognitive walkthrough, and usability inspection.
Conduct expert usability reviews using Nielsen's 10 heuristics and severity ratings. Use this when evaluating interface designs for usability issues before user testing.
To install: `/plugin marketplace add melodic-software/claude-code-plugins`, then `/plugin install ux-research@melodic-software`.
Conduct expert usability reviews using established heuristics and evaluation frameworks.
Use this skill when:
- Evaluating an interface design for usability issues before user testing
- Conducting an expert review against Nielsen's 10 heuristics
- Assigning severity ratings to usability findings
- Running a cognitive walkthrough or other usability inspection
Before answering any heuristic evaluation question, review the heuristics and severity definitions below.
## Nielsen's 10 Heuristics

### 1. Visibility of System Status

The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Check for patterns like the following:
| Good | Bad |
|---|---|
| Spinner during form submission | Page freezes with no feedback |
| "3 of 5 steps complete" | Multi-step form with no progress |
| "Saved successfully" toast | Silent save with no confirmation |
### 2. Match Between the System and the Real World

The system should speak the users' language, with words, phrases, and concepts familiar to the user.
Check for patterns like the following:
| Good | Bad |
|---|---|
| "Save" button | "Persist to datastore" |
| Shopping cart icon | Abstract database icon |
| "Your order" | "Transaction #38291" |
### 3. User Control and Freedom

Users often choose system functions by mistake and need a clearly marked "emergency exit" to leave the unwanted state.
Check for patterns like the following:
| Good | Bad |
|---|---|
| Undo after delete | Immediate permanent deletion |
| "Cancel" in forms | Only "Submit" available |
| X to close modal | Click outside only |
### 4. Consistency and Standards

Users should not have to wonder whether different words, situations, or actions mean the same thing.
Check for patterns like the following:
| Good | Bad |
|---|---|
| All buttons look like buttons | Inconsistent button styles |
| Same action, same location | "Save" in different places |
| Standard icons (gear = settings) | Custom icons without labels |
### 5. Error Prevention

Even better than good error messages is a careful design which prevents a problem from occurring.
Check for patterns like the following:
| Good | Bad |
|---|---|
| "Are you sure you want to delete?" | Immediate delete on click |
| Date picker (not free text) | Ambiguous date format entry |
| Grayed out unavailable options | Error after selection |
### 6. Recognition Rather Than Recall

Minimize the user's memory load by making objects, actions, and options visible.
Check for patterns like the following:
| Good | Bad |
|---|---|
| Dropdown with options | Must type exact value |
| "Recent searches" | Empty search box |
| Labels on form fields | Placeholder-only inputs |
### 7. Flexibility and Efficiency of Use

Accelerators, unseen by the novice user, may often speed up the interaction for the expert user.
Check for patterns like the following:
| Good | Bad |
|---|---|
| Ctrl+S to save | Must click menu > Save |
| Drag-and-drop + click | Drag-and-drop only |
| "Select all" checkbox | Must select items one by one |
### 8. Aesthetic and Minimalist Design

Dialogues should not contain information which is irrelevant or rarely needed.
Check for patterns like the following:
| Good | Bad |
|---|---|
| Key info prominent | Wall of text |
| "Show more" for details | Everything visible at once |
| Clean forms | 50-field forms on one page |
### 9. Help Users Recognize, Diagnose, and Recover from Errors

Error messages should be expressed in plain language, precisely indicate the problem, and constructively suggest a solution.
Check for patterns like the following:
| Good | Bad |
|---|---|
| "Email format invalid. Example: name@example.com" | "Invalid input" |
| Error shown next to field | All errors at top of page |
| "Try again" or "Contact support" | Just an error message |
### 10. Help and Documentation

Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation.
Check for patterns like the following:
| Good | Bad |
|---|---|
| Tooltip on hover | No explanation for icons |
| Inline help text | Separate help document only |
| "What's this?" links | Users must figure it out |
## Severity Ratings

| Rating | Severity | Definition | Action |
|---|---|---|---|
| 0 | Not a problem | I don't agree this is a usability problem | None needed |
| 1 | Cosmetic | Cosmetic problem only; fix if time | Low priority |
| 2 | Minor | Minor usability problem; worth fixing | Medium priority |
| 3 | Major | Major usability problem; important to fix | High priority |
| 4 | Catastrophic | Usability catastrophe; must fix before release | Critical/blocker |
To choose a rating, consider three factors:
| Factor | Question | Levels |
|---|---|---|
| Frequency | How often does the problem occur? | Rare / Occasional / Frequent |
| Impact | How serious when it occurs? | Low / Medium / High |
| Persistence | Can users overcome it easily? | Easy / Moderate / Difficult |
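These three factors can be combined into a single rating; the model below encodes one such mapping.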
```csharp
public class SeverityAssessment
{
    public required Frequency Frequency { get; init; }
    public required Impact Impact { get; init; }
    public required Persistence Persistence { get; init; }

    // Combine the three factors into a single severity rating.
    public Severity CalculatedSeverity => (Frequency, Impact, Persistence) switch
    {
        (Frequency.Frequent, Impact.High, Persistence.Difficult) => Severity.Catastrophic,
        (Frequency.Frequent, Impact.High, _) => Severity.Major,
        (_, Impact.High, Persistence.Difficult) => Severity.Major,
        (Frequency.Frequent, Impact.Medium, _) => Severity.Major,
        (Frequency.Occasional, Impact.Medium, _) => Severity.Minor,
        (Frequency.Rare, Impact.Low, _) => Severity.Cosmetic,
        _ => Severity.Minor
    };
}

public enum Frequency { Rare, Occasional, Frequent }
public enum Impact { Low, Medium, High }
public enum Persistence { Easy, Moderate, Difficult }
public enum Severity { Cosmetic = 1, Minor = 2, Major = 3, Catastrophic = 4 }
```
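For example, a frequent, high-impact problem that users cannot easily work around rates as catastrophic:

```csharp
var assessment = new SeverityAssessment
{
    Frequency = Frequency.Frequent,
    Impact = Impact.High,
    Persistence = Persistence.Difficult
};

Console.WriteLine(assessment.CalculatedSeverity); // Catastrophic
```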
## Heuristic Evaluation Plan
**Product/Feature:** [Name]
**Evaluators:** [List 3-5 evaluators]
**Heuristics:** Nielsen's 10 (or custom set)
**Scope:** [Pages/flows to evaluate]
**Timeline:** [Dates]
### Evaluation Sessions
| Evaluator | Session 1 | Session 2 |
|-----------|-----------|-----------|
| [Name 1] | [Date/Time] | [Date/Time] |
| [Name 2] | [Date/Time] | [Date/Time] |
### Materials
- [ ] Heuristics reference sheet
- [ ] Issue logging template
- [ ] Access to product/prototype
- [ ] Screen recording (optional)
Each evaluator reviews the interface independently, logging each issue found:
```csharp
// One issue, as logged by a single evaluator during an independent pass.
public class HeuristicIssue
{
    public Guid Id { get; init; }
    public required string Location { get; init; }
    public required string Description { get; init; }
    public required int HeuristicNumber { get; init; }
    public required string HeuristicName { get; init; }
    public required Severity Severity { get; init; }
    public string? Screenshot { get; init; }
    public string? Recommendation { get; init; }
    public Guid EvaluatorId { get; init; }
}
```
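For example, a finding against heuristic #1 might be logged like this (all values are illustrative):

```csharp
var issue = new HeuristicIssue
{
    Id = Guid.NewGuid(),
    Location = "Checkout > Payment form",
    Description = "Submitting the form gives no feedback, so users click Submit repeatedly.",
    HeuristicNumber = 1,
    HeuristicName = "Visibility of System Status",
    Severity = Severity.Major,
    Recommendation = "Disable the button and show a spinner while the request is in flight."
};
```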
## Issue: [Short Title]
**Location:** [Page/screen/component]
**Heuristic:** #[N] - [Heuristic Name]
**Severity:** [0-4]
### Description
[What is the problem?]
### Evidence
[Screenshot or description of what user sees]
### Impact
[How does this affect users?]
### Recommendation
[Suggested fix]
Merge findings from all evaluators:
```csharp
// A single issue reported by one or more evaluators, with their ratings merged.
public class ConsolidatedIssue
{
    public Guid Id { get; init; }
    public required string Location { get; init; }
    public required string Description { get; init; }
    public required int HeuristicNumber { get; init; }
    public required List<Severity> IndividualRatings { get; init; }

    // Mean of the individual ratings, rounded to the nearest severity level.
    public Severity AverageSeverity =>
        (Severity)Math.Round(IndividualRatings.Average(s => (int)s));

    public int EvaluatorCount => IndividualRatings.Count;
    public required string Recommendation { get; init; }
}
```
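One way to merge findings is to group individual issues by location and heuristic and collect each evaluator's rating; the report's summary counts then fall out of the consolidated list. A minimal sketch (the helper class name is illustrative, and duplicate findings are assumed to share the same `Location` and `HeuristicNumber`):

```csharp
public static class EvaluationReporting
{
    // Merge duplicate findings: same location + heuristic reported by multiple evaluators.
    public static List<ConsolidatedIssue> Consolidate(IEnumerable<HeuristicIssue> findings) =>
        findings
            .GroupBy(f => (f.Location, f.HeuristicNumber))
            .Select(g => new ConsolidatedIssue
            {
                Id = Guid.NewGuid(),
                Location = g.Key.Location,
                Description = g.First().Description,
                HeuristicNumber = g.Key.HeuristicNumber,
                IndividualRatings = g.Select(f => f.Severity).ToList(),
                Recommendation = g.Select(f => f.Recommendation)
                                  .FirstOrDefault(r => r is not null) ?? "To be determined"
            })
            .ToList();

    // Counts for the "Issues by Severity" and "Issues by Heuristic" summary tables.
    public static Dictionary<Severity, int> BySeverity(IEnumerable<ConsolidatedIssue> issues) =>
        issues.GroupBy(i => i.AverageSeverity).ToDictionary(g => g.Key, g => g.Count());

    public static Dictionary<int, int> ByHeuristic(IEnumerable<ConsolidatedIssue> issues) =>
        issues.GroupBy(i => i.HeuristicNumber).ToDictionary(g => g.Key, g => g.Count());
}
```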
# Heuristic Evaluation Report
## Executive Summary
**Product:** [Name]
**Evaluation Date:** [Date]
**Evaluators:** [Names]
**Total Issues Found:** [N]
### Issues by Severity
| Severity | Count |
|----------|-------|
| Catastrophic (4) | [N] |
| Major (3) | [N] |
| Minor (2) | [N] |
| Cosmetic (1) | [N] |
### Issues by Heuristic
| Heuristic | Count |
|-----------|-------|
| #1 Visibility of System Status | [N] |
| #2 Match with Real World | [N] |
| ... | ... |
---
## Detailed Findings
### Catastrophic Issues
#### Issue 1: [Title]
- **Location:** [Where]
- **Heuristic:** #[N] - [Name]
- **Severity:** 4 (Catastrophic)
- **Evaluators:** [N]/[Total] identified
**Description:**
[Detailed description]
**Screenshot:**
[Image]
**Impact:**
[Effect on users]
**Recommendation:**
[How to fix]
---
[Continue for each issue, grouped by severity]
---
## Recommendations
### Immediate Action (Severity 4)
1. [Action item]
2. [Action item]
### High Priority (Severity 3)
1. [Action item]
2. [Action item]
### Medium Priority (Severity 2)
1. [Action item]
---
## Methodology
This evaluation followed Nielsen's heuristic evaluation methodology with [N] independent evaluators. Each evaluator conducted two passes through the interface, documenting issues against the 10 usability heuristics. Findings were consolidated and severity ratings averaged.
---
## Appendices
### A: Heuristics Reference
[Full heuristics definitions]
### B: Raw Findings by Evaluator
[Individual evaluator notes]
## Cognitive Walkthrough

A cognitive walkthrough focuses on learnability: can new users complete specific tasks?
At each step of a task, ask:

1. Will the user try to achieve the right effect?
2. Will the user notice that the correct action is available?
3. Will the user associate the correct action with the effect?
4. If the correct action is performed, will the user see progress?
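Per-step answers can be captured in the same style as the evaluation models above; a minimal sketch (type and property names are illustrative):

```csharp
public enum WalkthroughAnswer { Yes, Maybe, No }

// One step of a cognitive walkthrough: the user action plus the four questions.
public class WalkthroughStep
{
    public required string Action { get; init; }
    public required WalkthroughAnswer TriesRightEffect { get; init; }
    public required WalkthroughAnswer NoticesCorrectAction { get; init; }
    public required WalkthroughAnswer AssociatesActionWithEffect { get; init; }
    public required WalkthroughAnswer SeesProgress { get; init; }
    public string? Issue { get; init; }  // Describe the barrier when an answer is No or Maybe
    public string? Notes { get; init; }
}
```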
## Cognitive Walkthrough: [Task Name]
**Task:** [What user is trying to accomplish]
**User Profile:** [Who is this user?]
**Starting Point:** [Where does the task begin?]
### Step 1: [Action]
| Question | Answer | Issue? |
|----------|--------|--------|
| Will user try to achieve right effect? | [Y/N/Maybe] | [Issue if no] |
| Will user notice correct action? | [Y/N/Maybe] | [Issue if no] |
| Will user associate action with effect? | [Y/N/Maybe] | [Issue if no] |
| Will user see progress? | [Y/N/Maybe] | [Issue if no] |
**Notes:** [Additional observations]
### Step 2: [Action]
[Continue pattern...]
---
## Summary
**Success Likelihood:** [High/Medium/Low]
**Key Barriers:**
1. [Barrier 1]
2. [Barrier 2]
**Recommendations:**
1. [Improvement 1]
2. [Improvement 2]
## Custom Heuristics

| Heuristic | Description |
|---|---|
| Accessibility | Usable by people with disabilities |
| Privacy & Security | Users feel their data is safe |
| Personalization | Adapts to user preferences |
| Memorability | Easy to remember how to use |
| Satisfaction | Pleasant, enjoyable experience |
| Efficiency | Tasks completed with minimal effort |
| Learnability | Easy to learn for new users |
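A custom set can be represented alongside the models above so that logged issues can reference its entries by number. A minimal sketch (class name and numbering are illustrative):

```csharp
public record Heuristic(int Number, string Name, string Description);

public static class CustomHeuristics
{
    // Extends Nielsen's 10 with domain-specific checks such as those in the table above.
    public static readonly IReadOnlyList<Heuristic> Set = new List<Heuristic>
    {
        new(11, "Accessibility", "Usable by people with disabilities"),
        new(12, "Privacy & Security", "Users feel their data is safe"),
        new(13, "Learnability", "Easy to learn for new users")
    };
}
```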
## Related Skills

- usability-testing - User-based evaluation
- accessibility-planning - Accessibility-specific review
- design-system-planning - Consistency evaluation
- user-research-planning - Research method selection