Use this skill whenever the user asks for a UX audit, UI review, usability evaluation, heuristic evaluation, UX scoring, interface audit, accessibility audit, UX report, or wants to evaluate any existing interface against best practices. Also use when the user says "check the UX", "review this UI", "is this accessible", "score this page", or asks for feedback on an implemented interface. If someone has built something and wants to know if the UX is good, this skill provides the evaluation framework.
This skill provides a structured methodology for evaluating user interfaces against established usability heuristics, accessibility standards, and UX best practices. It equips the practitioner with severity rating scales, a multi-dimensional scoring rubric, prioritization frameworks, and a complete report template to deliver actionable, evidence-based audit findings.
A UX audit is a systematic evaluation of a digital product's user experience. Unlike usability testing (which observes real users), a heuristic audit is conducted by an evaluator who measures the interface against codified principles — primarily Nielsen's 10 Usability Heuristics, WCAG 2.2 accessibility guidelines, and domain-specific best practices for the product category (SaaS dashboard, landing page, admin portal, etc.).
The output of a UX audit is a prioritized list of findings, each rated by severity, accompanied by specific recommendations and an overall UX health score. This skill covers the full lifecycle: scoping the audit, conducting the analysis, scoring findings, generating the report, and prioritizing fixes.
Use this scale to classify every issue discovered during the audit:
| Rating | Label | Description | Required Action |
|---|---|---|---|
| 0 | Not a problem | Heuristic violation detected but no real user impact | Document only |
| 1 | Cosmetic | Minor visual inconsistency; does not affect task completion | Fix if time allows |
| 2 | Minor | Users are slowed down but can still complete tasks | Schedule for next sprint |
| 3 | Major | Users frequently fail or get stuck; significant frustration | Fix before next release |
| 4 | Catastrophic | Users cannot complete critical tasks; data loss risk; security issue | Fix immediately |
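As a sketch, the severity scale above can be encoded for tooling that records and triages findings (the names and structure here are illustrative, not part of the skill's reference files):

```python
from enum import IntEnum

class Severity(IntEnum):
    """Nielsen-style 0-4 severity scale from the table above."""
    NOT_A_PROBLEM = 0
    COSMETIC = 1
    MINOR = 2
    MAJOR = 3
    CATASTROPHIC = 4

# Required action for each rating, mirroring the table.
REQUIRED_ACTION = {
    Severity.NOT_A_PROBLEM: "Document only",
    Severity.COSMETIC: "Fix if time allows",
    Severity.MINOR: "Schedule for next sprint",
    Severity.MAJOR: "Fix before next release",
    Severity.CATASTROPHIC: "Fix immediately",
}

print(REQUIRED_ACTION[Severity.MAJOR])  # Fix before next release
```

Using `IntEnum` keeps ratings comparable (`Severity.MAJOR > Severity.MINOR`), which is useful when sorting findings for the report.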
Three factors determine severity (following Nielsen's model):

- **Frequency** — how often users encounter the issue.
- **Impact** — how hard the issue is to overcome when encountered.
- **Persistence** — whether users can learn to work around it, or are bothered every time.
Refer to references/severity-scoring.md for the full severity and prioritization framework, including the Severity x Frequency matrix, RICE scoring, Impact-Effort matrix, and MoSCoW method.
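Of the prioritization tools mentioned above, RICE has a simple closed form: (Reach × Impact × Confidence) / Effort. A minimal sketch, using the standard RICE conventions (the parameter scales below are common practice; consult references/severity-scoring.md for the skill's exact framework):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE prioritization: (Reach * Impact * Confidence) / Effort.

    reach:      users affected per period
    impact:     typically 0.25 (minimal) to 3 (massive)
    confidence: 0.0 to 1.0
    effort:     person-months, must be positive
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# A checkout bug hitting 2000 users/month, high impact (2),
# 80% confidence, half a person-month to fix:
print(rice_score(2000, 2, 0.8, 0.5))  # 6400.0
```

Higher scores mean better return on effort; computing RICE for every Major and Catastrophic finding gives a defensible fix order.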
Score each dimension from 1 (Poor) to 10 (Excellent). The sum produces a Global UX Score out of 100.
| # | Dimension | 1-3 (Poor) | 4-6 (Adequate) | 7-10 (Excellent) |
|---|---|---|---|---|
| 1 | Navigation | Users cannot find core features | Users find features with some searching | Users find everything instantly |
| 2 | Clarity | Users do not understand what to do | Users figure it out with effort | Intent is immediately clear |
| 3 | Feedback | No response to user actions | Basic loading/success states | Rich, contextual feedback for every action |
| 4 | Error Handling | Errors cause data loss or dead ends | Errors caught with generic messages | Errors prevented; specific recovery guidance |
| 5 | Consistency | Every page feels like a different app | Most patterns are consistent | Complete pattern consistency |
| 6 | Accessibility | Keyboard unusable, no alt text | Partial WCAG compliance | Full WCAG 2.2 AA compliance |
| 7 | Performance | LCP > 5s, significant CLS | LCP 2.5-4s, some layout shift | LCP < 2.5s, CLS < 0.1, FID < 100ms |
| 8 | Mobile | Broken on mobile | Functional but awkward on mobile | Native-quality mobile experience |
| 9 | Visual Design | Cluttered, no hierarchy | Clean with some hierarchy issues | Clear hierarchy, purposeful whitespace, polished |
| 10 | Content | Jargon-heavy, unclear | Mostly clear, some jargon | Scannable, user-centered, helpful |
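The summation described above is straightforward but worth making explicit, including the validation it implies. A minimal sketch (dimension names taken from the rubric; the function itself is illustrative):

```python
DIMENSIONS = [
    "Navigation", "Clarity", "Feedback", "Error Handling", "Consistency",
    "Accessibility", "Performance", "Mobile", "Visual Design", "Content",
]

def global_ux_score(scores: dict) -> int:
    """Sum the ten 1-10 dimension scores into a Global UX Score out of 100."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    for name in DIMENSIONS:
        if not 1 <= scores[name] <= 10:
            raise ValueError(f"{name} must be scored 1-10, got {scores[name]}")
    return sum(scores[d] for d in DIMENSIONS)

# An interface scoring 7 ("Excellent" band) on every dimension:
example = {d: 7 for d in DIMENSIONS}
print(global_ux_score(example))  # 70
```

Rejecting missing or out-of-range dimensions keeps the /100 denominator honest: a partial audit should be reported as such, not silently scored lower.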
Interpreting the score:
Follow these five phases to conduct a thorough, repeatable UX audit.
Define the boundaries of the audit before beginning any evaluation.
Walk through each user flow methodically, screen by screen.
Work through references/audit-checklist.md. This checklist covers 10 categories: Navigation, Visual Design, Content/Messaging, Forms/Input, Performance, Accessibility, Mobile/Responsive, Error Handling, Loading/Feedback, and Conversion.

Apply structured scoring to every finding.
Assemble findings into a structured, actionable report.
Use references/report-template.md as the starting structure.

Translate findings into an actionable plan.
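One common way to translate findings into a plan is the Impact-Effort 2x2 matrix mentioned earlier. A minimal sketch, assuming both axes are rated 1-10 with 5 as the high/low cut-off (the quadrant labels follow common usage; the reference file may name them differently):

```python
def impact_effort_quadrant(impact: float, effort: float, threshold: float = 5.0) -> str:
    """Classify a finding on a 2x2 Impact-Effort matrix (both rated 1-10)."""
    high_impact = impact >= threshold
    high_effort = effort >= threshold
    if high_impact and not high_effort:
        return "Quick win: do first"
    if high_impact and high_effort:
        return "Major project: plan deliberately"
    if not high_impact and not high_effort:
        return "Fill-in: do when convenient"
    return "Avoid: low return for the cost"

print(impact_effort_quadrant(impact=8, effort=2))  # Quick win: do first
```

Sorting the action plan quick wins first, then major projects, gives stakeholders visible progress while the larger fixes are scheduled.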
This skill includes three reference documents for detailed application:
references/audit-checklist.md — Comprehensive checklist organized into 10 categories (Navigation, Visual Design, Content/Messaging, Forms/Input, Performance, Accessibility, Mobile/Responsive, Error Handling, Loading/Feedback, Conversion). Use this as the primary evaluation instrument during Phase 2.
references/severity-scoring.md — Complete scoring systems including Nielsen's severity scale, the three-factor severity model, the Severity x Frequency prioritization matrix, RICE formula, Impact-Effort 2x2 matrix, MoSCoW method, and the full 10-dimension scoring rubric with detailed descriptions for scores of 1, 3, and 5.
references/report-template.md — Ready-to-fill audit report template with sections for Executive Summary, Methodology, Scores by Category, Issues by Severity, Nielsen Heuristics Evaluation, Prioritized Recommendations, and Action Plan.
During the audit, watch for these frequently encountered anti-patterns:
Flag any dark pattern as a severity 4 (Catastrophic) finding. Dark patterns erode trust and may violate regulations (FTC guidelines, EU Digital Services Act).
A heuristic audit identifies likely problems based on principles. Recommend supplementing with usability testing when:
A combined approach — heuristic audit for breadth, usability testing for depth — produces the most reliable results.