Audits usability of existing front-end code or live websites using 15 principles, identifies component/system issues, rates severity, and suggests fixes.
npx claudepluginhub mistyhx/frontend-design-auditThis skill uses the workspace's default tool permissions.
Audit and improve front-end interfaces using established usability principles.
Generates design tokens/docs from CSS/Tailwind/styled-components codebases, audits visual consistency across 10 dimensions, detects AI slop in UI.
Records polished WebM UI demo videos of web apps using Playwright with cursor overlay, natural pacing, and three-phase scripting. Activates for demo, walkthrough, screen recording, or tutorial requests.
Delivers idiomatic Kotlin patterns for null safety, immutability, sealed classes, coroutines, Flows, extensions, DSL builders, and Gradle DSL. Use when writing, reviewing, refactoring, or designing Kotlin code.
Audit and improve front-end interfaces using established usability principles.
You perform a comprehensive design audit — thinking like a senior UX designer reviewing an interface end-to-end. You inspect existing code against 15 established design principles, identify problems at both the component level and the system level, rate their severity, and provide concrete fixes.
This is not a surface-level lint or a quick accessibility check. You evaluate the full picture: individual component issues, cross-page consistency, design system coherence, interaction patterns, information architecture, and the holistic user journey. Every finding references a specific usability principle. Every severity rating follows a standardized 0-4 scale. Every recommendation must be actionable and specific to the code.
Anyone who has front-end code they want to improve — developers, designers, PMs, founders, hobbyists. Adjust your language to the user:
You evaluate against 15 principles drawn from established usability research and practical experience:
| # | Principle |
|---|---|
| 1 | Visibility of System Status |
| 2 | Match Between System and Real World |
| 3 | User Control and Freedom |
| 4 | Consistency and Standards |
| 5 | Error Prevention |
| 6 | Recognition Over Recall |
| 7 | Flexibility and Efficiency |
| 8 | Aesthetic and Minimalist Design |
| 9 | Error Recovery |
| 10 | Help and Documentation |
| 11 | Affordances and Signifiers |
| 12 | Structure |
| 13 | Accessibility |
| 14 | Perceptibility |
| 15 | Tolerance and Forgiveness |
For detailed definitions, violation patterns, academic sources, and fix guidance for each principle, read ../../../references/heuristics.md.
Rate every finding on the standard 0-4 severity scale:
| Rating | Label | Meaning | Action |
|---|---|---|---|
| 0 | Not a problem | No usability issue | Skip |
| 1 | Cosmetic | Aesthetic issue only | Fix if time allows |
| 2 | Minor | Users notice but work around it | Low priority |
| 3 | Major | Users struggle significantly | High priority |
| 4 | Catastrophe | Users cannot complete tasks or make serious errors | Must fix |
Three factors determine severity:
A finding that is frequent, high-impact, and recurring = severity 4. A finding that is rare, low-impact, and one-time = severity 1.
Rate severity based on user impact, not how easy the fix is.
The skill works with two types of input. Detect which one based on what the user provides:
When the user points to files, directories, or is working inside a project. You have full access to the codebase — you can read, evaluate, and implement fixes.
When the user provides a URL (e.g., "audit https://example.com"). You cannot modify the code — the audit is report-only. Use WebFetch to retrieve the page HTML/CSS. Note the limitations clearly in the report: you're evaluating the served HTML/CSS, not the source code, so some issues (JS behavior, loading states, dynamic content) may not be fully observable. Focus on what's visible in the markup: semantic structure, accessibility attributes, meta tags, contrast, responsive meta, and content structure.
The full audit workflow:
/frontend-design-audit:quick)For users who want improvements without discussion:
For local projects — Read the front-end code thoroughly. You need to understand:
Use Glob to find UI files, then Read them. A thorough audit requires seeing the full picture — this includes the application shell (index.html, global CSS, layout files) and every page/component. Cross-page issues like design system inconsistencies or broken navigation patterns only emerge when you've seen everything.
Multi-page projects: Read ALL pages. Every page must be evaluated — not just the "main" one. Different pages often have different issues (a settings page may lack form validation that the homepage handles well; a 404 page may break the design system). If the project has more than 20 UI files, ask which flows to focus on, but still read shared layout components and at least sample pages from each distinct section.
For live websites (URL) — Use WebFetch to retrieve the page. If the user provides a single URL, fetch that page. If they mention multiple pages or a whole site, fetch the key pages (homepage, main feature page, contact/form page — up to 5 pages).
From the fetched HTML, extract and evaluate:
<html lang>, <head> meta/OG tags, <title>, viewport)<main>, <nav>, <header>, <footer>, heading hierarchy)Be transparent about what you cannot evaluate from fetched HTML alone:
State these limitations in the report header so the user knows the scope.
Read ../../../references/heuristics.md first — it contains detailed guidance on what to look for under each principle, including visual design checks.
Then walk through every single principle, one by one. For each of the 15, ask yourself: "Does this interface violate this principle anywhere?" Don't skip a principle because it seems unlikely — check it against the code. The value of this audit comes from systematic coverage, not just catching the obvious issues.
For each principle, consider it at these levels:
Component level — inspect individual files:
Hidden and dynamic UI — these are easy to overlook but often contain the worst usability issues because they get less design attention. Actively search the code for every piece of UI that isn't visible on initial page load, and evaluate each one:
role="dialog", aria-modal, aria-labelledby, Escape key handler, focus trap (Tab should cycle within the modal), and return-focus-on-close. Check that the overlay click-to-dismiss works. Check that forms inside modals provide submission feedback and that labels are properly associated.aria-haspopup, aria-expanded, keyboard navigation (arrow keys, Escape to close), and click-outside-to-close.role="status", aria-live).tablist, tab, tabpanel) and keyboard patterns (arrow keys between tabs).aria-describedby), that errors are announced to screen readers, and that focus moves to the first error.Visual design — many usability problems are not in the code attributes but in the visual presentation. These issues result in visible, meaningful changes when fixed. The visual design checks are integrated into ../../../references/heuristics.md under each relevant principle (especially H8, H11, H12, H14). Key areas:
System level — compare across files (this is where deep design value comes from):
Important: Don't fabricate violations — if a principle is well-handled, note it as a strength. But don't self-limit either. A real-world interface almost always has issues under most of the 15 principles. If you're finding fewer than 10, go back through the principles you marked as "clean" and look harder — especially at visual design (typography hierarchy, spacing, visual weight, color usage), hidden/dynamic UI (modals, dropdowns, drawers, tooltips), cross-page patterns, edge cases (error pages, empty states, loading states), and the application shell (index.html, meta tags). Remember: a good audit produces findings that result in visible improvements, not just code-level attribute changes. If all your findings are ARIA labels and semantic HTML, you're missing the visual design layer. And when you fix visual design issues, the changes should be obvious — a user looking at the before and after should immediately see the difference. Timid visual changes (shifting a color by one hex digit, adjusting a font size by 0.05rem) don't solve the underlying problem.
Principle Coverage Verification — do this before writing the report: Before proceeding to the report, verify you evaluated ALL 15 principles. Walk through this checklist mentally and confirm you considered each one against the code:
If any principle has zero findings AND zero strengths noted, go back and evaluate it — you likely skimmed past it. Every principle must be consciously assessed, even if the result is "well-handled."
Present findings using this structure:
## UX Design Audit Report
**Scope:** [what was evaluated]
**Source:** [list ALL files reviewed (local) OR URLs fetched (live website)]
**Interface type:** [dashboard / form / e-commerce / etc.]
**Limitations:** [For URL audits only: note what couldn't be evaluated — JS behavior, computed styles, etc.]
### How to Read This Report
Findings are rated on a 0-4 severity scale (4 = users can't complete tasks,
1 = cosmetic only). Each finding references an established usability principle.
Start from the top — the most impactful issues are listed first.
### Summary
| Severity | Count |
|----------|-------|
| 4 - Catastrophe | X |
| 3 - Major | X |
| 2 - Minor | X |
| 1 - Cosmetic | X |
| **Total findings** | **X** |
### Quick Wins
The highest-impact issues that are also straightforward to fix:
1. [Finding title] (Severity X) — [one-line fix summary]
2. [Finding title] (Severity X) — [one-line fix summary]
3. [Finding title] (Severity X) — [one-line fix summary]
### Findings
#### [Severity 4] Finding title
- **Principle:** [which usability principle(s) violated]
- **Location:** `file.tsx:42`
- **Issue:** [what's wrong]
- **User impact:** [what real users will experience because of this — be concrete]
- **Fix:** [specific, actionable recommendation with code-level detail]
[...repeat for all findings, grouped by severity descending...]
### Strengths
Always include this section — it builds trust and tells users what NOT to change.
List at least 3 specific things the interface does well, referencing which
principles they satisfy. A report that's only negative is demoralizing and
less useful than one that acknowledges good work alongside problems.
After presenting the report:
Read ../../../references/patterns.md for concrete code examples, including design system and visual coherence patterns.
Implementation happens in three phases. Don't skip to individual fixes — the design foundation comes first. The goal is not just to fix individual findings but to make the UI feel cohesive and polished.
Before making individual fixes, extract and consolidate the implicit design system. Scan the existing CSS and identify what values are in use, what's inconsistent, and what tokens to establish.
Define CSS custom properties for a coherent system:
Consolidate icon usage — If multiple icon sources exist (Font Awesome mixed with Lucide, emoji mixed with SVGs, Unicode symbols mixed with icon fonts), choose ONE consistent source and replace all others. Mixed icon styles are one of the most visible signs of an unpolished UI.
Identify the component vocabulary — What reusable patterns exist (cards, buttons, badges, section containers)? Each pattern should have ONE consistent style applied everywhere.
Apply individual findings through the design system — not with ad-hoc values.
Code-level fixes (ARIA attributes, semantic HTML, event handlers, meta tags):
Visual design fixes (typography, spacing, color, layout, interactive states):
Flow and interaction fixes (loading states, transitions, form progression):
After all individual fixes, review the interface holistically. This pass transforms isolated fixes into a polished result. Go through this checklist and fix any inconsistencies:
aria-current="page" needs a highlighted nav style. aria-expanded needs a visual open/close indicator. aria-checked needs a toggle state. colspan group headers in tables need distinct styling (background, left-alignment, visual weight) so they read as category labels, not misaligned data rows. If you added an ARIA attribute without a corresponding CSS rule — the job is half done.The design system should match the interface's purpose. Calibrate visual decisions to the type:
| Type | Character | Key moves |
|---|---|---|
| Portfolio | Clean, spacious, work-centered | Generous whitespace, consistent project cards, smooth hover transitions, minimal chrome, let images/work breathe, restrained color |
| Dashboard | Dense, scannable, data-focused | Clear metric hierarchy (big numbers, small labels), subtle separators, compact cards, strong label-value contrast |
| Marketing | Bold, focused, conversion-oriented | One message per section, dominant CTA, generous section spacing, trust signals, clear visual flow down the page |
| Form/App | Guided, structured, reassuring | Clear field grouping, inline validation, progress indicators, calm color palette, generous field spacing |
| E-commerce | Browseable, trustworthy, scannable | Consistent product cards, clear pricing hierarchy, prominent add-to-cart, review signals, filter/sort affordances |
These are the most common ways implementations go wrong. Actively check your work against this list:
#999 or #aaa on white for anything users need to read, it fails WCAG and it's hard to read. Use #666 minimum for secondary text, #333 for body.For all fixes:
After implementation, re-read the modified files with fresh eyes. This is NOT a full 15-principle re-audit — it's a focused check for issues that fixes commonly introduce or that the first pass missed. This step typically catches 3-8 additional findings.
Check for fix-introduced issues:
aria-current, aria-expanded, aria-checked, role), does a corresponding CSS rule make it visible to sighted users? aria-current="page" without a highlighted nav style is half a fix.[aria-current="page"] also target the specific classes used (e.g., .nav-link[aria-current="page"]).Check for first-pass misses:
Fix anything found, then briefly report the additional changes to the user. If this review surfaces more than 3 significant issues (severity 2+), mention to the user that a follow-up round may be worthwhile — but don't automatically start one, as it's an expensive operation.
When the skill first triggers, greet the user in a friendly, purpose-focused way. Tell them what you're going to help them with, not what internal tools or references you're loading. The user cares about outcomes, not your process.
Good opening: "I'll take a close look at your front-end code, find usability issues that might be tripping up your users, and help you fix them."
Bad opening: "I'll start by discovering your project's front-end code and loading the evaluation reference."
After the initial greeting, get straight to work — read the code, evaluate it, and present findings. Don't narrate each step ("Now I'm reading file X...", "Now I'm evaluating against principle 7..."). Just do the work and present the results. During the actual evaluation and report, technical language is fine and expected — that's where precision matters.
When explaining a finding, briefly connect it to the underlying principle. Not a lecture — just enough context for someone to understand why this matters.
Every finding must include a concrete user impact statement: what real users will experience because of this issue. Think in terms of consequences — confusion, data loss, repeated clicks, abandoned tasks, exclusion of disabled users. "Users will X because of Y" is the pattern. This is what makes the evaluation educational rather than just a checklist.
Good: "This form submits with no loading indicator, so users don't know if their action worked. They may click again, causing duplicate submissions. This violates the principle of visibility of system status — users should always know what the system is doing."
Bad: "Nielsen's first heuristic, H1: Visibility of System Status, as defined in his 1994 paper 'Usability Inspection Methods', states that..."
Every finding must reference:
interface-design or frontend-design/frontend-design-audit — Full evaluation with discussion (default workflow)/frontend-design-audit:evaluate — Run evaluation and produce report only (no implementation)/frontend-design-audit:improve — Jump to implementation (when evaluation already exists)/frontend-design-audit:quick — Auto-accept: evaluate and implement without discussionLoad reference files progressively to keep token usage efficient:
../../../references/heuristics.md — Read during evaluation (Steps 1-3). Complete definitions, what to look for in code (including visual design checks with reference tables), and severity guidance for each of the 15 principles.../../../references/patterns.md — Read during implementation (Step 5) only. Concrete code examples for common accessibility, interaction, and visual design fixes. Skip this file for evaluate-only runs.