Design critique methodology and real-world case studies. Structured critique frameworks (Liz Lerman, 30/60/90), product deep-dives (Stripe, Linear, Notion, Airbnb, Figma, Arc), redesign failure analysis, and actionable feedback protocols.
Design critique is the single most cost-effective quality intervention in the product development lifecycle. Research from IBM Systems Sciences Institute and subsequent industry analyses consistently demonstrate that catching a design flaw during the critique phase costs roughly one-tenth of what it costs to fix the same flaw after launch. The ratio grows more extreme with scale: a navigation architecture mistake caught in wireframes costs a team a few hours of discussion and iteration; the same mistake discovered after a production release with millions of active users can require months of re-engineering, user re-education, and brand trust recovery.
Beyond cost avoidance, critique serves three compounding functions. First, it raises the quality floor across an entire design organization. When designers regularly expose their work to structured feedback, the weakest output improves faster than any training program could achieve. Second, it builds shared design vocabulary. Teams that critique together develop a common language for evaluating hierarchy, affordance, information density, and emotional tone — making future collaboration dramatically more efficient. Third, it distributes design knowledge. Junior designers absorb senior judgment through critique participation, and senior designers stay honest through exposure to fresh perspectives.
The inverse is equally instructive. Organizations that skip or perform superficial critique consistently ship products that require expensive post-launch patches, generate higher support ticket volumes, and suffer the slow erosion of user trust that comes from shipping half-considered experiences.
The formal studio critique gathers 4-8 participants in a scheduled session, typically 45-60 minutes, with defined roles: presenter, facilitator, critics, and note-taker. The presenter shares work at a specific fidelity level and frames the feedback they need. The facilitator manages time, enforces rules, and ensures all voices are heard. Critics provide structured feedback. The note-taker captures decisions, open questions, and action items. Studio critique works best for major design milestones — concept exploration, mid-fidelity flow review, and pre-handoff polish.
Desk critique is the spontaneous, low-ceremony version: a designer turns to a colleague and says, "Can you look at this for two minutes?" The value of desk critique lies in its frequency and low activation energy. It catches small issues — alignment inconsistencies, confusing label text, missing states — before they compound. The risk is that without structure, desk critique can devolve into vague affirmation ("looks good") or ungrounded opinion ("I'd make that blue"). Even in informal settings, the critic should anchor feedback to a principle, heuristic, or user scenario.
Distributed teams often cannot gather synchronously. Async critique uses Figma comment threads, annotated screenshots, or recorded Loom walkthroughs to deliver structured feedback across time zones. The presenter records a 3-5 minute walkthrough explaining context, decisions, and open questions. Critics respond with timestamped or pinned comments. Async critique requires more discipline than synchronous formats because the feedback loop is slower and misunderstandings are harder to resolve in real time. Best practice: require critics to label each comment with a type tag — [Question], [Concern], [Suggestion], [Praise] — to prevent ambiguous annotations.
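One way to enforce the tag convention is to have a review script classify each comment and flag untagged ones before the thread is considered resolved. A minimal Python sketch — the tag vocabulary comes from the convention above, but the function names and audit logic are illustrative assumptions, not part of any existing tool:

```python
import re

# Tag vocabulary from the async-critique convention described above.
VALID_TAGS = {"Question", "Concern", "Suggestion", "Praise"}
TAG_PATTERN = re.compile(r"^\[(\w+)\]\s*(.+)$", re.DOTALL)

def classify_comment(comment: str):
    """Return (tag, body) for a tagged comment, or (None, body) if untagged."""
    match = TAG_PATTERN.match(comment.strip())
    if match and match.group(1) in VALID_TAGS:
        return match.group(1), match.group(2).strip()
    return None, comment.strip()

def audit_thread(comments):
    """Split a comment thread into tagged feedback and untagged comments to flag."""
    tagged = {tag: [] for tag in VALID_TAGS}
    untagged = []
    for comment in comments:
        tag, body = classify_comment(comment)
        if tag:
            tagged[tag].append(body)
        else:
            untagged.append(body)
    return tagged, untagged
```

Anything that lands in the untagged bucket gets bounced back to the critic for a type tag before the presenter has to respond to it.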
Before exposing work to others, designers should run a structured self-critique against a checklist. This is not about catching every issue — it is about eliminating the obvious ones so that group critique time is spent on genuinely hard problems. A strong self-critique checklist covers: Does this design handle empty states, loading states, and error states? Is the visual hierarchy clear within 3 seconds? Does the primary action have the strongest affordance on the screen? Are labels written in user language rather than system language? Does this meet WCAG AA contrast and target size requirements? Self-critique builds the habit of evaluation rigor that transfers to every other critique format.
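Because the checklist is stable, it can be encoded as data and run the same way before every critique. A minimal sketch, assuming the designer supplies yes/no answers per item (the item wording mirrors the checklist above; the function name is an assumption):

```python
# Self-critique items from the checklist above, encoded as data.
SELF_CRITIQUE_CHECKLIST = [
    "Handles empty, loading, and error states",
    "Visual hierarchy is clear within 3 seconds",
    "Primary action has the strongest affordance on screen",
    "Labels use user language, not system language",
    "Meets WCAG AA contrast and target-size requirements",
]

def run_self_critique(answers):
    """answers: dict mapping checklist item -> bool.

    Returns the items the designer has not yet resolved; missing
    answers are treated as unresolved."""
    return [item for item in SELF_CRITIQUE_CHECKLIST
            if not answers.get(item, False)]
```

An empty return value means the obvious issues are cleared and group critique time can go to the genuinely hard problems.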
Liz Lerman developed the Critical Response Process in the performing arts, and it has been widely adopted in design because it solves the core problem of critique: how to deliver honest, useful feedback without triggering defensiveness that closes the presenter's mind. The process has four steps, executed in strict order.
Step 1: Statements of Meaning. Critics begin by articulating what was meaningful, interesting, surprising, or effective about the work. This is not empty praise — it is specific identification of what is working and why. "The progressive disclosure of the pricing tiers respects the user's decision-making process" is a statement of meaning. "Looks nice" is not. This step ensures the presenter knows which elements to protect as they iterate.
Step 2: Artist's Questions. The presenter (the "artist") asks questions about aspects they are uncertain about. "I'm not sure whether the secondary navigation is discoverable enough — what was your experience finding it?" This step gives the presenter control over the direction of feedback, ensuring they get input on their actual concerns rather than whatever the critics feel like discussing.
Step 3: Neutral Questions. Critics ask non-leading questions to understand the presenter's intent. "What was your rationale for placing the filters above the results rather than in a sidebar?" is neutral. "Don't you think the filters would work better in a sidebar?" is not — it embeds an opinion. Neutral questions surface the reasoning behind decisions, which often reveals whether a choice was deliberate or accidental.
Step 4: Opinion Time. Only after the first three steps do critics offer direct opinions, and only with the presenter's permission. The facilitator asks: "Sarah has an opinion about the onboarding flow — would you like to hear it?" The presenter can accept or defer. When opinions are offered, they must be grounded: "I believe the three-step onboarding will cause drop-off because each step requires a decision that new users aren't equipped to make yet — research from the ux-research-methods reference shows progressive onboarding outperforms upfront configuration by 40%."
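Since the four steps must run in strict order, a facilitator checklist or tool can encode them as an ordered agenda that cannot be skipped ahead. A Python sketch — the minute allocations are assumptions for a 50-minute session, not part of Lerman's process:

```python
from dataclasses import dataclass

@dataclass
class CritiqueStep:
    name: str
    speaker: str   # who holds the floor during this step
    minutes: int   # illustrative allocation, not prescribed by the process

# The Critical Response Process, in its strict order.
CRP_AGENDA = [
    CritiqueStep("Statements of Meaning", "critics", 10),
    CritiqueStep("Artist's Questions", "presenter", 10),
    CritiqueStep("Neutral Questions", "critics", 15),
    CritiqueStep("Opinion Time (with permission)", "critics", 15),
]

def next_step(completed: int):
    """Return the next step given how many are done; None when finished.

    Steps may only run in sequence, so the only valid transition is
    from step N to step N+1."""
    return CRP_AGENDA[completed] if completed < len(CRP_AGENDA) else None
```

The point of the encoding is the ordering constraint: opinions simply are not reachable until the first three steps are complete.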
At low fidelity, the design is rough — sketches, early wireframes, concept maps. Critique at this stage should focus exclusively on strategy and structure. Is this solving the right problem? Is the information architecture logical? Are the core user flows sound? Does the mental model match user expectations? Feedback about color, typography, or pixel alignment at this stage is not just premature — it is actively harmful because it pulls attention away from foundational decisions that are expensive to change later.
At mid fidelity, the design has defined layout, component selection, content hierarchy, and interaction patterns, but visual polish is incomplete. Critique should focus on interaction design, content strategy, and pattern consistency. Do the interaction patterns follow platform conventions? Is the content hierarchy clear? Are edge cases handled — empty states, error states, overflow, truncation? Does the flow feel efficient, or are there unnecessary steps? This is also the right time to evaluate systematically against Nielsen's heuristics.
At high fidelity, the design is near-final. Critique should focus on visual polish, micro-interactions, accessibility compliance, responsive behavior, and implementation feasibility. Are spacing and alignment consistent with the design system tokens? Do the animations serve a functional purpose, or are they decorative? Does the design meet WCAG AA standards? Will engineering encounter ambiguity during implementation? Feedback at this stage should be specific and scoped — this is not the time to question foundational architecture.
Design by committee: when every stakeholder's feedback is treated as equally valid and equally mandatory, the result is a design that satisfies no one. The solution is to identify a single decision-maker before the critique begins. Everyone's feedback is heard; one person decides which feedback to act on.
The HiPPO effect (the highest-paid person's opinion): when the most senior person in the room speaks first or speaks with implied authority, other participants self-censor. The solution is to have the most senior person speak last, or to use anonymous first-round feedback (written sticky notes or a digital equivalent) before open discussion.
Vague feedback: "make it pop," "it feels off," "I don't love it" — these are emotional reactions masquerading as design feedback, and none of them is actionable. The solution is to require every critique comment to reference a specific heuristic, design principle, or user scenario. "The call-to-action doesn't have sufficient visual weight relative to the surrounding elements, which violates hierarchy principles" is actionable. "Make it pop" is not.
Opinion disguised as evidence: "I prefer left-aligned navigation" is a personal preference. "Left-aligned navigation tests 15% faster for discovery in information-dense dashboards, according to Baymard Institute research" is evidence. Critique must distinguish between the two. When evidence is unavailable, the appropriate response is to flag the question for user testing rather than defaulting to the loudest opinion.
Bike-shedding: spending 30 minutes debating button border-radius while ignoring a fundamental flow problem. The solution is time-boxing: allocate critique time in proportion to the impact of each decision. The facilitator's primary job is preventing bike-shedding by redirecting discussion to higher-impact topics.
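Proportional time-boxing can be made mechanical: score each agenda topic for impact before the session, then split the minutes accordingly. A hedged sketch — the function name and the 1-5 impact scale are assumptions:

```python
def allocate_critique_time(topics, total_minutes=45):
    """Allocate session minutes proportional to each topic's impact score.

    topics: dict mapping topic name -> impact score (e.g. 1-5).
    Returns dict of topic -> whole minutes; the rounding remainder
    goes to the highest-impact topic."""
    total_impact = sum(topics.values())
    allocation = {topic: int(total_minutes * score / total_impact)
                  for topic, score in topics.items()}
    top = max(topics, key=topics.get)
    allocation[top] += total_minutes - sum(allocation.values())
    return allocation
```

With an impact-5 flow problem and an impact-1 border-radius debate on a 45-minute agenda, the flow problem gets 38 minutes and the border-radius gets 7 — the facilitator's redirect, expressed as arithmetic.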
When asked to conduct a design critique or analyze a product, use the reference materials in this skill as follows:
- references/critique-methodology.md for facilitator guides, session formats, feedback frameworks, and scoring rubrics.
- references/product-deep-dives.md for detailed case studies of world-class products, including Stripe, Linear, Notion, Airbnb, Figma, Arc Browser, Duolingo, Vercel, Apple Music vs. Spotify, and Slack.
- references/redesign-failure-analysis.md for detailed post-mortems of major product redesign failures, including Snapchat, Windows 8, Digg, Twitter/X, Google Plus, Sonos, Healthcare.gov, the Reddit API changes, the YouTube dislikes removal, and Skype.

Always ground critique feedback in specific principles, heuristics, or evidence. Never offer unanchored opinions. When principles conflict, name the tradeoff explicitly and recommend how to resolve it — through user testing, business priority alignment, or design principle hierarchy.
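If this routing is automated, it reduces to a lookup from request topic to reference file. A hypothetical sketch — only the three file paths come from this skill; the topic keys and the default fallback are assumptions:

```python
# Hypothetical routing table from critique-request topic to reference file.
REFERENCE_ROUTES = {
    "session format": "references/critique-methodology.md",
    "facilitation": "references/critique-methodology.md",
    "case study": "references/product-deep-dives.md",
    "redesign failure": "references/redesign-failure-analysis.md",
}

def route_request(topic: str) -> str:
    """Return the reference file for a topic.

    Falls back to the methodology guide, which covers the general
    session formats and feedback frameworks."""
    return REFERENCE_ROUTES.get(topic.lower(),
                                "references/critique-methodology.md")
```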