Applies the Perception-First Design (PFD) framework to evaluate, derive, and analyze UIs, landing pages, emails, ads, marketing copy, and communications through the lens of human perception psychology.
Install:

```
npx claudepluginhub skovalik/perception-first-design --plugin perception-first-design
```

This skill uses the workspace's default tool permissions.
**Author:** Stefan Kovalik / Aurochs
**Framework version:** 3.6 (2026-04-20). ~100 citations; international research integration; a Methodology Siblings section; a Cultural Calibration meta-rule; 7 new layer rules across L0–L4.
**Source of truth:** framework/PERCEPTION-FIRST-DESIGN.md — when framework layer descriptions change, update this skill's layer summaries to match. The framework is canonical; this skill is derived.
**License:** CC BY-SA 4.0. PFD™ is a trademark of Stefan Kovalik (USPTO serial 99686343). Practice exemption: applying PFD in your work doesn't trigger share-alike. See LICENSE and NOTICE in the plugin root.
**Path conventions:** All paths in this document are plugin-root-relative. Skill-local paths (references/...) resolve from this skill's directory; repo-level paths (framework/..., corpus/...) resolve from the plugin root.
A psychology-backed design framework grounded in how human perception actually works. Not "make it pretty" — design for how people perceive, process, and decide.
The thesis: Users don't think — until you make them. Attention is OFF by default. The design question isn't "don't waste their attention" — it's "when you activate their attention, what do they think about, how do they feel, and what do they do next?"
Applies to anything that communicates with humans: interfaces, landing pages, emails, ads, copy, presentations, proposals, pitches, product descriptions, documentation. The five layers exist to produce pre-verbal arousal — the viewer's nervous system firing before their analytical mind engages.
**Three Modes**

**Mode 1: PFD Evaluation (Checklist).** Walk an existing design through the 5 layers. Each layer validates or flags. Good for quick audits.

**Mode 2: PFD Derivation (Generative).** Work bottom-up through the 5 layers. Each layer produces a hard REQUIREMENT. The solution emerges from the accumulated constraints — not from intuition or domain convention.

**Mode 3: PFD Analysis (Descriptive).** Walk the 5 layers as predictive lenses, not as constraints. Output cascading consequences, trade-offs, and integrative compounds. Good for hypothetical change questions ("what happens if X"), mechanism questions ("why does X work"), and behavioral observations ("users say X but do Y, why"). Produces results, not recommendations. The cognitive contract is descriptive, not prescriptive.
When the skill activates without an explicit command (e.g., "pfd this", "run PFD on this"), inspect the input and pick the mode:
- Mode 1 — Evaluation, when the input IS an artifact (a URL, screenshot, or existing design)
- Mode 2 — Derivation, when the input IS a problem (a goal or design question to solve)
- Mode 3 — Analysis, when the input IS a hypothetical, mechanism, or observation ("what happens if X", "why does X work", "users say X but do Y")
Ambiguous? Ask once:
"Three options: evaluate something existing (URL/screenshot), solve a design problem (goal/question), or analyze a mechanism or hypothetical (what-happens-if / why). Which?"
Edge cases:
Forced via command:
- `/perception-first-design:solve` — always Mode 2 (derive a solution)
- `/perception-first-design:evaluate` — always Mode 1 (audit an artifact)
- `/perception-first-design:analyze` — always Mode 3 (descriptive cascade)
- `/perception-first-design:all` — composite; run all three modes on the same input

Fix bottom-up. Upstream failures block everything downstream.
"Can't perceive anything without the bandwidth to do so."
In practice: Reduce choices. Progressive disclosure. Smart defaults. Delete unnecessary fields. Clean background noise that users never consciously notice (inconsistent spacing, off-system colors). Their WM pays the bill regardless. Failure signals: Form abandonment, high bounce on option-heavy pages, "where do I click?"
"Your site has 50 milliseconds."
In practice: Hero/opening = thesis statement. Visual quality must match price point. Audit for uncanny-valley triggers (off AI photos, uncanny animation timing, asymmetric faces). Failure signals: High bounce + low time-on-page. "Looks sketchy." Competitors with worse products winning.
"If it's easy to process, it feels true."
In practice: 2 fonts max. 3-4 colors. Consistent spacing. Consistent voice and tone. Design for felt fluency, not metric-passing. Failure signals: Inconsistent branding across touchpoints. Visual/verbal quality ≠ price point.
"It's not for you. It's for them."
In practice: Design for what analytics shows, not what surveys say. Test emotional response, not stated preference. Fix visual coherence before rewriting copy when trust data is poor. Failure signals: "Users say they want X but do Y." Internal team preferences ≠ user behavior.
"Build the trail."
"Fewer options" is context-dependent. Reducing choices reduces paralysis at decision points — purchase pages, one-time commitments, unfamiliar options. It does NOT apply in expert tool palettes, browse/reference interfaces, or workspace contexts where option reduction harms productivity. The principle is: simplify decision points, not the entire environment.
In practice: CTAs as natural resolution of the experience. Progressive trust-building. No dark patterns. Match option density to context: sparse at conversion points, rich in expert workspaces. For risk/price/uncertainty: demonstrate via interaction, not description. Failure signals: Users complete tasks but feel manipulated. High conversion but low retention. Experts frustrated by hidden/reduced tooling.
**Derivation Protocol (Mode 2)**

Rule Zero: Do NOT propose any solution until all 5 layers are analyzed and requirements accumulated.
What is the user trying to do? What prevents them? What do they experience? Do NOT describe the problem as missing features or comparisons to other tools.
For each layer (L0 → L1 → L2 → L3 → L4):
a. State the constraint (biological/psychological hard limit)
b. State the violation (how current design fails this constraint)
c. Derive the REQUIREMENT (what design MUST do — not "should")
d. Label it R[n]
e. Include a **Citations:** line listing the 1–2 strongest supporting works from the framework's research base. Format: Author (Year) for [reason], with semicolons separating multiple entries. (Example below.)
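A minimal sketch of one layer's requirement entry in that shape — the artifact, numbers, and citations below are illustrative, not drawn from the framework's own corpus:

```markdown
**L0 (Cognitive Load)**
a. Constraint: Working memory holds roughly 4 chunks; every visible option occupies one.
b. Violation: The checkout exposes 14 fields and 3 upsell banners on a single screen.
c. REQUIREMENT: The design MUST present at most one decision per screen at the point of payment.
d. R1
e. **Citations:** Cowan (2001) for the ~4-chunk working-memory limit; Hick (1952) for decision latency growing with option count.
```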
List R1-R5. All non-negotiable. Solution must satisfy ALL simultaneously. Lower-layer requirements win conflicts.
What satisfies R1 AND R2 AND R3 AND R4 AND R5? If no single intervention covers all five, what minimal set does?
Check existing proposals against full requirement set. Note failures.
Requirements that no proposal satisfies = where the non-obvious solution lives.
End the output with a single italic line:
*Initial findings. Ralph Loop a requirement, cite further, switch to analyze / evaluate, or ask any follow-up to dig deeper.*
Non-negotiable. Single italic line, no bullet menu.
**Analysis Protocol (Mode 3)**

Rule Zero: Walk the 5 layers as descriptive lenses, not as constraints. Produce results, not recommendations. Do not slip into solve mode.
State the change, mechanism, or counterfactual being analyzed. Ground it in concrete terms. Avoid restating it as a problem to solve.
For each layer (Cognitive Load → First Impression → Processing Fluency → Perception Bias → Decision Architecture):

a. State the layer's psychological reality (the constraint)
b. Trace what the change does to perception or cognition at this layer
c. Stress-test the consequence against four dimensions (each consequence must surface findings from at least two of these, ideally all four):
   - User-population variation (novice, power, accessibility, device, demographics)
   - Adjacent infrastructure (extensions, APIs, automation, dependent products, integrations, regulated surfaces)
   - Precedents (has a similar change been attempted elsewhere? what was the outcome?)
   - Time structure (immediate / short / long: seconds-minutes, days-weeks, months-years)
d. Output: mechanism prose plus a "What happens:" prediction line that incorporates the stress-test findings.
Produce one consequence per layer. Five layer-cascade consequences are non-negotiable.
Does the change push the layer in only one direction, or in both? When a layer pushes both directions, fold the trade-off into the consequence as a sub-finding (not as a separate consequence).
Example: phishing risk in a "remove URL bar" analysis is bidirectional. Volume drops; per-incident severity rises. Net depends on which dominates. Do not assert one-way effects when trade-offs exist.
What happens when consequences compound across layers? Three patterns to check by default: social aggregation, lock-in asymmetry, and ecosystem cascade.
Other compounds may emerge through the cascade. Look for them. Tag with [Cross-layer + Social Aggregation], [Cross-layer], or other appropriate cross-cutting label. Letter-label them (A, B, C) to distinguish from the numbered layer cascade.
Each consequence (numbered or lettered) has four structural elements:
1. Heading: `## N. [Title] [Layer]` (or `## N. [Title] [Layer A × Layer B]` for compounds)
2. An italic, concrete, plain-language subtitle
3. Mechanism prose plus a "What happens:" prediction line
4. **Citations:** Author (Year) for [reason]; Author (Year) for [reason]. List the 1–3 strongest supporting works from the framework's research base. When making an applied prediction without a direct citation, write: "Applied prediction; no direct framework citation. Closest adjacent: [related citation]."

Subtitle and Citations are both non-negotiable.
Layer cascade: 5 numbered consequences in cascade order, four-element structure above.
Integrative compounds: separate section, letter-labeled (A, B, C, ...), [Cross-layer ...] heading tag, same four-element structure.
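A skeleton of one layer-cascade consequence in that four-element shape — the title, subtitle, prediction, and citation are illustrative placeholders:

```markdown
## 1. Navigation Becomes Recall, Not Recognition [L0 (Cognitive Load)]
*Users must now remember where they are instead of reading it off the screen.*

[Mechanism prose: what the change does to perception at this layer,
stress-tested across user populations, adjacent infrastructure,
precedents, and time structure.]

**What happens:** Novices slow immediately; power users adapt within days;
assistive-technology users lose a stable landmark for far longer.

**Citations:** Nielsen (1994) for recognition-over-recall as a usability heuristic.
```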
Closing cue: End the output with a single italic line:
*Initial findings. Ralph Loop a consequence, cite further, switch to solve / evaluate, or ask any follow-up to dig deeper.*
Single line, italic, on its own. Non-negotiable. No menu expansion.
**Anti-Patterns**

| Anti-Pattern | Why It Fails |
|---|---|
| Solution-first ("we need a sidebar") | Contaminates requirement space |
| Competitive copying ("Miro does X") | Domain-driven, not perception-driven |
| Engineering-first ("what's fastest?") | Optimizes effort, not perception |
| Checklist mode (evaluating post-hoc) | Refines but doesn't generate |
| Skipping layers | Dependency stack violation |
| Soft requirements ("should" not "must") | Lets solutions slide past gaps |
| Slipping into solve mode during analysis | Produces recommendations when the question wanted results; descriptive contract violated |
| One-way effects when trade-offs exist | Misses bidirectional layer behavior; phishing-as-uniform-rise example |
| Treating layers as siloed in analysis | Misses integrative compounds (backlash, lock-in asymmetry, ecosystem cascade) |
| Shallow per-consequence depth in analysis | Single-sentence consequences read correct but bland; the depth bar is multi-paragraph, stress-tested findings |
| Heading without italic concrete subtitle | Abstract titles are not scannable; readers without framework context need the plain-language line to follow the analysis |
| Missing Citations line per consequence or requirement | Output reads as opinion when it should anchor to evidence; framework's research base is its credibility |
| Closing cue expanded into a bullet menu | Loads WM at the most depleted moment of the output; invites survey-driven format creep; closing cue is one italic line, never a list |
Append an entry to references/insights-log.md after every PFD derivation or evaluation:
```markdown
### YYYY-MM-DD — [Brief description of what was analyzed]

**Type:** url | text | image | html | css | copy | directory
**Domain:** [e.g., SaaS landing, ecommerce PDP, email, portfolio, dashboard]
**Key finding:** [The non-obvious thing PFD surfaced — one sentence]
**Layer(s):** [L0/L1/L2/L3/L4]
**Promote?:** yes | maybe | no
**Notes:** [Cross-references, patterns, connections to prior findings]
```
Do this even for routine analyses. The insight is sometimes in what PFD struggled with, what layer was ambiguous, or what the protocol couldn't handle cleanly.
Learnings are stored as atom files under references/learnings/, organized by primary layer. Lightweight index + on-demand atom reads, never eager monolith load.
Query sequence:
1. Scan `references/learnings/_index.md` — lightweight, one line per learning (scales to 1000+)
2. Read only the atom files you need
3. For machine queries, use `references/learnings/_search.json` instead of the markdown index

Cheapest first: index scan → targeted atom read → legacy monolith fallback.
Do NOT eager-load accumulated-learnings.md — that monolith is the pre-migration fallback for L1–L19 only. Read it when the index points at pre-migration content, or as a last-resort fallback for broad context. After Phase 2 bulk migration, it retires entirely.
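A minimal sketch of a machine-side query against the index. It assumes `_search.json` is a JSON array of atom records carrying at least `id`, `title`, `layer`, and `tags` keys (mirroring the frontmatter fields); the real schema may differ:

```python
import json
from pathlib import Path

# ASSUMPTION: _search.json is a flat JSON array of atom records with
# "id", "title", "layer", and "tags" keys. Verify against the real file.
SEARCH_INDEX = Path("references/learnings/_search.json")

def find_atoms(layer: str | None = None, tag: str | None = None) -> list[dict]:
    """Return atom records matching an optional layer and/or tag filter."""
    atoms = json.loads(SEARCH_INDEX.read_text(encoding="utf-8"))
    return [
        atom for atom in atoms
        if (layer is None or atom.get("layer") == layer)
        and (tag is None or tag in atom.get("tags", []))
    ]

if __name__ == "__main__":
    # Cheapest-first in practice: scan the index, then read only matching atoms.
    for atom in find_atoms(layer="L2"):
        print(atom["id"], "-", atom["title"])
```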
Adding a new learning (when Promote?: yes in insights-log):
1. Create `references/learnings/{layer}/l<NNN>-<slug>.md` with YAML frontmatter (required: id, title, layer, date, contributor, source; optional: secondary_layers, updated, related, tags)
2. Run `python scripts/gen-pfd-index.py` from the repo root; it regenerates `_index.md` + `_search.json` together
3. Update `.claude-plugin/plugin.json` as needed
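An illustrative atom file following those field rules — the id, values, and body are placeholders, not a real learning:

```markdown
---
id: l030
title: Placeholder learning title
layer: L2
date: 2026-04-20
contributor: Jane Doe
source: insights-log 2026-04-20
secondary_layers: [L0]
tags: [example, fluency]
---

One or two paragraphs stating the learning, the evidence behind it,
and the conditions under which it applies.
```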
Load these when relevant:

- `references/learnings/` — Sharded accumulated learnings (29 atoms across L0/L1/L2/L3/L4/cross/meta). Query via `_index.md` (human) or `_search.json` (machine). Lazy-loaded.
- `references/accumulated-learnings.md` — Pre-shard monolith retained for backward compatibility. Do not eager-load; use the sharded loader above.
- `references/citation-standards.md` — Required when presenting PFD publicly or to clients. Distinguishes hard science from practitioner synthesis.
- `references/mvs-psychology-reference.md` — Per-layer psychology corpus with primary research citations. Every PFD violation traces to a citation here.
- `references/pfd-spatial-extension.md` — Load when applying PFD to canvas tools, spatial interfaces, or layout systems (Figma, Miro, Notion, Cognograph).
- `references/practitioner-corrections.md` — Practitioner correction layer. Empty by default; populate with audit corrections to lift evaluation accuracy from LLM-baseline to practitioner-grade.

Full framework: `framework/PERCEPTION-FIRST-DESIGN.md` (~870 lines, ~100 citations).
When performing a Mode 1 evaluation, load the corpus for deeper analysis. Mode 2 (solve) optionally loads design-system profiles and practitioner corrections for constraint-setting. Mode 3 (analyze) operates on natural-language input and does not require corpus loading.
For Mode 1 evaluation:
Load `corpus/core/tier2-prompt-template.md` for the evaluation protocol. The corpus provides:
- Heuristics (`corpus/heuristics/`) — specific detection criteria with psychology citations
- Design-system profiles (`corpus/design-systems/`) — framework-specific detection and fix prescriptions
- Worked examples (`corpus/worked-examples/`) — calibration anchors showing what each score range looks like

(Practitioner corrections are tracked in `references/practitioner-corrections.md` — single source of truth.)
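Assembled from the paths above, the corpus layout — only directories and files named in this document are shown:

```
corpus/
├── core/
│   └── tier2-prompt-template.md
├── heuristics/
├── design-systems/
└── worked-examples/
```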
When to use corpus vs. quick evaluation:
For Mode 2 (solve): `references/practitioner-corrections.md` + the design system profile for constraint-setting.

Version history:

- Added `/perception-first-design:analyze` (descriptive cascade for hypotheticals, mechanism questions, behavioral observations) and `/perception-first-design:all` (composite three-mode runner). Analysis Protocol formalized: 5 layer-cascade consequences with per-consequence stress-testing across four dimensions (user-population variation, adjacent infrastructure, precedents, time structure), plus integrative compounds (social aggregation, lock-in asymmetry, ecosystem cascade) and folded trade-offs. Mode detection extended: bare-skill activation now routes "what happens if X", "why is X working", and similar phrasing to Mode 3. SKILL.md "Two Modes" section renamed "Three Modes." Four anti-patterns added covering analyze-specific failure modes (slipping into solve mode, one-way effects when trade-offs exist, treating layers as siloed in analysis, shallow per-consequence depth). Spec calibrated against the URL-bar/Chrome thought experiment that produced an expert "wow" reaction in the v2.1 era; v0.6.0 solve compressed that question to a verdict, v0.7.0 analyze produces the descriptive cascade the question actually warranted.
- Restructured around `.claude-plugin/plugin.json` (repo root) with spec-canonical component layout (skills/, commands/, scripts/ at root — no nested skill/ subdirectory). Marketplace catalog at `.claude-plugin/marketplace.json` enables self-hosted distribution alongside official Anthropic submission. Commands renamed: pfd → solve (Mode 2 derivation), pfd-audit → evaluate (Mode 1 corpus-backed audit) under the /perception-first-design: namespace. Mode detection rules added — bare skill activation auto-routes between solve and evaluate based on input shape. 9 new atoms added (l021-l029) from the calibration campaign; `_index.md` regenerated to 29 atoms. mvs-psychology-reference.md added to skill references for skill-side quick lookup. License simplified to a single CC BY-SA 4.0 with practice exemption (was dual MIT + CC BY-SA). NOTICE file at repo root with trademark terms + permitted/prohibited use examples. .gitignore for OS noise + private working files. Path conventions clarified. Corrections live in references/practitioner-corrections.md (single source of truth).
- Sharded `accumulated-learnings.md` into atom files under `references/learnings/` organized by primary layer (L0/L1/L2/L3/L4/meta/cross). Generated `_index.md` (human-readable) + `_search.json` (machine-readable) from atom YAML frontmatter via `scripts/gen-pfd-index.py`. SKILL.md loader updated for lazy-load — index-first scan, atoms read on demand. Scales to 1000+ learnings without per-activation cost blowup. Monolithic file retained as a pointer for external-link compatibility. Framework v3.6 unchanged.
- `/pfd` command added.