# PFD Analyze: Descriptive Analysis

Analyzes hypothetical changes, mechanisms, or behavioral patterns with Perception-First Design. Walks 5 perceptual layers to predict cascading consequences and trade-offs.

Install: `npx claudepluginhub skovalik/perception-first-design --plugin perception-first-design`
Run a full PFD analysis on a hypothetical change, mechanism question, or behavioral observation. Produces results, not recommendations.
## Input
$ARGUMENTS. A hypothetical ("what happens if X"), a mechanism question ("why does X work"), an observation to explain ("users say X but do Y"), or any descriptive design question.
## When to use this vs. solve / evaluate
- **analyze** produces predictions and explanations. Descriptive. "What would happen, why does it work, what is actually going on."
- **solve** produces requirements and a synthesized solution. Prescriptive.
If the question is "what happens if we remove X" or "why is X working" or "users do Y, why" then it is analyze.
If no question is provided, ask: "What hypothetical, mechanism, or behavioral pattern should I analyze with PFD?"
Load the PFD skill: Read skills/pfd/SKILL.md.
Then execute the Analysis Protocol strictly. Do NOT compress to a single verdict. Do NOT slip into recommendations. Walk the layers as predictive lenses.
State the change, mechanism, or counterfactual being analyzed. Ground it in concrete terms. Avoid restating the question as a problem to solve.
For each layer (Cognitive Load, First Impression, Processing Fluency, Perception Bias, Decision Architecture):
a. State the layer's psychological reality (the constraint).
b. Trace what the change does to perception or cognition at this layer.
c. Stress-test the consequence against four dimensions. Each consequence must surface findings from at least two of these dimensions, ideally all four.
d. Output: mechanism prose plus a "What happens:" prediction line that incorporates the stress-test findings.
Layer headings format: `## N. [Short title] [Layer Name]`. For cross-cutting consequences: `## N. [Short title] [Layer A × Layer B]`.
The five layer-cascade consequences are non-negotiable. Always produce one per layer, in cascade order.
For each consequence, ask: does the change push the layer in only one direction, or in both? When a layer pushes both directions, fold the trade-off into the consequence as a sub-finding, not as a separate consequence.
Example: phishing risk in a "remove URL bar" analysis is bidirectional. Volume drops (attackers lose their spoofing surface). Per-incident severity rises (users have no surface to verify identity). Net depends on which dominates. The framework should not assert one-way effects when trade-offs exist.
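A minimal sketch of how such a bidirectional trade-off might be folded into a consequence body as a sub-finding (shape only; the wording and the bolded label are illustrative, not prescribed by the framework):

```markdown
**Trade-off (bidirectional):** Phishing volume drops — attackers lose their
URL-spoofing surface. Per-incident severity rises — users have no surface left
to verify identity. Net effect depends on which force dominates; state both
directions rather than asserting a one-way effect.
```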
What happens when consequences compound across layers? Check three patterns by default; more patterns may emerge through the cascade.
Add integrative consequences in their own section after the 5 layer-cascade consequences. Tag with `[Cross-layer + Social Aggregation]`, `[Cross-layer]`, etc. Letter-label them (A, B, C) to distinguish them from the numbered layer cascade.
Each consequence (numbered or lettered) MUST have four structural elements in this order:
1. **Heading.** `## N. [Abstract analytical title] [Layer Name]` for layer-cascade consequences (numbered 1-5); `## A. [Abstract analytical title] [Cross-layer ...]` for integrative compounds (letter-labeled A, B, C). The title carries the analytical framing. Cross-cutting consequences use `[Layer A × Layer B]` notation.
2. **Italic subtitle.** A plain-language, one-line summary of the consequence.
3. **Mechanism prose plus a "What happens:" prediction line.**
4. **Citations line** anchoring the finding to the framework evidence base (framework/PERCEPTION-FIRST-DESIGN.md). When a consequence is an applied prediction without direct framework citation, state that transparently: "Citations: Applied prediction; no direct framework citation. Closest adjacent: [related citation]."

The italic subtitle is non-negotiable. The Citations line is non-negotiable. The subtitle gives plain-language scanability; citations anchor findings to the evidence base. Three layers of meaning are visible at a scan: heading (framing), italic line (plain-language summary), bracket tag (framework anchor).
Example shape (consequence 1 from a "remove URL bar from Chrome" analysis):
## 1. Working memory hits both axes simultaneously [Cognitive Load]
*Users have to hold "where am I" and "what tabs were open" in their heads instead of in the UI.*
The URL bar is a memory prosthesis. The user's working memory does not have to hold "where am I" because the bar holds it. Tabs are a parallel-task prosthesis: each open tab externalizes "this thing I was also working on." Remove both and both move into the head.
For light users this is invisible for weeks. For power users running 5+ context-switching tasks, working memory lands at the 4-5 chunk Cowan ceiling almost immediately.
**What happens:** Light users do not notice. Power users hit the ceiling within a session and start dropping context. Frustration starts at 30 minutes. Cohort retention drops from the power-user end first.
**Citations:** Cowan (2001, 2010) for working memory capacity ~3-5 chunks; Hassin et al. (2009) for WM cost of unconscious processing.
The italic line ("Users have to hold...") is the subtitle. The Citations line at the end anchors the finding. The reader can scan the subtitle and the "What happens" line, treat the body as optional depth, and trust the framework anchor via the citations.
Output structure:
Layer cascade: 5 numbered consequences in cascade order, each with the four-element structure above.
Integrative compounds: separate section after the layer cascade, letter-labeled (A, B, C, ...), each with the same four-element structure but using [Cross-layer ...] tags in the heading.
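Putting the pieces together, the full output might be shaped like the following skeleton (titles, layer pairings, and the number of integrative compounds are placeholders; an actual run fills them from the cascade):

```markdown
## 1. [Abstract analytical title] [Cognitive Load]
*Plain-language subtitle.*
Mechanism prose.
**What happens:** Prediction incorporating the stress-test findings.
**Citations:** [Framework citation, or "Applied prediction; no direct framework citation."]

## 2.-5. [Same four-element structure for First Impression, Processing Fluency,
Perception Bias, and Decision Architecture, in cascade order]

## A. [Abstract analytical title] [Cross-layer + Social Aggregation]
*Plain-language subtitle.*
Mechanism prose.
**What happens:** ...
**Citations:** ...

*Initial findings. Ralph Loop a consequence, cite further, switch to solve / evaluate, or ask any follow-up to dig deeper.*
```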
Closing cue: End the entire output with a single italic line on its own:
*Initial findings. Ralph Loop a consequence, cite further, switch to solve / evaluate, or ask any follow-up to dig deeper.*
The cue signals the output is not exhaustive and names four follow-up paths in passing. No bullet menu, no scaffolding, no "Say: [phrase]" examples. Single italic line. Non-negotiable.
Log the run per the SKILL.md Insight Log protocol. Mark layer involvement, capture non-obvious findings (especially integrative compounds and bidirectional trade-offs), and note `Promote?` yes if the analysis surfaces a candidate learning that generalizes beyond this specific question.
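As a rough illustration (the exact fields are defined by the SKILL.md Insight Log protocol, not here), a log entry might look like:

```markdown
- Run: analyze — "[question as asked]"
- Layers involved: [e.g. Cognitive Load, Decision Architecture]
- Non-obvious findings: [integrative compounds, bidirectional trade-offs]
- Promote?: yes/no — [candidate generalizable learning, if any]
```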