Use when identifying where user mental models diverge from system reality, where moments of truth occur, and where information scent misleads — finds the structural causes of confusion
From service-design. Install with `npx claudepluginhub zemptime/zemptime-marketplace --plugin service-design`. This skill uses the workspace's default tool permissions.
Core principle: Every gap between what users believe is happening and what's actually happening is a design failure waiting to surface. Map the gaps, find the structural causes.
For each touchpoint or step, document:
| Column | What it captures |
|---|---|
| User belief | What users predict will happen, based on cues and prior experience |
| System reality | What actually happens — read routes, controllers, views, mailers |
| Consequence | Confusion, error, churn, support load — the cost of the gap |
| What would close it | The design change that aligns model to reality |
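The four columns above amount to one record per touchpoint. A minimal sketch in Python (field names and the example gap are illustrative, not part of the skill's template):

```python
from dataclasses import dataclass

@dataclass
class Gap:
    """One mental-model gap: a single row in the analysis table."""
    touchpoint: str       # where in the journey the gap occurs
    user_belief: str      # what users predict will happen
    system_reality: str   # what the code actually does
    consequence: str      # cost of the gap: confusion, error, churn, support load
    fix: str              # design change that aligns model to reality

# Hypothetical example: a "Save" button that publishes instead of drafting.
gap = Gap(
    touchpoint="editor save button",
    user_belief="'Save' stores a private draft",
    system_reality="POST /posts publishes immediately; no draft state exists",
    consequence="accidental publication, support tickets",
    fix="add an explicit draft state and rename the button",
)
print(gap.consequence)
```

Keeping belief and reality as separate fields forces the analysis to state both sides before proposing a fix.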
Code analysis reveals system reality directly. User beliefs require inference from labels, copy, affordances, and known interaction patterns. Separate confirmed from inferred.
Every crossing of the line of interaction is a moment of truth — an encounter with outsized power to shape trust. For each crossing, capture: expected mental model, likely emotional state, failure modes, and visible evidence shaping perception. Cross-reference the blueprint's interaction line to find these crossings.
Assess whether labels, cues, and surrounding context help users estimate the value of each path. Weak scent causes confusion and backtracking even when the right answer exists in the system. Misleading scent is worse — it builds false confidence. Rate each location: strong, weak, or misleading.
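The three-level rating also gives a natural triage order, since misleading scent outranks weak scent. A sketch, with hypothetical navigation labels:

```python
from enum import Enum

class Scent(Enum):
    STRONG = "strong"          # label clearly predicts the outcome
    WEAK = "weak"              # label is vague; users backtrack
    MISLEADING = "misleading"  # label predicts the wrong outcome

# Hypothetical ratings for two navigation labels.
ratings = {
    "Billing > Invoices": Scent.STRONG,
    "Settings > Advanced": Scent.WEAK,
}

# Misleading scent builds false confidence, so it sorts above weak
# scent when prioritizing fixes.
severity = {Scent.STRONG: 0, Scent.WEAK: 1, Scent.MISLEADING: 2}
worst_first = sorted(ratings, key=lambda k: severity[ratings[k]], reverse=True)
print(worst_first)
```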
Evaluate how information complexity is timed across three phases. Early (onboarding, first contact): is the user overwhelmed, or is the load appropriate? Mid (core task): is the information sufficient or noisy? Late (completion, edge cases): is the user starved of guidance or adequately supported? Flag phase mismatches — too much too early, too little too late.
This skill is most powerful after service-design:service-blueprint and service-design:empathy-analysis exist for the slice. It cross-references both: the blueprint provides the interaction line and backstage structure, the empathy analysis provides the emotional and informational layer. Can run standalone but yields less precise results.
Tag every claim: [confirmed] (verified in code, docs, or data), [hypothesis] (inferred from patterns or analogous systems), or [gap] (unknown). Mental model gaps derived from code analysis are [confirmed] on the system-reality side; the user-belief side is almost always [hypothesis] without direct research. Note what would confirm each hypothesis.
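The tagging rule can be enforced mechanically: every claim must lead with exactly one of the three tags. A small sketch (the claims shown are invented examples):

```python
TAGS = ("[confirmed]", "[hypothesis]", "[gap]")

# Hypothetical claims illustrating the three evidence levels.
claims = [
    "[confirmed] Checkout POSTs to /orders and charges immediately.",
    "[hypothesis] Users expect a review step before the charge.",
    "[gap] No data on how often users abandon the payment form.",
]

def tag_of(claim: str) -> str:
    """Return the evidence tag a claim starts with; raise if untagged."""
    for tag in TAGS:
        if claim.startswith(tag):
            return tag
    raise ValueError(f"untagged claim: {claim!r}")

print([tag_of(c) for c in claims])
```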
Write to docs/service-design/<slice>/gap-analysis.md using the template at service-design/templates/gap-analysis.md. Populate Mental Model Gaps, Moments of Truth, Information Scent, Progressive Disclosure Check, and Priority Gaps sections.