This skill should be used when the user asks to "audit text UX", "check error messages", "review microcopy", "assess user-facing text", "check tone consistency", "find jargon leakage", "audit messaging quality", or needs to evaluate the quality and consistency of all user-facing text in a solution.
From the solution-audit plugin. Install with `npx claudepluginhub nsalvacao/nsalvacao-claude-code-plugins --plugin solution-audit`. This skill uses the workspace's default tool permissions.
references/tone-patterns.md
Audit the quality of all user-facing text — the voice and communication style of the solution. Every message, prompt, error, and label is a micro-interaction that shapes user perception and productivity.
Textual UX measures how well a solution communicates with its users through text. Good textual UX means clear errors, consistent tone, appropriate verbosity, and zero internal jargon leakage. Text is often the only interface between a tool and its user.
Scan the codebase to catalog all text that reaches users:
Use Grep to find common output patterns:
`console.log`, `console.error`, `print`, `logging.`, `log.`, `raise`, `throw`, `Error(`, `panic`

For each error message found, evaluate:
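The scan step above can be sketched as a small script. The file extensions and the exact pattern list are illustrative assumptions; adapt both to the codebase under audit.

```python
import re
from pathlib import Path

# Output-producing calls whose string arguments typically reach users.
# This pattern list is an assumption, not exhaustive.
OUTPUT_PATTERNS = re.compile(
    r"console\.(log|error)|print\(|logging\.|\blog\.|\braise\b|\bthrow\b|Error\(|\bpanic\b"
)

def catalog_user_facing_text(root="."):
    """Return (path, line_number, line) for every candidate output line."""
    hits = []
    for path in Path(root).rglob("*"):
        # Assumed set of source extensions; extend per project.
        if path.suffix not in {".py", ".js", ".ts", ".go"}:
            continue
        for n, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if OUTPUT_PATTERNS.search(line):
                hits.append((str(path), n, line.strip()))
    return hits
```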
Flag error messages that:
Quality template for good errors:

```
Error: [What happened] — [Why]
[Context: file, input, value]
Hint: [How to fix]
```
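Applied in code, the template looks like this. The file name, loader function, and failure scenario are hypothetical, chosen only to show the before/after contrast.

```python
def load_config(path):
    """Hypothetical loader showing the error-message template in use."""
    # Jargon-only version a non-developer cannot act on:
    #   raise ValueError("deserialization failed: EOF in frame 0x2F")
    # Template-following version:
    raise ValueError(
        f"Error: could not read the config file — it ended unexpectedly\n"
        f"Context: {path}, line 14\n"
        f"Hint: check that the file was saved completely, or restore it from backup"
    )
```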
Evaluate the overall voice across all text:
Flag tone shifts:
Find internal jargon exposed to end users:
Flag any text where a non-developer user would not understand a term without reading the source code.
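One lightweight way to surface jargon leakage is a term scan over collected user-facing strings. The term list below is an assumption to be adapted per project, not a canonical vocabulary.

```python
import re

# Terms that usually mean nothing to end users; extend per project.
JARGON = ["mutex", "deserialize", "null pointer", "segfault",
          "stack trace", "ENOENT", "heap", "foreign key"]

def find_jargon(text):
    """Return the jargon terms that appear in a user-facing string."""
    return [t for t in JARGON
            if re.search(rf"\b{re.escape(t)}\b", text, re.IGNORECASE)]
```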
Assess whether text output is appropriately verbose:
Check for:
Verify that different message types are distinguishable:
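A common convention for making message types distinguishable is a consistent prefix per type. This is one option among several (color, stream separation), sketched minimally:

```python
# Consistent prefixes make info, warnings, and errors scannable at a glance.
PREFIXES = {"info": "info:", "warning": "warning:", "error": "error:"}

def format_message(kind, msg):
    """Prefix a message so its type is visible without reading the body."""
    return f"{PREFIXES[kind]} {msg}"
```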
Check for common textual bugs:
For destructive or important operations:
| Severity | Criteria | Example |
|---|---|---|
| Critical | Text actively confuses or misleads users | Error message suggests wrong fix, jargon-only errors |
| Warning | Text has quality or consistency issues | Tone inconsistency, missing error context |
| Info | Minor text improvements possible | Grammar fix, better word choice |
For each finding, report:

```
[SEVERITY] Category: Brief description
File: source-file:line
Text: "the actual user-facing text"
Issue: What is wrong with it
Suggestion: Improved version
```
Start at 100, subtract per finding:
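The deduction weights below are illustrative assumptions; the actual per-severity values are defined by the audit rubric.

```python
# Assumed per-finding deductions; adjust to the rubric in use.
DEDUCTIONS = {"Critical": 15, "Warning": 5, "Info": 1}

def textual_ux_score(findings):
    """findings: list of severity strings. Score floors at 0."""
    return max(0, 100 - sum(DEDUCTIONS[s] for s in findings))
```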
Score reflects how well the solution communicates with its users.