This skill should be used when the user asks to "audit onboarding", "check setup experience", "test quickstart", "evaluate time-to-first-success", "review installation steps", "assess new user experience", or needs to evaluate how easy it is for a new user to go from zero to first meaningful result.
Assess how easy it is for a new user to achieve their first meaningful success with the solution. Onboarding is the most critical user experience moment — if it fails, nothing else matters.
Onboarding quality measures the friction between "I found this project" and "I got my first useful result." Every unnecessary step, unclear instruction, or silent failure is a potential abandonment point. The goal is minimum viable path to value.
Locate the documented entry point for new users:
If multiple paths exist, audit the most prominent one (typically README). Flag if the primary onboarding path is unclear or hard to find.
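The entry-point scan can be sketched as a simple file check. The candidate file names below are illustrative assumptions about common conventions, not a fixed standard the skill mandates:

```python
from pathlib import Path

# Candidate entry-point files, in rough order of prominence.
# These names are assumptions based on common repo conventions.
CANDIDATES = [
    "README.md", "README.rst", "QUICKSTART.md",
    "docs/getting-started.md", "docs/quickstart.md",
    "INSTALL.md",
]

def find_entry_points(repo_root: str) -> list[str]:
    """Return documented onboarding entry points present in the repo."""
    root = Path(repo_root)
    return [name for name in CANDIDATES if (root / name).is_file()]
```

If the list comes back empty, or the README exists but buries setup instructions, that itself is a finding: the primary onboarding path is hard to find.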
Check that all requirements are explicitly stated:
Flag missing prerequisites that a new user would discover only by hitting an error. Check if version constraints are specific enough (e.g., "Node.js 18+" vs just "Node.js").
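A rough heuristic for the version-specificity check might look like the following. The regular expression is an assumption for illustration, not an exhaustive grammar for version constraints:

```python
import re

# Heuristic: a prerequisite is "specific enough" if it names a version
# number or bound. This pattern is an illustrative assumption.
VERSION_PATTERN = re.compile(r"\d+(\.\d+)*\s*\+?|>=?\s*\d+")

def has_version_constraint(prerequisite: str) -> bool:
    """True if a documented prerequisite pins or bounds a version."""
    return bool(VERSION_PATTERN.search(prerequisite))
```

Under this heuristic, "Node.js 18+" passes while a bare "Node.js" is flagged for the auditor to review.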
Follow each installation step as documented and evaluate:
Count the total number of steps from "clone/install" to "first run." Flag if it exceeds 5 steps for a simple tool or 10 for a complex system.
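The step-count check can be approximated by counting numbered list items in the installation section and applying the thresholds above. This is a sketch under the assumption that steps are written as markdown ordered-list items:

```python
import re

def count_numbered_steps(markdown: str) -> int:
    """Count top-level numbered list items (e.g. '1. ...') in a section."""
    return len(re.findall(r"(?m)^\s{0,3}\d+[.)]\s", markdown))

def exceeds_step_budget(n_steps: int, complex_system: bool = False) -> bool:
    """Apply the 5-step (simple tool) / 10-step (complex system) threshold."""
    return n_steps > (10 if complex_system else 5)
```

Steps hidden inside prose ("then you'll also need to...") won't be counted by this heuristic, so a manual walk-through remains the source of truth.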
Assess what happens when the user runs the solution for the first time:
Flag first-run experiences that:
Evaluate what happens when onboarding goes wrong:
Check for common onboarding failure scenarios and whether the solution handles them gracefully.
Assess how much a new user needs to understand before being productive:
Flag onboarding that requires understanding internal architecture, complex configuration, or multiple new concepts before the user can do anything useful.
Verify that complexity is introduced gradually:
Flag when advanced concepts are introduced too early in the onboarding path.
Check if onboarding addresses different user types:
At minimum, the end-user path must be clear. Flag if the primary onboarding assumes developer knowledge when the tool targets non-developers.
| Severity | Criteria | Example |
|---|---|---|
| Critical | Onboarding is broken or blocks new users | Install steps fail, missing critical prerequisite |
| Warning | Onboarding has friction or gaps | Unclear steps, missing platform coverage |
| Info | Minor onboarding improvements possible | Could add a quickstart example, minor wording |
For each finding, report:
[SEVERITY] Category: Brief description
Step: Which onboarding step (or "Prerequisites", "First Run", etc.)
Issue: What the problem is
Impact: How this affects new users
Fix: Specific action to resolve
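The finding format above can be modeled as a small structure, a minimal sketch assuming the field names shown in the template:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str     # "Critical" | "Warning" | "Info"
    category: str
    description: str  # brief description for the headline
    step: str         # e.g. "Prerequisites", "First Run", "Step 3"
    issue: str
    impact: str
    fix: str

    def render(self) -> str:
        """Render the finding in the report format defined above."""
        return (
            f"[{self.severity.upper()}] {self.category}: {self.description}\n"
            f"  Step: {self.step}\n"
            f"  Issue: {self.issue}\n"
            f"  Impact: {self.impact}\n"
            f"  Fix: {self.fix}\n"
        )
```

Keeping findings structured rather than free-form makes the summary metrics (counts per severity) trivial to derive.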
Include a summary metric:
Onboarding Steps: N (from clone to first success)
Prerequisites Documented: X/Y (Y inferred from actual requirements)
Estimated Time-to-First-Success: ~N minutes
Blockers Found: N critical, N warning
Start at 100, subtract per finding:
Score reflects the probability that a new user succeeds on their first attempt.
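The scoring rule can be sketched as follows. The per-severity deduction weights here are assumptions for illustration; the skill's actual weights may differ:

```python
# Deduction per finding, keyed by severity. These weights are
# illustrative assumptions, not values defined by the skill.
DEDUCTIONS = {"critical": 25, "warning": 10, "info": 2}

def onboarding_score(severities: list[str]) -> int:
    """Start at 100, subtract a deduction per finding, floor at 0."""
    score = 100
    for severity in severities:
        score -= DEDUCTIONS.get(severity.lower(), 0)
    return max(score, 0)
```

With these weights, a single critical blocker already drops the score to 75, reflecting how strongly a broken install step predicts first-attempt failure.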