Audit all skills and commands across installed plugins for quality. Supports Quick Scan (changed skills only) and Full Stocktake modes with per-skill pass/fail verdicts.
From cc-setup. Install: `npx claudepluginhub krzemienski/cc-setup --plugin cc-setup`. This skill uses the workspace's default tool permissions.
Audits all Claude skills across global and plugin paths. Two modes: Quick Scan (changed skills only) and Full Stocktake (complete review). Outputs a summary report with verdicts per skill.
| Path | Description |
|---|---|
| `~/.claude/skills/` | Global skills |
| `~/.claude/plugins/*/skills/` | Plugin skills |
| `{cwd}/.claude/skills/` | Project skills (if present) |
| `{cwd}/skills/` | Plugin source skills (if present) |
List found paths at the start of Phase 1.
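Path discovery can be sketched in POSIX shell; `list_skills` is an illustrative helper name, not part of the skill. Calling it once with `$HOME` and once with the project directory covers all four scoped locations:

```shell
# list_skills BASE — print every SKILL.md under BASE's skill locations.
# Directories that do not exist are skipped silently.
list_skills() {
  for dir in \
    "$1/.claude/skills" \
    "$1/.claude/plugins"/*/skills \
    "$1/skills"
  do
    [ -d "$dir" ] && find "$dir" -name SKILL.md
  done
  return 0
}

# Usage: list_skills "$HOME"; list_skills "$PWD"
```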
| Mode | Trigger | Duration |
|---|---|---|
| Quick Scan | `results.json` exists | 5–10 min |
| Full Stocktake | No `results.json`, or `full` argument | 20–30 min |

Results cache: `~/.claude/skills/skill-stocktake/results.json`
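The mode choice above can be sketched as a small shell helper (`pick_mode` is a hypothetical name; the skill itself makes this decision in-prompt rather than via a script):

```shell
# pick_mode CACHE_FILE [ARG] — echo "quick" when the results cache
# exists and no explicit "full" argument was given, else "full".
pick_mode() {
  if [ -f "$1" ] && [ "$2" != "full" ]; then
    echo quick
  else
    echo full
  fi
}

# Usage: pick_mode "$HOME/.claude/skills/skill-stocktake/results.json" "$1"
```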
In Quick Scan mode, read `results.json` and collect mtimes of all skill files; only skills modified since `results.json` was written need re-evaluation. Enumerate all SKILL.md files under the scoped paths. For each, extract:

- `name`, `description`, `triggers` from frontmatter
- Last modified timestamp (`date -r <file> -u +%Y-%m-%dT%H:%M:%SZ`)

Print the inventory table:
| Skill | Path | Last Modified |
|---|---|---|
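The per-skill extraction can be sketched in shell. This assumes GNU `date`, whose `date -r <file>` form the skill's own command uses, and a `name:` key in the frontmatter; `inventory_row` is an illustrative helper name:

```shell
# inventory_row FILE — print "name<TAB>path<TAB>mtime" for one SKILL.md.
inventory_row() {
  f="$1"
  # First "name:" line of the file, value only.
  name=$(sed -n 's/^name:[[:space:]]*//p' "$f" | head -n 1)
  # File mtime in UTC, matching the cache's timestamp format (GNU date).
  mtime=$(date -r "$f" -u +%Y-%m-%dT%H:%M:%SZ)
  printf '%s\t%s\t%s\n' "$name" "$f" "$mtime"
}
```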
Read each skill file and apply this checklist:
- [ ] YAML frontmatter present and valid (name, description, triggers)
- [ ] Description is specific and actionable (not generic)
- [ ] Triggers cover realistic invocation patterns
- [ ] Content is structured (headers, examples, or steps)
- [ ] No substantial overlap with another skill or CLAUDE.md rule
- [ ] Technical references are current (WebSearch if CLI flags / APIs present)
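The first, purely structural item on this checklist can be automated; a minimal sketch assuming POSIX shell and frontmatter delimited by `---` lines (`has_frontmatter` is a hypothetical helper, not part of the skill):

```shell
# has_frontmatter FILE — succeed only if FILE opens with a YAML block
# containing name, description, and triggers keys.
has_frontmatter() {
  [ "$(head -n 1 "$1")" = "---" ] || return 1
  # Everything between the opening and closing --- lines.
  fm=$(awk 'NR==1 {next} /^---$/ {exit} {print}' "$1")
  for key in name description triggers; do
    printf '%s\n' "$fm" | grep -q "^$key:" || return 1
  done
}
```

The remaining items (specificity, overlap, currency) are judgment calls and stay with the evaluating agent.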
Verdict per skill:
| Verdict | Meaning |
|---|---|
| Pass | Well-formed, unique, actionable |
| Improve | Worth keeping; specific change needed |
| Update | Technical content outdated |
| Retire | Low value, stale, or fully covered elsewhere |
| Merge | Substantial overlap; name the target |
Evaluation is a holistic AI judgment, not a numeric score. Each reason must be specific: name the exact problem (the overlapping skill, the outdated flag), not a generic remark.
Process up to 20 skills per subagent (Task tool, explore agent, model: opus). Save status: "in_progress" after each chunk; resume from first unevaluated skill if interrupted.
Print counts (Pass / Improve / Update / Retire / Merge) then a table:
| Skill | Verdict | Reason |
|---|---|---|
`~/.claude/skills/skill-stocktake/results.json` — fields: `evaluated_at` (UTC via `date -u +%Y-%m-%dT%H:%M:%SZ`), `mode`, `batch_progress` (`total`, `evaluated`, `status`), `skills` (keyed by name: `path`, `verdict`, `reason`, `mtime`).
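A sketch of the cache shape implied by this field list. All values are illustrative, and the `"complete"` status value is an assumption; the skill text only specifies `"in_progress"`:

```json
{
  "evaluated_at": "2025-01-01T00:00:00Z",
  "mode": "full",
  "batch_progress": { "total": 42, "evaluated": 42, "status": "complete" },
  "skills": {
    "skill-stocktake": {
      "path": "~/.claude/skills/skill-stocktake/SKILL.md",
      "verdict": "Pass",
      "reason": "Well-formed frontmatter; no overlap found",
      "mtime": "2024-12-31T10:00:00Z"
    }
  }
}
```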