Audits Claude Code skills by discovering files and references, building manifests, researching Anthropic docs via claudit, and scoring quality across 6 categories.
`npx claudepluginhub acostanzo/quickstop --plugin skillet`

This skill is limited to using the following tools:
Audits Claude Code skills by reading SKILL.md, references, scripts; evaluates 12 best-practice dimensions, scores 0-24, grades A-F, suggests top fixes, supports batch mode.
Audits Claude Code skills for quality, compliance, delegation patterns, and maintainability. Run after creating skills, before releases, or for periodic checks.
Audits skill quality for clarity, completeness, accuracy, and usefulness using weighted rubrics, scoring frameworks, and checklists. Provides recommendations for improvements.
You are the Skillet audit orchestrator. When the user runs `/skillet:audit`, execute this 4-phase workflow to assess and optionally improve a skill's quality. Follow each phase in order. Do not skip phases.
Parse $ARGUMENTS for the skill path or name.
Supported formats:
- `.claude/skills/my-skill` or `plugins/my-plugin/skills/my-skill`
- `my-skill` (search for it)

If $ARGUMENTS is empty, use AskUserQuestion: "Which skill should I audit? Provide a path or name."
If a full path was given, verify it exists. If only a name was given, search for it:
- `.claude/skills/$NAME/SKILL.md`
- `plugins/*/skills/$NAME/SKILL.md`

If not found, report the error and stop.
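If it helps to picture the lookup, here is a minimal Python sketch of the name-to-path search over those two glob patterns (the `resolve_skill` function name is illustrative, not part of the skill):

```python
from glob import glob
from pathlib import Path

def resolve_skill(name: str) -> Path | None:
    """Search the two supported locations for a skill named `name`."""
    patterns = [
        f".claude/skills/{name}/SKILL.md",
        f"plugins/*/skills/{name}/SKILL.md",
    ]
    for pattern in patterns:
        matches = sorted(glob(pattern))
        if matches:
            return Path(matches[0])
    return None  # not found: report the error and stop
```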
Based on the skill's location, determine:
- Whether it is a project skill (`.claude/`) or a plugin skill (`plugins/<name>/`)
- Whether the parent has sibling directories such as `agents/`, `hooks/`, etc.
- Whether a `.claude-plugin/plugin.json` manifest is present

Discover all files related to this skill:
- `<skill-dir>/references/*.md`
- `subagent_type` references, then Glob for `<parent>/agents/*.md`
- `<parent>/hooks/hooks.json`

For each file, get its line count via `wc -l` (batch in a single Bash call).
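A minimal sketch of the batched line count, assuming the discovered paths have already been collected into a list (the file names below are just the manifest example's entries):

```python
import subprocess

# Paths gathered during discovery (these mirror the manifest example below).
discovered = [
    "SKILL.md",
    "references/scoring-rubric.md",
    "../agents/research-agent.md",
    "../hooks/hooks.json",
]

# One `wc -l` invocation covering every file, instead of one call per file.
result = subprocess.run(["wc", "-l", *discovered], capture_output=True, text=True)
print(result.stdout)  # per-file counts plus a trailing "total" line
```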
Present the manifest:
```
=== SKILL MANIFEST ===
Skill: <name>
Location: <path>
Parent: <project or plugin name>
Files:
  SKILL.md                       XX lines
  references/scoring-rubric.md   XX lines
  ../agents/research-agent.md    XX lines
  ../hooks/hooks.json            XX lines
Total: N files, ~N lines
=== END MANIFEST ===
```
Tell the user:
Phase 1: Building expert context from official Anthropic documentation...
Invoke `/claudit:knowledge ecosystem` to retrieve ecosystem knowledge.
If the skill runs successfully (it outputs an `=== CLAUDIT KNOWLEDGE: ecosystem ===` block):
- Read `${CLAUDE_PLUGIN_ROOT}/references/skill-spec-baseline.md` for skill-authoring-specific detail (frontmatter field semantics, variable substitution rules) that the ecosystem cache may not cover at full depth

If the skill is not available (claudit is not installed: the invocation produces an error, is not recognized as a command, or produces no knowledge output):
Use the Task tool:
description: "Research skill spec docs"subagent_type: "skillet:research-skill-spec"prompt: "Build expert knowledge on Claude Code skill, agent, and hook authoring. Read the baseline from ${CLAUDE_PLUGIN_ROOT}/references/skill-spec-baseline.md first, then fetch official Anthropic documentation for skills, sub-agents, and hooks. Return structured expert knowledge."Save the research agent's output as the Expert Context.
Tell the user:
Expert context assembled. Dispatching audit agent...
Read the scoring rubric first:
- `${SKILL_ROOT}/references/scoring-rubric.md`

Use the Task tool:
description: "Audit skill quality"subagent_type: "skillet:audit-skill"prompt: Include all of:
The agent will read all skill files and return structured findings.
Read `${SKILL_ROOT}/references/scoring-rubric.md` if not already in context.
Apply the rubric to the audit findings for each of the 6 categories.
Categories and their weights:
| Category | Weight |
|---|---|
| Frontmatter Correctness | 15% |
| Instruction Quality | 25% |
| Agent Design | 15% |
| Directory Structure | 15% |
| Over-Engineering | 15% |
| Reference & Tooling | 15% |
Scope-aware scoring:
overall = sum(category_score * category_weight for all categories)
Look up the letter grade from the rubric's grade threshold table.
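To make the arithmetic concrete, here is a sketch of the weighted overall score and grade lookup. The weights mirror the table above, but the grade cutoffs shown are placeholders for the rubric's actual threshold table:

```python
WEIGHTS = {
    "Frontmatter Correctness": 0.15,
    "Instruction Quality": 0.25,
    "Agent Design": 0.15,
    "Directory Structure": 0.15,
    "Over-Engineering": 0.15,
    "Reference & Tooling": 0.15,
}

def overall_score(category_scores: dict[str, float]) -> float:
    """Weighted sum of per-category scores, each on a 0-100 scale."""
    return sum(category_scores[name] * weight for name, weight in WEIGHTS.items())

def letter_grade(score: float) -> str:
    """Placeholder cutoffs; substitute the rubric's real threshold table."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```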
Compile a ranked list of recommendations from audit findings:
```
╔════════════════════════════════════════════════════════╗
║ SKILLET QUALITY REPORT                                  ║
║ Skill: <name>  |  Overall: XX/100   Grade: X (Label)    ║
╚════════════════════════════════════════════════════════╝

Frontmatter          ████████████████████░░░░░  XX/100  X
Instruction Quality  ████████████████████░░░░░  XX/100  X
Agent Design         ████████████████████░░░░░  XX/100  X
Directory Structure  ████████████████████░░░░░  XX/100  X
Over-Engineering     ████████████████████░░░░░  XX/100  X
Reference & Tooling  ████████████████████░░░░░  XX/100  X
```
For the visual bars, use █ for filled and ░ for empty. Scale to 25 characters total.
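A small sketch of that bar rendering, assuming category scores are on a 0-100 scale:

```python
def score_bar(score: float, width: int = 25) -> str:
    """Render a fixed-width bar: █ for filled, ░ for empty."""
    filled = round(width * score / 100)
    return "█" * filled + "░" * (width - filled)

print(f"{'Instruction Quality':<21}{score_bar(82)}  82/100")
```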
After the score card, present:
- For a full configuration audit, try `/claudit`
- Install claudit for cached research that speeds up skillet runs

Use AskUserQuestion with `multiSelect: true` to let the user choose which recommendations to apply. Group by priority.
Format each option as:
Include a "Skip — no changes" option.
For each selected recommendation:
After implementing fixes:
```
Score Delta:
  Frontmatter          65 → 85  (+20)
  Instruction Quality  70 → 88  (+18)
  Overall              72 → 84  (+12)   Grade: C → B
```
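If useful, a tiny sketch of producing those delta lines from before/after scores (the `delta_line` helper is hypothetical, not part of the skill):

```python
def delta_line(label: str, before: int, after: int) -> str:
    """Format one 'before → after (+delta)' row of the score delta."""
    sign = "+" if after >= before else ""
    return f"{label:<21}{before} → {after}  ({sign}{after - before})"

print(delta_line("Frontmatter", 65, 85))          # Frontmatter          65 → 85  (+20)
print(delta_line("Instruction Quality", 70, 88))  # Instruction Quality  70 → 88  (+18)
```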