# skill-checker
Validate, review, and improve Claude skills against Anthropic's official best practices from "The Complete Guide to Building Skills for Claude." Use when a user says "check my skill", "review this skill", "validate my SKILL.md", "is my skill good", "skill audit", "skill review", "proof check my skill", "grade my skill", or uploads a skill folder or SKILL.md for feedback. Also trigger when the user mentions skill quality, skill triggering issues, skill debugging, or wants to ensure a skill follows Anthropic's official guidelines before publishing or sharing.
Install with:

```shell
npx claudepluginhub lauraflorentin/skills-marketplace --plugin skill-checker
```

This skill uses the workspace's default tool permissions.
A comprehensive skill validation and review tool based on Anthropic's official "Complete Guide to Building Skills for Claude." This skill runs a multi-dimensional audit on any skill folder or SKILL.md and produces an actionable scorecard with specific fix recommendations.
Run this skill whenever a user:

- Says "check my skill", "review this skill", "validate my SKILL.md", "is my skill good", "skill audit", "skill review", "proof check my skill", or "grade my skill"
- Uploads a skill folder or a SKILL.md for feedback
- Mentions skill quality, skill triggering issues, or skill debugging
- Wants to ensure a skill follows Anthropic's official guidelines before publishing or sharing
The review runs in three phases:

1. Structural validation (pass/fail checklist)
2. Content quality grading (1–5 per dimension)
3. Pattern and category analysis, rolled up into the final scorecard
Read the skill folder and check every item below. Report pass/fail for each.
**Naming**
- Folder name is kebab-case (e.g., `my-cool-skill`)
- The file is named `SKILL.md` exactly, case-sensitive (not `skill.md`, `SKILL.MD`, etc.)
- No `README.md` inside the skill folder (all docs belong in `SKILL.md` or `references/`)

**Frontmatter**
- YAML frontmatter is delimited by `---` on both the opening and closing lines
- `name` field is present
- `name` uses kebab-case: no spaces, no capitals
- `name` matches the folder name
- `description` field is present
- `description` is under 1024 characters
- No angle brackets (`<` or `>`) anywhere in frontmatter (security restriction)

**Optional frontmatter fields**
- `license`: recognized license identifier (e.g., MIT, Apache-2.0)
- `compatibility`: 1–500 characters, describes environment requirements
- `metadata`: valid key-value pairs (suggested: author, version, mcp-server)

**Folder layout**
- `SKILL.md` at root (required)
- Only the expected subfolders: `scripts/`, `references/`, `assets/`
- If `scripts/` exists, files are executable code (Python, Bash, etc.)
- If `references/` exists, files are documentation (`.md`, `.txt`, etc.)
- If `assets/` exists, files are templates, fonts, icons, etc.

The description must answer two questions: WHAT the skill does and WHEN to use it.
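Several of the frontmatter rules above are mechanical enough to check in code. A minimal sketch in Python, where the helper name and input shape are illustrative assumptions (the limits themselves come straight from the checklist):

```python
import re

def check_frontmatter(fields: dict, folder_name: str) -> list[str]:
    """Return failure messages for the checklist's frontmatter rules."""
    failures = []
    name = fields.get("name")
    if name is None:
        failures.append("name field is missing")
    elif not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        failures.append("name is not kebab-case")
    elif name != folder_name:
        failures.append("name does not match the folder name")
    desc = fields.get("description")
    if desc is None:
        failures.append("description field is missing")
    elif len(desc) >= 1024:
        failures.append("description is not under 1024 characters")
    # Security restriction: no angle brackets anywhere in frontmatter.
    for key, value in fields.items():
        if "<" in str(value) or ">" in str(value):
            failures.append(f"angle bracket in frontmatter field {key!r}")
    return failures
```

A clean skill returns an empty list, e.g. `check_frontmatter({"name": "my-cool-skill", "description": "Checks skills."}, "my-cool-skill")`.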
Check for:

- Specific trigger phrases a user would actually say
- Concrete file types or formats the skill handles
- Negative triggers (when the skill should NOT activate)
- Appropriate pushiness (how proactively the skill volunteers itself)

Grade the description on the same 1–5 scale as the scorecard.

Red flags: a description that says WHAT but never WHEN, vague catch-all wording, or trigger phrases no user would actually type.
Skills should use a three-level loading system to minimize token usage: (1) the name and description are always in context, (2) the SKILL.md body loads only when the skill triggers, and (3) bundled files in `references/`, `scripts/`, and `assets/` load only when actually needed.
Check for:
- Detailed documentation lives in `references/`, not inline in `SKILL.md`
- Domain-specific material is split into separate reference files (e.g., `references/aws.md`, `references/gcp.md`) rather than all inlined

Grade:
Check for clarity and actionability:
- Exact commands are given where needed (e.g., `python scripts/validate.py --input ...`)

Check for composability:
Check for portability:
- Environment requirements are declared in the `compatibility` field

Identify which pattern(s) the skill uses and whether it applies them well:
Pattern 1: Sequential Workflow Orchestration
Pattern 2: Multi-MCP Coordination
Pattern 3: Iterative Refinement
Pattern 4: Context-Aware Tool Selection
Pattern 5: Domain-Specific Intelligence
Grade the pattern usage:
Classify the skill into one of the three standard categories:
Note: Skills can span categories. Identify the primary and any secondary categories.
Score each dimension 1–5:
| Dimension | 1 (Failing) | 3 (Adequate) | 5 (Excellent) |
|---|---|---|---|
| Structure | Missing SKILL.md or broken YAML | Valid structure, minor issues | Perfect folder layout, all conventions followed |
| Description | Missing or vague | Answers WHAT and WHEN | Specific triggers, file types, negative triggers, pushiness |
| Progressive Disclosure | Everything in one giant file | Some separation | Clean 3-level hierarchy, lean SKILL.md |
| Instruction Clarity | Vague, no examples | Clear steps, some examples | Imperative, examples, error handling, explains WHY |
| Error Handling | None | Basic error messages | Comprehensive troubleshooting, rollback, common issues |
| Composability | Conflicts with other skills | Works in isolation | Explicitly designed for multi-skill environments |
| Testing Readiness | No testable outputs | Some verifiable outputs | Clear success criteria, assertions possible |
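Summing the seven dimensions gives the X/35 overall score. One way to attach a rating label, sketched in Python; note the band boundaries here are an illustrative assumption, not taken from the guide:

```python
def overall_rating(scores: dict[str, int]) -> tuple[int, str]:
    """Sum the seven 1-5 dimension scores and attach a coarse label."""
    assert len(scores) == 7, "expected all seven rubric dimensions"
    total = sum(scores.values())
    # Illustrative bands -- tune to taste.
    if total >= 30:
        label = "Excellent"
    elif total >= 21:
        label = "Adequate"
    else:
        label = "Needs work"
    return total, label
```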
Present the results as:
```markdown
# Skill Review: [skill-name]

## Overall Score: X/35 ([rating])

### Structural Validation
✅ / ❌ [each check with pass/fail]

### Content Quality
| Dimension | Score | Notes |
|---|---|---|
| Description | X/5 | [specific feedback] |
| Progressive Disclosure | X/5 | [specific feedback] |
| Instruction Clarity | X/5 | [specific feedback] |
| Error Handling | X/5 | [specific feedback] |
| Composability | X/5 | [specific feedback] |
| Testing Readiness | X/5 | [specific feedback] |

### Pattern Analysis
Primary pattern: [pattern name]
Pattern execution: [grade]

### Category
Primary: [category]
Secondary: [category, if applicable]

### Top 3 Fixes (Prioritized)
1. **[Priority: High/Medium/Low]** — [specific, actionable fix]
2. **[Priority: High/Medium/Low]** — [specific, actionable fix]
3. **[Priority: High/Medium/Low]** — [specific, actionable fix]

### Description Rewrite (if score < 4)
Suggested improved description:
[rewritten description]
```
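The top of that report can be assembled mechanically from the check results. A minimal sketch; the helper name and its input shape are hypothetical:

```python
def render_header(skill_name: str, total: int, rating: str,
                  structural: list[tuple[str, bool]]) -> str:
    """Render the opening of the review report in the template's format."""
    lines = [
        f"# Skill Review: {skill_name}",
        f"## Overall Score: {total}/35 ({rating})",
        "### Structural Validation",
    ]
    for check, passed in structural:
        lines.append(f"{'✅' if passed else '❌'} {check}")
    return "\n".join(lines)
```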
If the user just wants a fast pass (e.g., "quick check my skill"), skip the full audit and run only the pass/fail structural checklist and the description grade.
This should take under 2 minutes and give the user enough to act on immediately.
If the skill folder is available on the filesystem, run the automated structural checks:
```shell
python scripts/validate_skill.py <path-to-skill-folder>
```
This script checks file naming, YAML parsing, folder structure, description length, and forbidden content. It outputs a JSON report that feeds into the scorecard.
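The shape of such a script might look like the following sketch. This is not the actual `scripts/validate_skill.py`, just an illustration of the case-sensitive naming and layout checks feeding a JSON report:

```python
import json
import sys
from pathlib import Path

ALLOWED_SUBDIRS = {"scripts", "references", "assets"}

def structural_report(skill_dir: str) -> dict:
    """Run file-naming and folder-layout checks; return a JSON-ready dict."""
    root = Path(skill_dir)
    checks = {}
    # Exact-case filename check: SKILL.md, not skill.md or SKILL.MD.
    checks["skill_md_present"] = any(p.name == "SKILL.md" for p in root.iterdir())
    checks["no_readme"] = not (root / "README.md").exists()
    unexpected = [p.name for p in root.iterdir()
                  if p.is_dir() and p.name not in ALLOWED_SUBDIRS]
    checks["only_expected_subfolders"] = unexpected == []
    return {"skill": root.name, "checks": checks, "passed": all(checks.values())}

if __name__ == "__main__":
    print(json.dumps(structural_report(sys.argv[1]), indent=2))
```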
The most common failures it flags are a mis-cased `SKILL.md` filename and missing `---` frontmatter delimiters.