Audit an existing skill against best practices: structure, frontmatter, invocation control, description quality, and anti-patterns. Use when verifying a skill before deployment.
From interskill. Install with `npx claudepluginhub mistakeknot/interagency-marketplace --plugin interskill`. This skill uses the workspace's default tool permissions.
Verify a skill follows best practices and the official specification.
Given a skill path (SKILL.md or skill directory), check each item below. Report findings as PASS/WARN/FAIL.
Checklist:

- Frontmatter enclosed in `---` delimiters
- `name` field present (lowercase, hyphens, max 64 chars)
- `description` field present and specific (includes trigger keywords)
- Run `gen-skill-compact.sh --check <dir>` to verify freshness
- `disable-model-invocation: true` set if the skill has side effects (deploy, commit, send)
- `user-invocable: false` set if the skill is background knowledge only
- `allowed-tools` declared if specific tools are needed
- `context: fork` used for isolation where appropriate

Report findings in this format:

Skill Audit: {skill-name}
──────────────────────────────
Structure: {N}/6 pass
Invocation Control: {N}/4 pass
Content Quality: {N}/5 pass
Anti-Patterns: {N}/4 pass
──────────────────────────────
Overall: {PASS | WARN (N issues) | FAIL (N issues)}
Issues:
- [WARN] Description could be more specific
- [FAIL] Missing disable-model-invocation for deploy workflow
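As a sketch of how the frontmatter portion of this audit could be automated: the field names and rules come from the checklist above, but the parser below is a minimal, dependency-free assumption (plain string splitting rather than a real YAML library), not the skill's actual implementation.

```python
import re

def audit_frontmatter(skill_md: str) -> list[tuple[str, str]]:
    """Return (level, message) findings for a SKILL.md's frontmatter."""
    findings = []
    m = re.match(r"^---\n(.*?)\n---", skill_md, re.DOTALL)
    if not m:
        return [("FAIL", "frontmatter not enclosed in --- delimiters")]

    # Naive key: value parsing; a real auditor would use a YAML parser.
    fields = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()

    name = fields.get("name", "")
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name) or len(name) > 64:
        findings.append(("FAIL", "name must be lowercase, hyphenated, max 64 chars"))

    desc = fields.get("description", "")
    if not desc:
        findings.append(("FAIL", "description field missing"))
    elif len(desc.split()) < 5:
        findings.append(("WARN", "description could be more specific"))

    # Side-effect verbs suggest the skill should opt out of model invocation.
    if re.search(r"\b(deploy|commit|send)\b", desc) and \
            fields.get("disable-model-invocation") != "true":
        findings.append(("FAIL", "missing disable-model-invocation for side-effect workflow"))

    return findings or [("PASS", "frontmatter checks passed")]

sample = """---
name: skill-auditor
description: Audits a skill for best practices before deployment
---
Body...
"""
print(audit_frontmatter(sample))  # → [('PASS', 'frontmatter checks passed')]
```

Aggregating the findings into the `{N}/6 pass` counters and the overall PASS/WARN/FAIL verdict is then a matter of tallying levels per category.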