Review and improve existing Claude Code skills. Use when evaluating skill quality, auditing skill collections, asking "review my skill", "is this skill effective", "improve skill description", or maintaining a skill library.
From dev-skills. Install: `npx claudepluginhub igor1309/skills --plugin dev-skills`. This skill uses the workspace's default tool permissions.
Announce: "I'm using the skill-review skill to evaluate skill quality."
Modern agents (Opus 4.6, Sonnet 4.5) are highly capable reasoners. Skills should provide context the agent DOESN'T already have — project-specific knowledge, non-obvious conventions, fragile sequences. They should NOT explain general concepts, babysit through obvious steps, or micromanage decisions the agent can make better in context.
The deletion test: if you removed a sentence and the agent would still do the right thing, that sentence is noise.
The preservation test: if a section contains specific values (country codes, error strings, exact enum cases, character pairs, file paths), assume it encodes a debugging discovery until proven otherwise. Domain knowledge looks like noise to outsiders — the author had a reason. Your job is to find it or ask, not assume it's absent.
If you lack domain context to judge most of a skill's content, say so upfront. Scope the review to what you can evaluate — structure, description quality, staleness — and flag the rest as beyond your confidence. A partial honest review beats a complete fabricated one.
Does each section teach something the agent doesn't already know? Noise candidates:
Every token competes with conversation history for context window space.
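That cost is easy to make concrete. A minimal sketch, assuming the common ~4-characters-per-token approximation for English prose (the real count depends on the model's tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

# A noise candidate: general knowledge the agent already has.
section = (
    "Git is a distributed version control system. "
    "To commit your changes, run git commit."
)
print(estimate_tokens(section))  # tokens charged on every skill activation
```

Summing this over every section marked noise gives a rough number for how much conversation history the skill displaces each time it activates.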
Escalation rule: when you can't explain WHY a section exists — it looks like a random detail — that's a signal to mark it "ask user" rather than "noise." Intentional redundancy (e.g., a critical warning repeated at two decision points) is a pattern, not a mistake.
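The decision order above (preservation test first, then the deletion test, with escalation to "ask user" when intent is unclear) can be sketched as follows. The three boolean judgments are hypothetical names for the reviewer's calls, not part of any skill API:

```python
from enum import Enum

class Mark(Enum):
    SIGNAL = "signal"
    NOISE = "noise"
    ASK_USER = "ask user"

def classify(teaches_something_new: bool,
             has_specific_values: bool,
             purpose_is_clear: bool) -> Mark:
    # Preservation test: specific values (paths, error strings, enum
    # cases) are assumed to encode a debugging discovery until the
    # author confirms otherwise.
    if has_specific_values and not purpose_is_clear:
        return Mark.ASK_USER
    if teaches_something_new:
        return Mark.SIGNAL
    # Escalation rule: mark noise only when you can explain WHY the
    # section exists and it still fails the deletion test.
    return Mark.NOISE if purpose_is_clear else Mark.ASK_USER

# A section restating general knowledge, no specifics, purpose clear:
print(classify(False, False, True).value)  # → noise
```

Note that "ask user" wins in two distinct cases: unexplained specifics and unexplained existence. Both are cheaper to ask about than to delete wrongly.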
Per skill:
**Skill:** `name`
**Verdict:** Keep as-is / Needs refinement / Needs rewrite / Consider retiring
**Section-by-section** (every section must appear):
- [signal / noise / ask user]: [section name]
Reason: [specific: duplicates AGENTS.md §X / general knowledge / encodes gotcha / ...]
**Strengths:**
- [What works well]
**Suggested changes** (only for sections marked noise; name a specific fix for each)
Rules for the section-by-section audit: