Evaluate and improve Claude skill quality through auditing. Use when reviewing skill quality, preparing skills for production, or auditing existing skills. Do not use when creating new skills (use modular-skills) or writing prose (use writing-clearly-and-concisely). Use this skill before shipping any skill to production.
Audits Claude skills for quality issues and generates prioritized improvement plans before production deployment.
To install:

```sh
/plugin marketplace add athola/claude-night-market
/plugin install abstract@claude-night-market
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
The skill bundles the following files:

- README.md
- modules/advanced-tool-use-analysis.md
- modules/authoring-checklist.md
- modules/evaluation-criteria.md
- modules/evaluation-framework.md
- modules/evaluation-workflows.md
- modules/integration-testing.md
- modules/integration.md
- modules/multi-metric-evaluation-methodology.md
- modules/performance-benchmarking.md
- modules/pressure-testing.md
- modules/quality-metrics.md
- modules/skill-authoring-best-practices.md
- modules/trigger-isolation-analysis.md
- modules/troubleshooting.md
- scripts/README.md
- scripts/automation/deploy.sh
- scripts/automation/setup.py
- scripts/automation/validate.py
- scripts/compliance_checker.py

This framework audits Claude skills against quality standards to improve performance and reduce token consumption. Automated tools analyze skill structure, measure context usage, and identify specific technical improvements. Run verification commands after each audit to confirm fixes work correctly.
The skills-auditor provides structural analysis, the improvement-suggester ranks fixes by impact, the compliance-checker verifies standards compliance, and the tool-performance-analyzer and token-usage-tracker monitor runtime efficiency.
Run a full audit of all skills or target a specific file to identify structural issues.
```sh
# Audit all skills
make audit-all

# Audit specific skill
make audit-skill TARGET=path/to/skill/SKILL.md
```
Use skill_analyzer.py for complexity checks and token_estimator.py to verify the context budget.
```sh
make analyze-skill TARGET=path/to/skill/SKILL.md
make estimate-tokens TARGET=path/to/skill/SKILL.md
```
Generate a prioritized plan and verify standards compliance using improvement_suggester.py and compliance_checker.py.
```sh
make improve-skill TARGET=path/to/skill/SKILL.md
make check-compliance TARGET=path/to/skill/SKILL.md
```
Start with make audit-all to inventory skills and identify high-priority targets. For each skill requiring attention, run make analyze-skill to map its complexity. Generate an improvement plan with make improve-skill, apply the fixes, and run make check-compliance to verify the skill meets project standards. Finish with make estimate-tokens to confirm the skill stays within its context budget.
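The per-skill steps above can be sketched as a dry-run shell function that prints each make invocation in order without executing it (the skill path is a placeholder, not a real file):

```sh
#!/bin/sh
# Print the full audit pipeline for one skill without running it.
audit_pipeline() {
    skill=$1
    # Order follows the recommended workflow: audit, analyze,
    # plan improvements, verify compliance, check the token budget.
    for target in audit-skill analyze-skill improve-skill \
                  check-compliance estimate-tokens; do
        printf 'make %s TARGET=%s\n' "$target" "$skill"
    done
}

audit_pipeline path/to/skill/SKILL.md
```

Piping the output to `sh` (or dropping the printf and calling make directly) turns the dry run into the real pipeline.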
Quality assessments use the skills-auditor and improvement-suggester to generate detailed reports. Performance analysis focuses on token efficiency through the token-usage-tracker and tool performance via tool-performance-analyzer. For standards compliance, the compliance-checker automates common fixes for structural issues.
We evaluate skills across five dimensions: structure compliance, content quality, token efficiency, activation reliability, and tool integration. Scores above 90 represent production-ready skills, while scores below 50 indicate critical issues requiring immediate attention.
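As a sketch, the score thresholds map to readiness labels like this; the two cut-offs (90 and 50) come from the text, while the name for the middle band is an assumption:

```sh
#!/bin/sh
# Map an overall quality score (0-100) to a readiness label.
# Cut-offs at 90 and 50 follow the documented thresholds;
# the middle-band label "needs-improvement" is a placeholder.
classify_score() {
    if [ "$1" -ge 90 ]; then
        echo production-ready
    elif [ "$1" -ge 50 ]; then
        echo needs-improvement
    else
        echo critical
    fi
}

classify_score 92   # production-ready
classify_score 40   # critical
```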
Improvements are prioritized by impact. Critical issues include security vulnerabilities or broken functionality. High-priority items cover structural flaws that hinder discoverability. Medium and low priorities focus on best practices and minor optimizations.
Deprecated: skills/shared/modules/ directories. Shared modules must be relocated into the consuming skill's own modules/ directory. The evaluator flags any remaining skills/shared/ as a structural warning.
Current: Each skill owns its modules at skills/<skill-name>/modules/. Cross-skill references use relative paths (e.g., ../skill-authoring/modules/anti-rationalization.md).
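A minimal check for the deprecated layout could be a recursive grep over the skills tree; the function name and exact flags here are illustrative, not the evaluator's actual implementation:

```sh
#!/bin/sh
# check_shared_refs DIR: warn and return 1 if any markdown file
# under DIR still references the deprecated skills/shared/ layout.
check_shared_refs() {
    if grep -rn --include="*.md" "skills/shared/" "$1"; then
        echo "WARNING: deprecated skills/shared/ references found" >&2
        return 1
    fi
    return 0
}
```

Running `check_shared_refs skills/` from the repository root lists every offending file with its line number, mirroring the structural warning the evaluator raises.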