Use when reviewing plugin quality, auditing plugins, analyzing the marketplace, checking plugins against Anthropic standards, or evaluating plugin architecture - provides systematic analysis methodology with validation framework
Systematically analyzes Claude Code plugins against Anthropic quality standards, checking for redundancy, broken references, and overengineering. Triggers when reviewing plugin quality, auditing marketplace plugins, or evaluating architecture against official guidelines.
`/plugin marketplace add pproenca/dot-claude-old`
`/plugin install meta@dot-claude`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled files:

- references/measuring-improvements.md
- references/output-patterns.md
- references/quality-standards.md
- references/skill-design-standards.md
- references/workflows.md
- scripts/analyze-metrics.sh

Analyze Claude Code plugins to achieve Anthropic-level quality standards.
Anthropic Quality Bar: Same or more functionality with leaner, more efficient implementation.
Principles:
For each skill, verify bundled references exist:
Extract paths from SKILL.md:
- `references/*.md` mentions
- `scripts/*.sh` or `scripts/*.py` mentions
- `[text](relative/path)` markdown links

Validate each path:
Report any missing or broken paths per skill.
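A minimal sketch of the whole check, assuming SKILL.md sits at the skill root; the extraction regexes below are illustrative, so adjust them to match how paths actually appear in your files:

```python
#!/usr/bin/env python3
"""Check that every reference/script path mentioned in SKILL.md exists."""
import re
import sys
from pathlib import Path

skill_dir = Path(sys.argv[1])  # e.g. skills/analyze-plugins
text = (skill_dir / "SKILL.md").read_text()

# Illustrative patterns: bare references/scripts mentions,
# plus relative markdown link targets (non-http).
patterns = [
    r"\b(references/[\w./-]+\.md)\b",
    r"\b(scripts/[\w./-]+\.(?:sh|py))\b",
    r"\]\(((?!https?://)[^)#\s]+)\)",
]

found = set()
for pat in patterns:
    found.update(re.findall(pat, text))

missing = sorted(p for p in found if not (skill_dir / p).exists())
for p in missing:
    print(f"MISSING: {p}")
print(f"{len(found) - len(missing)}/{len(found)} referenced paths exist")
sys.exit(1 if missing else 0)
```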
Before proposing ANY change:
Red flags:
## Priority 1: High Impact, Low Effort
- [ ] [Change] - [Why] - [Expected impact] - [How to validate]
## Priority 2: Medium Impact
...
## Priority 3: Consider Later
...
Each recommendation includes validation approach.
For detailed guidance:
- references/skill-design-standards.md - Official Anthropic skill-creator guide (authoritative source for skill structure, frontmatter, progressive disclosure)
- references/quality-standards.md - Quality criteria checklist and anti-patterns (includes a summary of the official standards)
- references/measuring-improvements.md - Metrics, user testing, validation templates
- references/output-patterns.md - Template and example patterns for consistent output
- references/workflows.md - Sequential and conditional workflow patterns

Use scripts/analyze-metrics.sh for consistent metric collection.
Verify best practices via the claude-code-guide subagent before claiming something is "wrong."
When implementing improvements:
Run the core:verification skill before claiming completion. Never claim "improved" or "fixed" without verification evidence.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.