<objective>
ALWAYS invoke this skill when auditing, reviewing, or evaluating slash command .md files. NEVER audit slash commands without this skill.
This ensures commands follow security, clarity, and effectiveness standards.
</objective>
<quick_start>
$ARGUMENTS
</quick_start>
<constraints>
- NEVER modify files during audit - ONLY analyze and report findings
- MUST read all reference documentation before evaluating
- ALWAYS provide file:line locations for every finding
- DO NOT generate fixes unless explicitly requested by the user
- NEVER make assumptions about command intent - flag ambiguities as findings
- MUST complete all evaluation areas (YAML, Arguments, Dynamic Context, Tool Restrictions, Content)
- ALWAYS apply contextual judgment based on command purpose and complexity
</constraints>

<focus_areas>
During audits, prioritize evaluation of:
</focus_areas>
<critical_workflow>
MANDATORY: Read best practices FIRST, before auditing:

1. Read `.claude/plugins/cache/**/creating-commands/SKILL.md`
2. Read `references/arguments.md`, `references/patterns.md`, and `references/tool-restrictions.md`
3. Read the command file to audit: $ARGUMENTS

Use ACTUAL patterns from the references, not memory.
</critical_workflow>
<evaluation_areas>
<area name="yaml_configuration">
Check for:
<contextual_judgment>
Apply judgment based on command purpose and complexity:
Simple commands (single action, no state):
State-dependent commands (git, environment-aware):
Security-sensitive commands (git push, deployment, file modification):
Delegation commands (invoke subagents):
- `allowed-tools: Task` is appropriate

Always explain WHY something matters for this specific command, not just that it violates a rule.
</contextual_judgment>
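As an illustration, a delegation command's frontmatter scoped this way might look like the following sketch (the command description and argument hint are hypothetical):

```yaml
---
description: Run a code review by delegating to reviewer subagents
argument-hint: [path-to-review]
# Delegation commands only need the Task tool; the subagents
# they spawn carry their own tool permissions.
allowed-tools: Task
---
```

An audit would flag this command only if it also needed direct file or shell access that `Task` alone cannot provide.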
<output_format>
Audit reports use severity-based findings, not scores. Generate output using this markdown template:
## Audit Results: [command-name]
### Assessment
[1-2 sentence overall assessment: Is this command fit for purpose? What's the main takeaway?]
### Critical Issues
Issues that hurt effectiveness or security:
1. **[Issue category]** (file:line)
- Current: [What exists now]
- Should be: [What it should be]
- Why it matters: [Specific impact on this command's effectiveness/security]
- Fix: [Specific action to take]
2. ...
(If none: "No critical issues found.")
### Recommendations
Improvements that would make this command better:
1. **[Issue category]** (file:line)
- Current: [What exists now]
- Recommendation: [What to change]
- Benefit: [How this improves the command]
2. ...
(If none: "No recommendations - command follows best practices well.")
### Strengths
What's working well (keep these):
- [Specific strength with location]
- ...
### Quick Fixes
Minor issues easily resolved:
1. [Issue] at file:line → [One-line fix]
2. ...
### Context
- Command type: [simple/state-dependent/security-sensitive/delegation]
- Line count: [number]
- Security profile: [none/low/medium/high - based on what the command does]
- Estimated effort to address issues: [low/medium/high]
</output_format>
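For calibration, a filled-in critical issue entry might read like the following (the file, line, and finding are hypothetical):

```markdown
1. **Overly broad tool access** (commands/deploy.md:3)
   - Current: no `allowed-tools` field, so the command inherits all workspace tools
   - Should be: `allowed-tools` restricted to the specific commands deployment requires
   - Why it matters: a deployment command with unrestricted shell access can push or delete beyond its stated scope
   - Fix: add `allowed-tools: Bash(git push:*)` (or the equivalent for this workflow) to the frontmatter
```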
<validation>
Before presenting audit findings, verify:

Completeness checks:
Accuracy checks:
Quality checks:
Only present findings after all checks pass. </validation>
<success_criteria> Task is complete when:
</success_criteria>
<final_step>
After presenting findings, offer:
</final_step>