Systematically analyze agent plugins and skills to extract design patterns, architectural decisions, and reusable techniques. Trigger with "analyze this plugin", "mine patterns from", "review plugin structure", "extract learnings from", "what patterns does this plugin use", "check if this plugin is well-structured", "validate plugin compliance", or when examining any plugin or skill collection to understand its design. Use this skill even when the user just says "look at this plugin" or "tell me how this is structured."
From agent-plugin-analyzer. Install with:

```bash
npx claudepluginhub richfrem/agent-plugins-skills --plugin agent-plugin-analyzer
```

This skill is limited to using the following tools:
- acceptance-criteria.md
- assets/resources/analyze-plugin-flow.mmd
- evals/evals.json
- evals/results.tsv
- fallback-tree.md
- references/acceptance-criteria.md
- references/analysis-framework.md
- references/analysis-questions-by-type.md
- references/diagrams/analyze-plugin-flow.mmd
- references/fallback-tree.md
- references/inventory_plugin.py
- references/maturity-model.md
- references/output-templates.md
- references/pattern-catalog.md
- references/security-checks.md
- requirements.txt
- scripts/inventory_plugin.py
This skill requires Python 3.8+ and the standard library only; no external packages are needed.
To install this skill's dependencies:
```bash
pip-compile ./requirements.in
pip install -r ./requirements.txt
```
See ../../requirements.txt for the dependency lockfile (currently empty — standard library only).
Perform deep structural and content analysis on agent plugins and skills. Extract reusable patterns that feed the virtuous cycle of continuous improvement.
Deep-dive into one plugin. Use when you want to fully understand a plugin's architecture.
Analyze multiple plugins side-by-side. Use when looking for common patterns across a collection.
Execute these phases sequentially. Do not skip phases.
Before deep analysis, run a rapid compliance scan to surface blockers:
Manifest check:
```bash
# plugin.json must be in .claude-plugin/ (not root)
ls .claude-plugin/plugin.json && jq . .claude-plugin/plugin.json
```
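Beyond the existence and parse check above, the manifest field rules can be made deterministic. A minimal sketch (the function name and exact error strings are illustrative, not part of any published schema):

```python
import json
import re
from pathlib import Path
from typing import List

KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")   # lowercase, hyphen-separated
SEMVER = re.compile(r"^\d+\.\d+\.\d+$")           # X.Y.Z

def check_manifest(plugin_dir: str) -> List[str]:
    """Return human-readable problems found in .claude-plugin/plugin.json."""
    manifest_path = Path(plugin_dir) / ".claude-plugin" / "plugin.json"
    if not manifest_path.is_file():
        return ["plugin.json missing from .claude-plugin/"]
    try:
        manifest = json.loads(manifest_path.read_text())
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    name = manifest.get("name")
    if not name:
        problems.append("name missing")
    elif not KEBAB.match(name):
        problems.append(f"name not kebab-case: {name!r}")
    version = manifest.get("version")
    if version is not None and not SEMVER.match(version):
        problems.append(f"version not semver: {version!r}")
    return problems
```

An empty return list means the manifest passes both checks below.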
- `name` present and kebab-case (no spaces, no uppercase)?
- `version` follows semver (X.Y.Z) if present?

Structure check:
- Component directories (`commands/`, `agents/`, `skills/`, `hooks/`) at plugin ROOT (not inside `.claude-plugin/`)?
- `SKILL.md` (not `README.md`) inside each skill directory?

Security scan:
```bash
# Hardcoded credentials
grep -rn "password\|api_key\|secret" --include="*.md" --include="*.json" --include="*.sh" .

# Hardcoded paths (should use ${CLAUDE_PLUGIN_ROOT})
grep -rn "/Users/\|/home/" --include="*.json" --include="*.sh" .
```
Report Phase 0 findings before proceeding. If CRITICAL issues found (invalid JSON, hardcoded credentials, missing required fields), flag them prominently in the final report.
Run the deterministic inventory script first:
```bash
python3 "scripts/inventory_plugin.py" --path <plugin-dir> --format json
```
If the script is unavailable, manually enumerate:
1. Walk the directory tree
2. Classify every file by type:
   - `SKILL.md` → Skill definition
   - `commands/*.md` → Command definition
   - `references/*.md` → Reference material (progressive disclosure)
   - `scripts/*.py` → Executable scripts
   - `README.md` → Plugin documentation
   - `plugin.json` → Plugin manifest
   - `*.json` → Configuration (MCP, hooks, etc.)
   - `*.yaml` / `*.yml` → Pipeline/config data
   - `*.html` → Artifact templates
   - `*.mmd` → Architecture diagrams
3. Record for each file: path, type, line count, byte size
4. Output a structured inventory as a markdown checklist with one checkbox per file
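The manual fallback above can be sketched as a short walk. The type mapping mirrors the list above; the real classifier in scripts/inventory_plugin.py may differ in detail:

```python
import os

def classify(relpath):
    """Map a plugin-relative path to a coarse file type (mirrors the list above)."""
    name = os.path.basename(relpath)
    if name == "SKILL.md":
        return "skill"
    if relpath.startswith("commands/") and name.endswith(".md"):
        return "command"
    if relpath.startswith("references/") and name.endswith(".md"):
        return "reference"
    if relpath.startswith("scripts/") and name.endswith(".py"):
        return "script"
    if name == "README.md":
        return "docs"
    if name == "plugin.json":
        return "manifest"
    if name.endswith(".json"):
        return "config"
    if name.endswith((".yaml", ".yml")):
        return "pipeline-config"
    if name.endswith(".html"):
        return "artifact-template"
    if name.endswith(".mmd"):
        return "diagram"
    return "other"

def inventory(root):
    """Yield (path, type, line_count, byte_size) for every file under root."""
    for dirpath, _dirs, files in os.walk(root):
        for fname in sorted(files):
            full = os.path.join(dirpath, fname)
            rel = os.path.relpath(full, root).replace(os.sep, "/")
            data = open(full, "rb").read()
            yield rel, classify(rel), data.count(b"\n"), len(data)
```

Formatting each yielded row as `- [ ] path (type, N lines, M bytes)` produces the required checklist.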
Evaluate the plugin's architectural decisions:
| Dimension | What to Look For |
|---|---|
| Layout | How are skills/commands/references organized? Flat vs nested? |
| Progressive Disclosure | Is SKILL.md lean (<500 lines) with depth in references/? |
| Component Ratios | Skills vs commands vs scripts — what's the balance? |
| Naming Patterns | Are names descriptive? Follow kebab-case? Use gerund form? |
| README Quality | Does it have a file tree? Usage examples? Architecture diagram? |
| Standalone vs Supercharged | Can it work without MCP tools? What's enhanced with them? |
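The progressive-disclosure row in particular is easy to check deterministically. A small sketch, using the under-500-lines threshold from the table above:

```python
from pathlib import Path

def lean_skill_files(plugin_dir, limit=500):
    """Return (path, line_count, within_limit) for every SKILL.md under plugin_dir."""
    results = []
    for skill in sorted(Path(plugin_dir).rglob("SKILL.md")):
        text = skill.read_text(encoding="utf-8", errors="replace")
        lines = text.count("\n") + 1
        results.append((str(skill), lines, lines < limit))
    return results
```

Any `within_limit=False` entry is a candidate for pushing depth down into references/.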
For each file, load the appropriate question set from references/analysis-questions-by-type.md and work through every checkbox. See the process diagram in analyze-plugin-flow.mmd for the full pipeline visualization.
For each SKILL.md, evaluate:
Frontmatter Quality:
- `description` written in third person?

Body Structure:
- Uses `references/` for deep content?

Interaction Design:
For Commands, evaluate:
For Reference Files, evaluate:
For Scripts, evaluate:
- `--help` documentation?

Identify instances of known patterns from references/pattern-catalog.md. Also watch for novel patterns not yet cataloged.
For each pattern found, document:
```
Pattern: [name]
Plugin: [where found]
File: [specific file]
Description: [how it's used here]
Quality: [exemplary / good / basic]
Reusability: [high / medium / low]
Confidence: [high (≥3 plugins) / medium (2) / low (1)]
Lifecycle: [proposed / validated / canonical / deprecated]
```
Before adding a new pattern, check the catalog's deduplication rules. If an existing pattern covers ≥80% of the behavior, update its frequency instead.
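One crude way to mechanize the ≥80% overlap rule is token overlap on pattern descriptions. This is only a proxy sketch; the authoritative deduplication rules live in references/pattern-catalog.md:

```python
def overlap(a, b):
    """Jaccard overlap of lowercase word sets - a rough proxy for behavioral overlap."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def add_or_bump(catalog, candidate, threshold=0.8):
    """Bump the frequency of a matching existing pattern, else append the candidate."""
    for entry in catalog:
        if overlap(entry["description"], candidate["description"]) >= threshold:
            entry["frequency"] = entry.get("frequency", 1) + 1
            return entry
    candidate.setdefault("frequency", 1)
    catalog.append(candidate)
    return candidate
```

In practice a human (or model) judgment call should confirm any near-threshold match before merging.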
Key pattern categories to search for:
Load the full check tables from references/security-checks.md.
Execution order:
If inventory_plugin.py was run with --security, use its deterministic findings as ground truth.
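Assuming the script emits JSON with a top-level `findings` list when run with `--security` (the exact output shape is an assumption here), the deterministic pass can be captured like this, falling back to the manual grep checks when the script is unavailable:

```python
import json
import subprocess

def deterministic_findings(plugin_dir):
    """Run the inventory script's security pass; return its findings,
    or None if the script cannot be run (caller falls back to manual checks)."""
    try:
        out = subprocess.run(
            ["python3", "scripts/inventory_plugin.py",
             "--path", plugin_dir, "--format", "json", "--security"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    return json.loads(out).get("findings", [])
```

Treat a non-None result as ground truth and use model judgment only to interpret and prioritize it.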
Load the maturity model and scoring rubric from references/maturity-model.md.
Steps:
Generate a structured markdown report. For single plugins, output inline. For collections, create an artifact file with the full analysis.
Iteration Directory Isolation: save every analysis report into an explicitly versioned, isolated output directory (e.g. analysis-reports/target-run-1/) so that re-runs never destructively overwrite earlier results.
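The run-directory convention can be sketched as follows (the `analysis-reports/<target>-run-N` naming follows the example above; the helper name is illustrative):

```python
from pathlib import Path

def next_run_dir(target, base="analysis-reports"):
    """Allocate analysis-reports/<target>-run-N, picking the first unused N."""
    n = 1
    while True:
        candidate = Path(base) / f"{target}-run-{n}"
        if not candidate.exists():
            candidate.mkdir(parents=True)
            return candidate
        n += 1
```

Because existing directories are never reused, a re-run gets a fresh `-run-N` suffix instead of clobbering prior reports.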
Benchmark Metric Capture: as soon as the audit run completes, log the resulting total_tokens and duration_ms to a timing.json file so the cost of the deep-dive analysis can be calculated.
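A minimal sketch of that capture step (the field names total_tokens and duration_ms come from the text above; where the token count itself comes from depends on the runtime and is an assumption):

```python
import json
import time
from pathlib import Path

def log_run_metrics(run_dir, total_tokens, started_at):
    """Write total_tokens and duration_ms for a completed audit run to timing.json."""
    metrics = {
        "total_tokens": total_tokens,
        "duration_ms": int((time.monotonic() - started_at) * 1000),
        "logged_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    Path(run_dir, "timing.json").write_text(json.dumps(metrics, indent=2))
    return metrics
```

Typical use: record `started = time.monotonic()` before the audit, then call `log_run_metrics(run_dir, tokens, started)` once the report is written.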
Always end with Virtuous Cycle Recommendations: specific, actionable improvements for agent-plugin-analyzer (this plugin), agent-scaffolders, and agent-skill-open-specifications based on patterns discovered.