Audits Claude Code configurations for best practices in skills, instructions, MCP servers, hooks, plugins, security, over-engineering, and context efficiency via file scans and focused checks. Invoke with /claudit [focus-area].
npx claudepluginhub acostanzo/quickstop --plugin claudit
You are the Claudit orchestrator. When the user runs `/claudit` or `/claudit <focus-area>`, execute this 5-phase audit workflow. Follow each phase in order. Do not skip phases.
Run these setup steps:
1. Run `git rev-parse --show-toplevel 2>/dev/null` via Bash and store the result as PROJECT_ROOT. If this fails (not in a git repo), set PROJECT_ROOT to empty.
2. Run `echo $HOME` via Bash.
3. Extract the focus area from $ARGUMENTS:
   - $ARGUMENTS is empty or missing → set FOCUS_MODE = false. Proceed with a full audit (default behavior).
   - $ARGUMENTS is provided → set FOCUS_MODE = true. Set FOCUS_AREA to the user's input.

Match $ARGUMENTS as a whole string against this mapping (fuzzy — use judgment for synonyms and variations):
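A minimal shell sketch of these setup steps (the SCOPE variable name is illustrative, not part of the workflow):

```shell
# Detect the git project root; an empty result means "global-only audit".
PROJECT_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || true)

# Home directory, used for locating global config under ~/.claude/.
HOME_DIR=$(echo "$HOME")

if [ -z "$PROJECT_ROOT" ]; then
  SCOPE="global"
else
  SCOPE="comprehensive"
fi
echo "Scope: $SCOPE"
```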
| User Input (examples) | Focus Area | Primary Scoring Categories |
|---|---|---|
| skills, agents, skill quality | Skills & Agents | CLAUDE.md Quality, Over-Engineering |
| CLAUDE.md, instructions, rules, instruction files | Instruction Files | CLAUDE.md Quality, Over-Engineering, Context Efficiency |
| MCP, servers, mcp servers, mcp config | MCP Configuration | MCP Configuration, Context Efficiency |
| hooks, hook config, hook sprawl | Hooks | Over-Engineering, Security Posture |
| plugins, plugin health | Plugins | Plugin Health |
| security, permissions, secrets | Security | Security Posture |
| over-engineering, verbosity, redundancy | Over-Engineering | Over-Engineering |
| context, tokens, context efficiency | Context Efficiency | Context Efficiency |
| <text matching an installed plugin name> | Specific Plugin | Plugin Health |
| <any other text> | Free-form (use as-is) | all categories (best effort) |
Plugin name matching is deferred to Step 3.5 (after the config map is built), since it requires reading installed_plugins.json. At this step, only apply keyword matching from the table above. If no keyword matches, tentatively mark as free-form — Step 3.5 may reclassify it as a specific plugin.
Store FOCUS_AREA (the interpreted label) and FOCUS_CATEGORIES (the relevant scoring categories) for use in later phases.
Determine scope:
- PROJECT_ROOT is found → comprehensive (global + project)
- PROJECT_ROOT is empty → global only

Run parallel Glob calls to discover every Claude-related file. Cap at 50 total files — if a project has more, report the cap and proceed with the 50 most recently modified.
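If the 50-file cap is hit, one way to keep only the most recently modified files is a sketch like this (assumes GNU find's `-printf`):

```shell
# Keep only the 50 most recently modified files under a directory.
# %T@ = mtime as epoch seconds; sort newest-first, then drop that column.
recent_files() {
  find "$1" -type f -printf '%T@ %p\n' | sort -rn | head -50 | cut -d' ' -f2-
}
```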
Project-level (if comprehensive):
| Category | Glob Pattern | Notes |
|---|---|---|
| Instructions | {PROJECT_ROOT}/**/CLAUDE.md | Exclude node_modules, .git, vendor, dist, build via pattern |
| Local instructions | {PROJECT_ROOT}/CLAUDE.local.md | Personal/gitignored |
| Rules | {PROJECT_ROOT}/.claude/rules/**/*.md | Modular rules with optional path frontmatter |
| Settings (shared) | {PROJECT_ROOT}/.claude/settings.json | Team settings |
| Settings (local) | {PROJECT_ROOT}/.claude/settings.local.json | Personal project settings |
| Skills | {PROJECT_ROOT}/.claude/skills/*/SKILL.md | Project skills |
| Agents | {PROJECT_ROOT}/.claude/agents/*.md | Project subagents |
| Memory | {PROJECT_ROOT}/.claude/MEMORY.md | Project memory |
| MCP | {PROJECT_ROOT}/.mcp.json | Project MCP servers |
| Plugin hooks | {PROJECT_ROOT}/.claude/plugins/*/hooks/hooks.json | Plugin-level hooks |
For the Instructions glob, exclude common vendor directories. Use Glob with pattern **/CLAUDE.md rooted at PROJECT_ROOT, then filter out paths containing node_modules, .git, vendor, dist, or build.
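The glob-then-filter step can be approximated with `find` (a sketch; the actual workflow uses the Glob tool, and the exclusion list mirrors the one above):

```shell
# All CLAUDE.md files under the project root, with vendored trees excluded.
claude_md_files() {
  find "$1" -name CLAUDE.md \
    -not -path '*/node_modules/*' \
    -not -path '*/.git/*' \
    -not -path '*/vendor/*' \
    -not -path '*/dist/*' \
    -not -path '*/build/*'
}
```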
Global-level (always):
| Category | Path | Notes |
|---|---|---|
| Settings | ~/.claude/settings.json | Global settings |
| Instructions | ~/.claude/CLAUDE.md | Global instructions (check ~/CLAUDE.md too as legacy) |
| Rules | ~/.claude/rules/**/*.md | Personal modular rules |
| Memory | ~/.claude/MEMORY.md | Global memory |
| MCP | ~/.claude/.mcp.json | Global MCP servers |
| Plugins | ~/.claude/plugins/installed_plugins.json | Installed plugins |
| Marketplaces | ~/.claude/plugins/known_marketplaces.json | Registered marketplaces |
| Managed policy (macOS) | /Library/Application Support/ClaudeCode/CLAUDE.md | macOS managed policy |
| Managed policy (Linux) | /etc/claude-code/CLAUDE.md | Linux/WSL managed policy |
For each file found, get its line count via wc -l (batch multiple files in a single Bash call for efficiency). Quote paths containing spaces (e.g., /Library/Application Support/...) in any Bash commands.
If FOCUS_MODE is true and FOCUS_AREA is still "Free-form": check whether $ARGUMENTS matches an installed plugin name. Read installed_plugins.json (if it was found in the config scan) and check whether $ARGUMENTS is an exact match or substring of any plugin name. If so, reclassify: set FOCUS_AREA to "Specific Plugin: {name}" and FOCUS_CATEGORIES to Plugin Health.
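Substring matching against plugin names can be sketched as below; the plugin names passed in are hypothetical, since the schema of installed_plugins.json is not specified here:

```shell
# Return the first plugin name that contains the user's argument as a
# substring (an exact match is a special case of this).
matches_plugin() {
  arg="$1"; shift
  for name in "$@"; do
    case "$name" in *"$arg"*) echo "$name"; return 0;; esac
  done
  return 1
}
matches_plugin hone skillet hone-plugin claudit   # -> hone-plugin
```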
Build a structured manifest grouping files by category with line counts. Present it to the user:
=== CONFIGURATION MAP ===
Scope: Comprehensive (project + global)
PROJECT: {PROJECT_ROOT}
Instructions (N files, ~N tokens):
CLAUDE.md 45 lines
src/api/CLAUDE.md 30 lines
CLAUDE.local.md 10 lines
.claude/rules/testing.md 15 lines
Settings (N files):
.claude/settings.json exists
.claude/settings.local.json exists
Skills (N): [list]
Agents (N): [list]
Memory: .claude/MEMORY.md 30 lines
MCP: .mcp.json N servers configured
GLOBAL: ~/.claude/
Instructions: ~/.claude/CLAUDE.md 20 lines
Rules: [list or "none"]
Settings: ~/.claude/settings.json exists
Memory: ~/.claude/MEMORY.md 15 lines
MCP: ~/.claude/.mcp.json N servers configured
Plugins: N installed
MANAGED POLICY: [found (N lines) / not found]
=== END MAP ===
Estimate tokens for instruction files as (total_lines * 40) / 4 (rough estimate: ~10 words per line, ~4 chars per word, divided by 4 chars per token). This line-based estimate is for the config map display only. Audit agents use chars/4 for more precise per-file token counts after reading file contents. Show the aggregate token estimate for instruction files.
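The estimate works out to roughly 10 tokens per line; as a sketch:

```shell
# ~10 words/line * ~4 chars/word = ~40 chars/line, at ~4 chars/token.
estimate_tokens() {
  echo $(( $1 * 40 / 4 ))
}
estimate_tokens 45   # a 45-line CLAUDE.md -> 450
```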
After presenting the map, if FOCUS_MODE is true, display:
Focus: {FOCUS_AREA}
Primary categories: {FOCUS_CATEGORIES}
Auditing all categories; {FOCUS_AREA}-related findings will include deeper analysis.
Load past audit decisions so they can be injected into audit agents and annotated in the report. Follow the read procedure in ${CLAUDE_PLUGIN_ROOT}/references/decision-memory-protocol.md.
- Check {PROJECT_ROOT}/.claude/claudit-decisions.json, then ~/.cache/claudit/decisions.json.
- Read the file with `cat {path} 2>/dev/null`.
- If schema_version is 1 → store the decisions array as DECISION_HISTORY.
- Report `Decision Memory: N past decisions` (or `Decision Memory: none (first run)`).
- Also run `git config user.name 2>/dev/null` and store the result as GIT_USER for use in Phase 4.
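A sketch of the read, assuming the project-level file takes precedence over the user cache (the authoritative procedure is in decision-memory-protocol.md):

```shell
# Print the decisions file contents; a missing file yields empty output
# rather than an error, so a first run degrades gracefully.
read_decisions() {
  cat "$PROJECT_ROOT/.claude/claudit-decisions.json" 2>/dev/null \
    || cat "$HOME/.cache/claudit/decisions.json" 2>/dev/null \
    || true
}
```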
Then tell the user:
Phase 1: Building expert context from official Anthropic documentation...
Invoke /claudit:knowledge all to retrieve all three knowledge domains (core-config, ecosystem, optimization). The knowledge skill checks cache freshness and auto-refreshes stale domains.
Use the output as Expert Context for Phase 2.
Tell the user:
Phase 2: Analyzing your configuration against expert knowledge...
Dispatch audit subagents using the Task tool. Each agent receives the Expert Context from Phase 1 plus only its relevant slice of the configuration map.
For audit-global, include:
For audit-project (comprehensive only), include:
For audit-ecosystem, include:
If DECISION_HISTORY is non-empty, append a === DECISION HISTORY === block to each audit agent's dispatch prompt (after the Expert Context, before any Focus Directive). Route decisions to agents by category slug (not file_stem, which is ambiguous across scopes):
- audit-global: category_slug is security or context-efficiency
- audit-project: category_slug is over-engineering, claudemd-quality, security, or context-efficiency
- audit-ecosystem: category_slug is mcp-config, plugin-health, or over-engineering (for hook/MCP sprawl)

Note: security and context-efficiency route to multiple agents because both global and project agents contribute to those categories. over-engineering routes to both project (CLAUDE.md analysis) and ecosystem (hook/MCP sprawl). This is intentional — agents simply note matching decisions without changing behavior.
Format each decision concisely:
=== DECISION HISTORY ===
- [rejected] over-engineering:restated-builtin:CLAUDE.md — "Team onboarding — keeping for junior devs" (acostanzo, 2026-02-15)
- [accepted] security:broad-bash-allow:settings.json — applied fix (acostanzo, 2026-02-15)
=== END DECISION HISTORY ===
Agents are instructed to note matching decisions in their findings but never suppress issues.
If FOCUS_MODE is true, prepend the following block to each audit agent's dispatch prompt (before the Expert Context):
=== FOCUS DIRECTIVE ===
The user has requested a focused audit on: {FOCUS_AREA}
Instructions:
1. Still perform your full audit scope (all checks in your playbook)
2. For findings RELATED to {FOCUS_AREA}: go deeper — read more files, check more edge cases, provide more detailed analysis with specific line numbers and concrete fix suggestions
3. For findings UNRELATED to {FOCUS_AREA}: perform standard-depth checks but keep findings concise
4. In your output, clearly separate focus-area findings into a "FOCUS FINDINGS" section at the top of your report, followed by "OTHER FINDINGS" for everything else
5. For focus findings, include specific file paths, line numbers, quoted content, and actionable recommendations
=== END FOCUS DIRECTIVE ===
The dispatch logic (which agents to send based on scope) does NOT change — always dispatch the same agents as the non-focused path. The Focus Directive is additive context, not a routing change.
Global only → dispatch audit-global + audit-ecosystem in parallel (2 agents)
Comprehensive → dispatch all three in parallel (3 agents)
Use these agent types:
- subagent_type: "claudit:audit-global"
- subagent_type: "claudit:audit-project"
- subagent_type: "claudit:audit-ecosystem"

Once all audit agents return, read the scoring rubric:
${CLAUDE_PLUGIN_ROOT}/skills/claudit/references/scoring-rubric.md

Apply the rubric to the audit findings. For each of the 6 categories:
Categories and their weights:
| Category | Weight | Primary Audit Source |
|---|---|---|
| Over-Engineering Detection | 20% | audit-project (CLAUDE.md analysis) + audit-ecosystem (hook/MCP sprawl) |
| CLAUDE.md Quality | 20% | audit-project (structure, sections, references, multi-file) |
| Security Posture | 15% | audit-project (permissions) + audit-global (settings) |
| MCP Configuration | 15% | audit-ecosystem (server health, sprawl) |
| Plugin Health | 15% | audit-ecosystem (plugin structure) + audit-global (installed plugins) |
| Context Efficiency | 15% | All audits (token cost estimates, aggregate instruction size) |
Scope-aware scoring:
overall = sum(category_score * category_weight for all categories)
Look up the letter grade from the rubric's grade threshold table.
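The weighted sum can be sketched with awk (the six scores here are illustrative; the weights are from the table above):

```shell
# overall = sum(category_score * category_weight); awk handles the
# floating-point weights, and printf "%d" truncates to an integer.
overall=$(awk 'BEGIN {
  printf "%d", 65*0.20 + 70*0.20 + 80*0.15 + 75*0.15 + 90*0.15 + 85*0.15
}')
echo "$overall"   # -> 76
```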
Compile a ranked list of recommendations from all audit findings:
Include both:
If DECISION_HISTORY is non-empty, annotate each recommendation with past decision context. Follow the fingerprinting and matching algorithm in ${CLAUDE_PLUGIN_ROOT}/references/decision-memory-protocol.md.
For each recommendation:
- Compute the fingerprint {category_slug}:{issue_type}:{file_stem}:{content_hash_8} using the Issue Type Slugs table in the scoring rubric.

Ordering adjustment: Present recommendations in this order:
Never suppress recommendations based on past decisions. All recommendations appear in the report regardless of decision history.
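A sketch of fingerprint construction; the 8-character content hash is assumed here to be a truncated SHA-256 of the finding's content (the authoritative scheme is defined in the decision-memory protocol):

```shell
# Build {category_slug}:{issue_type}:{file_stem}:{content_hash_8}.
fingerprint() {
  hash8=$(printf '%s' "$4" | sha256sum | cut -c1-8)
  printf '%s:%s:%s:%s\n' "$1" "$2" "$3" "$hash8"
}
fingerprint over-engineering restated-builtin CLAUDE.md "Always write tests"
```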
Display the report header showing detected scope and file count:
╔══════════════════════════════════════════════════════════╗
║ CLAUDIT HEALTH REPORT ║
╠══════════════════════════════════════════════════════════╣
║ Scope: Comprehensive | Files: N project + N global ║
║ Decision Memory: N past decisions (M stale, K new) ║
║ Overall Score: XX/100 Grade: X (Label) ║
╚══════════════════════════════════════════════════════════╝
Over-Engineering ████████████████████░░░░░ XX/100 X
CLAUDE.md Quality ████████████████████░░░░░ XX/100 X
Security Posture ████████████████████░░░░░ XX/100 X
MCP Configuration ████████████████████░░░░░ XX/100 X
Plugin Health ████████████████████░░░░░ XX/100 X
Context Efficiency ████████████████████░░░░░ XX/100 X
For the visual bars, use █ for filled and ░ for empty. Scale to 25 characters total. Append the numeric score and letter grade.
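The bar math scales a 0-100 score onto 25 characters; a sketch:

```shell
# Render a 25-character bar: filled blocks proportional to score/100.
render_bar() {
  filled=$(( $1 * 25 / 100 ))
  bar=""
  for i in $(seq 1 25); do
    if [ "$i" -le "$filled" ]; then bar="$bar█"; else bar="$bar░"; fi
  done
  printf '%s %d/100\n' "$bar" "$1"
}
render_bar 80
```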
If FOCUS_MODE is true, apply these adjustments to the report:
Report header: Add a Focus: line inside the header box:
║ Focus: {FOCUS_AREA} ║
Score bars: Mark focus-relevant categories (from FOCUS_CATEGORIES) with a ◆ indicator:
Over-Engineering ◆ ████████████████████░░░░░ XX/100 X
CLAUDE.md Quality ◆ ████████████████████░░░░░ XX/100 X
Security Posture ████████████████████░░░░░ XX/100 X
Focus Deep Dive: After the score card and before recommendations, add a Focus Deep Dive section that consolidates all focus-related findings from all audit agents into a single narrative with specific file references, line numbers, and actionable detail.
Findings order: Present focus-area findings and recommendations first, then other findings.
After the score card, present:
- For detailed skill-level analysis, try /skillet:audit <skill-path>
- For plugin authoring quality audits, try /hone <plugin-name>
- Run /claudit:refresh periodically to speed up future audits

After presenting the report, offer to implement improvements.
Use AskUserQuestion with multiSelect: true to let the user choose which recommendations to apply. Group by priority (Critical, High, Medium, Low). Include the estimated score impact for each.
Format each option as:
Include a "Skip — no changes" option.
For each selected recommendation:
Common fix types:
- .claude/rules/ or subdirectory files

Scope safety for fixes:
- CLAUDE.local.md: edit directly, never include in PR (it's gitignored/personal)
- .claude/settings.local.json: edit directly, never include in PR (it's personal/local)
- ~/.claude/ files: edit directly, never include in PR (they're personal)

After implementing selected fixes (or if user selected "Skip — no changes"), record decisions to memory. Follow the write procedure in ${CLAUDE_PLUGIN_ROOT}/references/decision-memory-protocol.md.
If the user selected specific recommendations (not "Skip"):
Record each selected recommendation as action: "accepted" with its computed fingerprint
For unselected recommendations (items the user did not select), present a single follow-up AskUserQuestion:
Some recommendations were not selected. How should claudit treat them in future audits?
Use multiSelect: true with options for each unselected recommendation:
- "<recommendation>" — Rejected (intentional / false positive)
- "<recommendation>" — Deferred (will address later)

Include a "Don't record — treat as new next time" option.
For each decision, populate:
- fingerprint: computed in the annotation step
- decided_by: GIT_USER from Phase 0
- timestamp: current ISO 8601 timestamp
- context: current claudit version, Claude Code version, score impact

Merge new decisions with existing DECISION_HISTORY (upsert by fingerprint) and write to the decisions file path determined in Phase 0.
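The decided_by and timestamp fields can be populated like this (a sketch; `date -u` emits an ISO 8601 UTC timestamp):

```shell
# Metadata for each recorded decision.
decided_by=$(git config user.name 2>/dev/null || echo "unknown")
timestamp=$(date -u +%Y-%m-%dT%H:%M:%SZ)
echo "$decided_by @ $timestamp"
```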
If the user selected "Skip — no changes": Still offer the follow-up AskUserQuestion for categorizing recommendations — the user may want to record rejections ("I don't want to see these again") even without applying any fixes. If the user selects "Don't record" or dismisses the prompt, no decisions are recorded.
After implementing fixes:
Score Delta:
Over-Engineering 65 → 85 (+20)
CLAUDE.md Quality 70 → 88 (+18)
Overall 72 → 84 (+12) Grade: C → B
Read ${CLAUDE_PLUGIN_ROOT}/skills/claudit/references/pr-delivery.md and follow the PR delivery instructions.
If the project has no .claude/ directory at all, focus the audit on global config and recommend project-level setup.