From code-quality
Deep code quality audit system. Use when asked to:

- "Audit the codebase"
- "Find unused code"
- "Check for duplicates"
- "Validate library usage"
- "Review the entire project"

Analyzes files in parallel with LSP and Context7, detecting issues, duplicates, and documentation drift.
```
npx claudepluginhub wgordon17/personal-claude-marketplace --plugin code-quality
```
Comprehensive, resumable code quality audit system that analyzes every non-gitignored file in a project to build a complete inventory of symbols, dependencies, issues, and duplicates.
```
/file-audit                    # Analyze entire project
/file-audit src/               # Analyze specific directory
/file-audit --resume           # Resume interrupted audit
/file-audit --status           # Show progress
/file-audit path/to/file.py    # Analyze single file
```
```
ORCHESTRATOR (you)
├── Phase 1: Discovery
│   ├── git ls-files (find non-gitignored files)
│   ├── Read project memory ({memory_dir}/PROJECT.md, {memory_dir}/TODO.md, {memory_dir}/LESSONS.md)
│   └── Initialize queue.json
│
├── Phase 2: Parallel Analysis
│   ├── Spawn 3-5 file analyzer agents IN PARALLEL
│   ├── Each analyzes one file independently
│   └── Collect results + extracted patterns
│
├── Phase 3: Post-Analysis
│   ├── Run duplicate detection across ALL patterns
│   ├── Cross-reference duplicates across files
│   └── Inject duplicate issues into file entries
│
└── Phase 4: Finalization
    ├── Assemble inventory.json
    ├── Generate TODOs
    └── Write summary
```
### Phase 1: Discovery

Detect the memory directory using the convention in code-quality/references/project-memory-reference.md (Directory Detection section).
```
# Check for existing queue
if {memory_dir}/file-audit/queue.json exists:
    Resume from last batch position
else:
    # Discover files
    git ls-files --cached --others --exclude-standard

    # Read project memory (files per project-memory-reference.md Memory Files section)
    Read {memory_dir}/PROJECT.md, {memory_dir}/TODO.md, {memory_dir}/LESSONS.md (if exist)

    # Create queue
    Initialize {memory_dir}/file-audit/queue.json with all files as "pending"
```
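The queue-initialization step above can be sketched in Python. This is a minimal illustration, not the skill's implementation: `init_queue` is a hypothetical helper, and the fields mirror the queue.json schema shown under Output Files.

```python
import json
from datetime import datetime, timezone

def init_queue(files):
    """Build the initial queue.json structure: every discovered file starts as pending."""
    return {
        "status": "in_progress",
        "total_files": len(files),
        "completed": 0,
        "started_at": datetime.now(timezone.utc).isoformat(),
        "files": [{"path": f, "status": "pending"} for f in files],
    }

# In the real flow, `files` comes from:
#   git ls-files --cached --others --exclude-standard
files = ["src/auth/login.py", "src/api/routes.py"]
queue = init_queue(files)
print(json.dumps(queue, indent=2))
```

Writing this structure to disk before spawning any analyzers is what makes the audit resumable.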
### Phase 2: Parallel Analysis

For large projects (20+ files), use /map-reduce for structured parallel analysis:
IF total_files > 20:
Invoke /map-reduce with by-directory or by-batch split:
- Mapper prompt = single-file analyzer prompt (adapted for multi-file chunks)
- Reducer = post-analysis deduplication + cross-referencing (see Phase 3)
- Each ChunkAssignment contains: list of files for that chunk, analyzer instructions,
cross-reference manifest of exported symbols from other chunks
- ChunkResults contain findings with confidence: "verified" or "chunk-local"
- Reducer cross-validates chunk-local findings before producing final output
- Fallback: if /map-reduce skill is unavailable, proceed with direct approach below
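The by-directory split used to build ChunkAssignments can be sketched as follows. This is an illustrative sketch only: `split_by_directory` and its `max_chunk` parameter are hypothetical names, not part of the /map-reduce skill.

```python
from collections import defaultdict

def split_by_directory(files, max_chunk=10):
    """Group files by top-level directory, then split oversized groups into batches."""
    groups = defaultdict(list)
    for path in files:
        top = path.split("/")[0] if "/" in path else "."
        groups[top].append(path)
    chunks = []
    for _, members in sorted(groups.items()):
        for i in range(0, len(members), max_chunk):
            chunks.append(members[i:i + max_chunk])
    return chunks

files = ["src/a.py", "src/b.py", "tests/t.py", "README.md"]
print(split_by_directory(files))
```

Each resulting chunk would then be paired with analyzer instructions and a cross-reference manifest to form a ChunkAssignment.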
IF total_files <= 20 (or /map-reduce unavailable):
```
REPEAT until queue empty:
  1. Get next batch of 3-5 "pending" files
  2. Mark batch as "in_progress", save queue
  3. Spawn 3-5 agents IN PARALLEL:
       Agent(
         subagent_type="general-purpose",
         prompt=analyzer_prompt(file_path, project_memory)
       )
  4. Wait for all agents to complete
  5. Collect result JSONs + extracted patterns
  6. Mark batch as "completed", save queue
```
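The batch loop above can be sketched as plain Python. This is a sequential simulation for clarity: `run_batches` and `analyze` are hypothetical stand-ins, and in the real flow step 3 spawns 3-5 agents in parallel rather than calling a function per file.

```python
def run_batches(queue, analyze, batch_size=4):
    """Drain the queue in batches: mark in_progress, run analyzers, mark completed."""
    results = []
    pending = [f for f in queue["files"] if f["status"] == "pending"]
    while pending:
        batch = pending[:batch_size]
        for entry in batch:
            entry["status"] = "in_progress"      # save queue here for resumability
        for entry in batch:
            results.append(analyze(entry["path"]))  # real flow: parallel agent spawn
            entry["status"] = "completed"
            queue["completed"] += 1
        pending = [f for f in queue["files"] if f["status"] == "pending"]
    queue["status"] = "completed"
    return results
```

Persisting the queue after each status transition is what lets /file-audit --resume pick up mid-audit.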
For deprecated_api findings, consider a ./deep-research invocation (via the Skill tool) in External mode targeting: "Evaluate migration paths from [deprecated library/API] to current alternatives."

### Phase 3: Post-Analysis

1. Collect ALL patterns from ALL file results
2. Build hash → pattern index
3. For each hash with multiple occurrences:
- Create duplicate issue
- Reference all locations
4. Inject duplicate issues into relevant file entries
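The duplicate-detection steps above can be sketched as follows. This is an illustrative sketch, not the skill's implementation: `find_duplicates` is a hypothetical helper operating on the per-file result JSON described later.

```python
from collections import defaultdict

def find_duplicates(file_results):
    """Build a hash -> occurrences index over all extracted patterns and emit a
    duplication issue for every hash seen in more than one location."""
    index = defaultdict(list)
    for result in file_results:
        for pat in result.get("patterns", []):
            index[pat["hash"]].append((result["path"], pat))
    issues = []
    for h, occurrences in index.items():
        if len(occurrences) < 2:
            continue  # pattern appears only once, not a duplicate
        locations = [f"{path}:{pat['location']['line']}" for path, pat in occurrences]
        issues.append({
            "type": "duplication",
            "subtype": "exact_duplicate",
            "diagnostic_level": "warning",
            "description": f"Pattern '{occurrences[0][1]['name']}' appears {len(occurrences)} times",
            "locations": locations,
        })
    return issues
```

Each emitted issue would then be injected back into every file entry it references.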
### Phase 4: Finalization

1. Assemble master inventory.json
2. Generate summary statistics:
- Total files analyzed
- Issues by category (unused_code, incorrect_usage, duplication, documentation_drift)
- Issues by diagnostic level (error, warning, info)
3. Generate consolidated TODO list
4. Write {memory_dir}/file-audit/inventory.json
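The summary-statistics step can be sketched as a simple roll-up over the per-file results. This is a minimal illustration; `summarize` is a hypothetical helper, and the field names mirror the inventory.json summary block.

```python
from collections import Counter

def summarize(file_results):
    """Roll per-file issues up into the summary block of inventory.json."""
    by_level, by_type = Counter(), Counter()
    for result in file_results:
        for issue in result.get("issues", []):
            by_level[issue["diagnostic_level"]] += 1
            by_type[issue["type"]] += 1
    return {
        "total_files": len(file_results),
        "issues_by_diagnostic_level": dict(by_level),
        "issues_by_type": dict(by_type),
    }
```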
When spawning a file analyzer agent, use this prompt structure:
You are a single-file code analyzer. Analyze the following file deeply and return structured JSON.
## File to Analyze
Path: {file_path}
Language: {language}
## Project Context
Dependencies: {project_dependencies}
Project Memory:
- PROJECT.md: {project_md_summary}
- TODO.md: {todo_md_summary}
- LESSONS.md: {lessons_md_summary}
## Your Task
1. **Read the file** completely
2. **LSP Symbol Analysis**
- Use `LSP(operation="documentSymbol", filePath="{file_path}", line=1, character=1)` to enumerate symbols
- For each symbol:
- `hover` for type info
- `findReferences` to check usage (zero refs = potentially unused)
- `outgoingCalls` to map dependencies
3. **Dependency Extraction**
- Parse imports/requires
- Categorize: project file | external library | stdlib
4. **Library Usage Validation** (for external libraries)
- Use Context7 `resolve-library-id` then `query-docs`
- Check for: deprecated APIs, wrong signatures, missing error handling
5. **Documentation Drift Check**
- Compare code behavior vs PROJECT.md claims
- Flag mismatches
6. **Pattern Extraction**
- Extract functions >5 lines, regexes, magic constants
- Normalize (strip comments, whitespace)
- Hash each pattern
7. **Return JSON** in this exact format:
```json
{
"path": "{file_path}",
"purpose": "One-sentence description of what this file does",
"analyzed_at": "ISO timestamp",
"symbols": [
{
"name": "function_name",
"type": "function|class|variable",
"signature": "type signature if available",
"line": 15,
"used_by": ["path:line", ...],
"calls": ["path:function", ...]
}
],
"external_dependencies": [
{
"library": "library_name",
"functions_used": ["func1", "func2"],
"usage_assessment": "correct|deprecated_api|wrong_signature",
"notes": "explanation if issue"
}
],
"issues": [
{
"type": "unused_code|incorrect_usage|documentation_drift",
"subtype": "unreferenced_function|deprecated_api|code_doc_mismatch|...",
"diagnostic_level": "error|warning|info",
"location": {"line": 45, "end_line": 67},
"description": "Human-readable description",
"evidence": "What evidence supports this (LSP output, Context7 docs, etc)",
"suggested_fix": "How to fix this"
}
],
"patterns": [
{
"hash": "sha256_first_12_chars",
"name": "descriptive_name",
"type": "function|regex|constant",
"location": {"line": 23, "end_line": 35},
"normalized_content": "normalized code for comparison"
}
]
}
```
IMPORTANT:
---
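The normalization and hashing in step 6 of the analyzer prompt can be sketched as follows. This assumes '#'-style line comments for simplicity; `pattern_hash` is a hypothetical helper, and real comment stripping would be per-language.

```python
import hashlib
import re

def pattern_hash(code):
    """Normalize a code snippet (strip comments and whitespace) and return the
    first 12 hex chars of its SHA-256, matching the `hash` field in the result JSON."""
    no_comments = re.sub(r"#.*", "", code)      # assumes '#' line comments
    normalized = "".join(no_comments.split())   # collapse all whitespace
    return hashlib.sha256(normalized.encode()).hexdigest()[:12]

# Formatting and comments don't change the hash, so these collide as duplicates:
assert pattern_hash("x = 1  # set x") == pattern_hash("x=1")
```

Because hashing runs on normalized content, the cross-file duplicate detection in Phase 3 catches copies that differ only in whitespace or comments.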
## Output Files
### {memory_dir}/file-audit/queue.json
```json
{
"status": "in_progress|completed",
"total_files": 150,
"completed": 45,
"current_batch": ["src/api/routes.py", "src/auth/login.py"],
"started_at": "2026-01-10T14:00:00Z",
"files": [
{"path": "src/auth/login.py", "status": "completed", "analyzed_at": "..."},
{"path": "src/api/routes.py", "status": "in_progress"},
{"path": "src/utils/helpers.py", "status": "pending"}
]
}
```
### {memory_dir}/file-audit/inventory.json

```json
{
"project": "project-name",
"analyzed_at": "2026-01-10",
"summary": {
"total_files": 150,
"total_symbols": 1234,
"issues_by_diagnostic_level": {"error": 5, "warning": 23, "info": 45},
"issues_by_type": {
"unused_code": 12,
"incorrect_usage": 8,
"duplication": 15,
"documentation_drift": 3
}
},
"files": [
{ "...file analysis results..." }
],
"pattern_registry": {
"abc123": {
"name": "email_validation_regex",
"occurrences": ["src/auth/login.py:23", "src/auth/register.py:34"]
}
},
"todos": [
{
"file": "src/auth/login.py",
"issue_type": "unused_code",
"action": "Remove `legacy_auth` function",
"classification": "needs-fix"
}
]
}
```
Issue subtypes:

- unreferenced_function: Function has zero references (LSP findReferences empty)
- unreferenced_variable: Variable assigned but never read
- dead_import: Import statement not used in file
- deprecated_api: Using deprecated function/method (Context7 flagged)
- wrong_signature: Incorrect parameters passed (Context7 mismatch)
- missing_error_handling: Function raises but not caught
- exact_duplicate: Identical code (content hash match)
- near_duplicate: Very similar code (structural hash + >80% similarity)
- code_doc_mismatch: Code behavior differs from PROJECT.md claims
- missing_feature: Documented feature not implemented in code
- undocumented_feature: Significant code without documentation

| Diagnostic Level | Criteria | Classification |
|---|---|---|
| error | Broken code, security vulnerability | Requires immediate action (needs-fix) |
| warning | Deprecated API, unused code, duplicates | Requires action (needs-fix) |
| info | Style issue, minor optimization | Review and decide (needs-input if architectural, needs-fix if stylistic) |
After writing inventory.json, verify no findings were lost during consolidation. Count the total issues discovered across all analyzed files (from the per-file analysis results) and compare against the total number of todos in the inventory's todos array. If the todo count is less than the discovered issue count, findings were dropped during consolidation; investigate and restore them before proceeding. This is the same principle as the Reconcile step in pr-review and plan-review.
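The reconciliation check can be sketched as a simple count comparison. This is an illustrative sketch; `reconcile` is a hypothetical helper name.

```python
def reconcile(file_results, inventory):
    """Verify no findings were lost: todo count must cover the discovered issue count."""
    discovered = sum(len(r.get("issues", [])) for r in file_results)
    todos = len(inventory.get("todos", []))
    if todos < discovered:
        raise RuntimeError(
            f"{discovered - todos} finding(s) dropped during consolidation: "
            f"{discovered} discovered vs {todos} todos"
        )
    return discovered, todos
```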
After writing inventory.json, check the generated TODO list for items classified as needs-input.
If any exist, present them to the user before exiting. Do NOT exit with unresolved needs-input
items.
Present each needs-input item individually via AskUserQuestion. Each item gets its own
question with full context. Batch up to 4 per call:
```
AskUserQuestion(questions=[
  {
    "question": "[{file}:{issue_type}] {action}\n\nDiagnostic: {diagnostic_level}\nDecision needed: {input_needed}\n▸dp:file={file},line=0,cat={issue_type},skill=file-audit",
    "header": "{file}",
    "options": [
      ... (map each element from the finding's `options` array to {label, description}),
      {"label": "Defer", "description": "Skip for now — user-deferred"}
    ],
    "multiSelect": false
  },
  ... (one question per item, up to 4 per call)
])
```
When options is null (findings from pipelines without a verifier), fall back to the binary:
[{"label": "Fix"}, {"label": "Defer"}].
File-audit has no Finding Verifier — the Lead applies the de-escalation test from
code-quality/references/finding-classification.md inline before presenting to the user.
If the finding has a single correct resolution, reclassify to needs-fix and fix it.
If more than 4 needs-input items exist, make multiple AskUserQuestion calls.
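Splitting the items across multiple calls is plain chunking. A minimal sketch, with `batch_questions` as a hypothetical helper:

```python
def batch_questions(items, per_call=4):
    """Split needs-input items into AskUserQuestion calls of at most 4 questions each."""
    return [items[i:i + per_call] for i in range(0, len(items), per_call)]

# 9 needs-input items -> three calls of sizes 4, 4, and 1
calls = batch_questions(list(range(9)))
assert [len(c) for c in calls] == [4, 4, 1]
```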
For each needs-input TODO:

- If the user selects a resolution option, reclassify the finding as needs-fix in inventory.json and record the selected option label in the finding's suggested_fix field.
- If the user selects Defer, mark the finding as user-deferred in inventory.json.

If zero needs-input TODOs exist, skip this step. If AskUserQuestion is unavailable, treat needs-input items as needs_context in the inventory (surface them, don't hide them).
When /file-audit --resume is invoked:

- Read {memory_dir}/file-audit/queue.json
- Re-queue files with status: "pending" or status: "in_progress"
- Reset in_progress files to pending (they were interrupted)

When LSP is unavailable for a file type:

- Record "analysis_method": "regex_fallback" in the file's result
- Set "type_info_available": false

Use /file-audit --status to monitor progress.

The analyzer reads project memory files (detected per code-quality/references/project-memory-reference.md) to ground its analysis:

- PROJECT.md describes architecture and decisions
- TODO.md shows active work
- LESSONS.md provides principle-level context for the project

This enables detection of documentation_drift issues that pure code analysis would miss.