# tac
Audits codebases against the 12 leverage points of agentic coding, including context, tests, docs, logging, and architecture. Identifies gaps and prioritizes fixes for AI agent success.
Install:

```shell
npx claudepluginhub melodic-software/claude-code-plugins --plugin tac
```
Audit a codebase against the 12 leverage points framework to identify gaps and improve agentic coding success.
- **CLAUDE.md presence**
  - Search for: `CLAUDE.md`, `.claude/CLAUDE.md`
  - Check: Does it explain the project, its conventions, and common commands?
- **README.md quality**
  - Search for: `README.md`
  - Check: Does it explain the structure, how to run, and how to test?
- **Permissions configuration**
  - Search for: `.claude/settings.json`
  - Check: Are the required tools allowed?
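For the permissions check, this is the kind of file to look for. A minimal `.claude/settings.json` sketch; the exact schema is defined by the Claude Code settings documentation, and the allowed commands here are illustrative:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Edit",
      "Bash(npm test:*)",
      "Bash(npm run lint:*)"
    ]
  }
}
```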
- **Standard out patterns**
  - Search for: `print(`, `console.log(`, `logger.`, `logging.`
  - Check: Are success AND error cases logged?
  - Check: Can the agent see what is happening?
- **Anti-pattern detection**
  - Look for: silent returns, bare `except` blocks, empty `catch` blocks
  - These patterns prevent agent visibility.
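A small Python sketch of the difference this makes (the `validate` and `save_user` names are hypothetical, not from any real project): one version swallows failures, the other logs both paths so the agent can see what happened.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

def validate(user: dict) -> None:
    """Raise ValueError if the record is malformed (illustrative rule)."""
    if "id" not in user:
        raise ValueError("missing id")

# Anti-pattern: the failure is swallowed, so nothing ever
# reaches standard out and the agent cannot self-correct.
def save_user_bad(user: dict) -> None:
    try:
        validate(user)
    except Exception:
        pass  # silent -- invisible to the agent

# Better: log both the success and the error case.
def save_user(user: dict) -> bool:
    try:
        validate(user)
    except ValueError as exc:
        log.error("save_user failed for %r: %s", user.get("id"), exc)
        return False
    log.info("save_user succeeded for %r", user.get("id"))
    return True
```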
- **Type definitions**
  - Search for: `interface`, `type`, `class`, `BaseModel`, `dataclass`
  - Check: Are names information-dense? (Good: `UserAuthToken`; bad: `Data`)
- **Internal docs**
  - Search for: `*.md` files, docstrings, comments
  - Check: Do they explain WHY, not just WHAT?
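A Python illustration of both checks above (both classes are invented for the example): an information-dense name plus a docstring that explains WHY, not just WHAT.

```python
from dataclasses import dataclass

# Vague: the name carries no information for an agent scanning the code.
@dataclass
class Data:
    value: str

# Information-dense: the name alone says what it holds and how it is used.
@dataclass
class UserAuthToken:
    """Bearer token issued at login.

    WHY: expiry is stored as epoch seconds (not a datetime) so that
    comparisons in hot request paths need no timezone handling.
    """
    token: str
    user_id: int
    expires_at: float
```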
- **Test presence**
  - Search for: `test_*.py`, `*.test.ts`, `*.spec.ts`, `*_test.go`
  - Check: Do tests exist? Are they comprehensive?
- **Test commands**
  - Check: Is there a simple test command (`npm test`, `pytest`, etc.)?
  - Check: Do tests run quickly?
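For repositories with no tests at all, even one minimal pytest-style file gives the agent a self-correction loop. A sketch, where `slugify` stands in for real project code:

```python
# tests/test_slugify.py -- pytest discovers test_* functions automatically

def slugify(title: str) -> str:
    """Lowercase a title and join the words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  a   b ") == "a-b"
```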
- **Entry points**
  - Check: Are entry points obvious (`main.py`, `index.ts`, `server.py`)?
- **File organization**
  - Check: Is the structure consistent, with related files grouped?
  - Check: Are file sizes reasonable (< 1000 lines)?
- **Slash commands**
  - Search for: `.claude/commands/`
  - Check: Are common workflows automated?
- **Automation**
  - Search for: GitHub Actions, hooks, triggers
  - Check: Are workflows automated?
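For the automation check, a minimal GitHub Actions workflow is the kind of artifact to look for. This sketch assumes a Node project with an `npm test` script:

```yaml
# .github/workflows/test.yml
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm test
```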
After the audit, provide a summary table:

| Leverage Point | Status | Priority | Recommendation |
|---|---|---|---|
| Context | Good/Fair/Poor | High/Med/Low | Specific action |
| ... | ... | ... | ... |

Then list the top 3-5 improvements in order of impact.
For each leverage point, give detailed findings. Example output:
## Leverage Point Audit Results
### Summary
- Tests: POOR (no test files found) - HIGHEST PRIORITY
- Standard Out: FAIR (some logging, missing error cases)
- Architecture: GOOD (clear structure, reasonable file sizes)
### Priority Actions
1. Add test suite - enables self-correction
2. Add error logging to API endpoints - enables visibility
3. Create /prime command - enables quick context
### Detailed Findings
[... specific recommendations ...]
Date: 2025-12-26 Model: claude-opus-4-5-20251101