# formal-verify
Perform comprehensive, deep analysis of a system and its subsystems to identify bugs, race conditions, stale documentation, dead code, and correctness issues. Use when asked to "audit this system", "exhaustive analysis of X", "analyze for correctness", "root out issues in...", "deep dive into...", "verify this code is correct", "find bugs in...", or when reviewing agent-written code for production readiness. Automatically decomposes systems into subsystems, applies appropriate analysis checklists, and produces structured findings with severity classification.
Install: `npx claudepluginhub petekp/agent-skills --plugin literate-guide`. This skill uses the workspace's default tool permissions.
Systematic audit methodology for rooting out latent issues in codebases, particularly agent-written code that needs verification before production use.
Before analysis, map the system's subsystems. Auto-discover them from directory and module structure, entry points, and the side effects each module performs.
Output a subsystem table:
| # | Subsystem | Files | Side Effects | Priority |
|---|-----------|-------|--------------|----------|
| 1 | Lock System | lock.rs | FS: mkdir, rm | High |
| 2 | API Layer | api/*.rs | Network, DB | High |
| 3 | Config Parser | config.rs | FS: read | Medium |
Priority heuristics: subsystems with external side effects (filesystem, network, database writes) or shared mutable state rank High; read-only or pure-logic subsystems rank Medium or Low.
Analyze subsystems in priority order. For large codebases (>5 subsystems or >3000 LOC per subsystem), prefer clearing context between subsystems to prevent analysis drift.
For each subsystem, apply the appropriate checklist based on subsystem type.
After all subsystems are analyzed, consolidate the findings and write the summary.
Select a checklist based on subsystem characteristics; apply multiple checklists if more than one fits.
**State & lifecycle:**

| Check | Question |
|---|---|
| Correctness | Does code do what documentation claims? |
| Atomicity | Can partial writes corrupt state? |
| Race conditions | Can concurrent access cause inconsistency? |
| Cleanup | Are resources released on all exit paths (success, error, panic)? |
| Error recovery | Do failures leave the system in a valid state? |
| Stale documentation | Do comments match actual behavior? |
| Dead code | Are there unused code paths that could confuse maintainers? |
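To illustrate the atomicity check, a minimal Rust sketch of the fix an audit would recommend for non-atomic state writes: write to a temp file, then rename. The file name and function are hypothetical.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Atomic write: a crash between the write and the rename leaves the
// old file intact, so readers never observe a partially written state.
fn write_atomic(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(data)?;
        f.sync_all()?; // flush to disk before the rename
    }
    fs::rename(&tmp, path) // rename is atomic on POSIX filesystems
}

fn main() -> std::io::Result<()> {
    let path = Path::new("state.json");
    write_atomic(path, b"{\"ok\":true}")?;
    assert_eq!(fs::read(path)?, b"{\"ok\":true}");
    fs::remove_file(path)?;
    Ok(())
}
```

A direct `File::create` over the live path fails this check: a crash mid-write corrupts the only copy.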
**API / network:**

| Check | Question |
|---|---|
| Input validation | Are all inputs validated before use? |
| Error responses | Do errors leak internal details? |
| Timeout handling | Are network operations bounded? |
| Retry safety | Are operations idempotent or properly guarded? |
| Authentication | Are auth checks applied consistently? |
| Rate limiting | Can the API be abused? |
| Serialization | Can malformed payloads cause panics? |
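As a sketch of the retry-safety row: an operation guarded by a request id, so a client retry after a timeout cannot double-apply it. The `Ledger` type and id scheme are invented for illustration.

```rust
use std::collections::HashSet;

// Idempotency guard: replaying the same request id is a no-op.
struct Ledger {
    seen: HashSet<String>,
    balance: i64,
}

impl Ledger {
    fn new() -> Self {
        Ledger { seen: HashSet::new(), balance: 0 }
    }

    /// Returns true if the deposit was applied, false if it was a replay.
    fn deposit(&mut self, request_id: &str, amount: i64) -> bool {
        if !self.seen.insert(request_id.to_string()) {
            return false; // duplicate: already applied
        }
        self.balance += amount;
        true
    }
}

fn main() {
    let mut l = Ledger::new();
    assert!(l.deposit("req-1", 50));
    assert!(!l.deposit("req-1", 50)); // retry is safely ignored
    assert_eq!(l.balance, 50);
}
```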
**Concurrency:**

| Check | Question |
|---|---|
| Deadlock potential | Can lock acquisition order cause deadlock? |
| Data races | Is shared mutable state properly synchronized? |
| Starvation | Can any task be indefinitely blocked? |
| Cancellation | Are cancellation/shutdown paths clean? |
| Resource leaks | Are spawned tasks/threads joined or detached properly? |
| Panic propagation | Do panics in tasks crash the whole system? |
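One hedged sketch of the deadlock-potential check: the classic fix is a global lock order. Here two accounts are locked by address order, so concurrent opposite-direction transfers cannot deadlock; the `transfer` function is hypothetical and assumes the two mutexes are distinct.

```rust
use std::sync::Mutex;

// Lock-ordering discipline: always acquire the lock with the lower
// address first, fixing a global order across all call sites.
// Assumes `from` and `to` are distinct mutexes.
fn transfer(from: &Mutex<i64>, to: &Mutex<i64>, amount: i64) {
    let (first, second) = if (from as *const _) < (to as *const _) {
        (from, to)
    } else {
        (to, from)
    };
    let mut a = first.lock().unwrap();
    let mut b = second.lock().unwrap();
    // Work out which guard protects which account.
    if std::ptr::eq(first, from) {
        *a -= amount;
        *b += amount;
    } else {
        *b -= amount;
        *a += amount;
    }
}

fn main() {
    let x = Mutex::new(100);
    let y = Mutex::new(0);
    transfer(&x, &y, 30);
    transfer(&y, &x, 10);
    assert_eq!(*x.lock().unwrap(), 80);
    assert_eq!(*y.lock().unwrap(), 20);
}
```

Audits look for the unordered version of this pattern: thread A locks `x` then `y` while thread B locks `y` then `x`.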
**UI / frontend:**

| Check | Question |
|---|---|
| State consistency | Can UI show stale or inconsistent state? |
| Error states | Are all error conditions rendered appropriately? |
| Loading states | Are async operations properly indicated? |
| Accessibility | Are interactions keyboard/screen-reader accessible? |
| Memory leaks | Are subscriptions/observers cleaned up? |
| Re-render efficiency | Are unnecessary re-renders avoided? |
**Data handling:**

| Check | Question |
|---|---|
| Edge cases | Are empty, null, and boundary values handled? |
| Type coercion | Are implicit conversions safe? |
| Overflow/underflow | Are numeric operations bounded? |
| Encoding | Is text encoding handled consistently (UTF-8)? |
| Injection | Can untrusted input escape its context? |
| Invariants | Are data invariants enforced and documented? |
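A small sketch of the overflow/underflow check: checked arithmetic surfaces overflow as `None` instead of silently wrapping in release builds. The quota domain is invented for illustration.

```rust
// Bounded numeric operation: checked_add makes overflow an explicit
// error path rather than a wrap (release) or panic (debug).
fn add_quota(used: u32, requested: u32, limit: u32) -> Option<u32> {
    let total = used.checked_add(requested)?; // None on overflow
    if total <= limit { Some(total) } else { None }
}

fn main() {
    assert_eq!(add_quota(10, 5, 100), Some(15));
    assert_eq!(add_quota(90, 20, 100), None); // exceeds limit
    assert_eq!(add_quota(u32::MAX, 1, u32::MAX), None); // would overflow
}
```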
**Configuration:**

| Check | Question |
|---|---|
| Defaults | Are defaults safe and documented? |
| Validation | Are invalid configs rejected early with clear errors? |
| Secrets | Are secrets handled securely (not logged, not in VCS)? |
| Hot reload | If supported, is reload atomic and safe? |
| Compatibility | Are breaking changes versioned or migrated? |
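To illustrate early validation with clear errors, a hedged sketch; the `Config` fields and bounds are hypothetical.

```rust
// Fail-fast config validation: reject invalid values at startup with a
// message naming the field, instead of misbehaving later at use time.
struct Config {
    port: u16,
    workers: usize,
}

fn validate(port: u32, workers: usize) -> Result<Config, String> {
    if port == 0 || port > 65535 {
        return Err(format!("port must be 1-65535, got {port}"));
    }
    if workers == 0 {
        return Err("workers must be at least 1".to_string());
    }
    Ok(Config { port: port as u16, workers })
}

fn main() {
    assert!(validate(8080, 4).is_ok());
    assert!(validate(0, 4).is_err());   // rejected with a clear message
    assert!(validate(8080, 0).is_err());
}
```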
Classify every finding. Assume the user will fix every reported issue, so severity must reflect real impact rather than effort to fix.
| Severity | Criteria | Examples |
|---|---|---|
| Critical | Data loss, security vulnerability, crash in production | Unhandled panic, SQL injection, file corruption |
| High | Incorrect behavior users will notice | Wrong calculation, race causing wrong UI state, timeout too short |
| Medium | Technical debt that causes confusion or future bugs | Stale docs, misleading names, redundant code paths |
| Low | Cosmetic or minor improvements | Unused parameter, suboptimal algorithm (works correctly) |
Every finding must follow this structure:
### [SUBSYSTEM] Finding N: Brief Title
**Severity:** Critical | High | Medium | Low
**Type:** Bug | Race condition | Security | Stale docs | Dead code | Design flaw
**Location:** `file.rs:line_range` or `file.rs:function_name`
**Problem:**
What's wrong and why it matters. Be specific.
**Evidence:**
Code snippet or reasoning demonstrating the issue.
**Recommendation:**
Specific fix. Include code if helpful.
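A hypothetical filled-in finding, using the Lock System subsystem from the example table; the function name, behavior, and fix are invented for illustration.

```markdown
### [Lock System] Finding 1: Lock directory not removed on panic path
**Severity:** High
**Type:** Bug
**Location:** `lock.rs:acquire`
**Problem:**
If the guarded operation panics, the lock directory is never removed,
so every later run fails as "already locked" until manual cleanup.
**Evidence:**
`acquire` creates the directory, but removal only runs on the `Ok`
branch; no `Drop` guard covers the panic path.
**Recommendation:**
Wrap the lock in an RAII guard whose `Drop` removes the directory.
```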
Adapt output to project organization. Common patterns:
.claude/docs/audit/
├── 00-analysis-plan.md # Subsystem table, priorities, methodology
├── 01-subsystem-name.md # Individual analysis
├── 02-another-subsystem.md
└── SUMMARY.md # Consolidated findings, action items
Or, as a single file: `.claude/docs/audit/system-name-audit.md`, containing the plan, all findings, and the summary.
If project has existing docs/ or similar, place audit artifacts there.
Always create a summary with findings grouped by severity and a prioritized action-item list.
For thorough analysis of large systems, clear context between subsystems. When clearing context, document progress in the analysis plan file so the next session can continue where the last left off.
Before deep analysis, scan for documented issues: `TODO`, `FIXME`, `HACK`, and `XXX` comments; open entries in the issue tracker; known limitations noted in docs. Add these as starting hypotheses; verify or refute each during analysis.
| Anti-Pattern | Why It's Bad | Instead |
|---|---|---|
| Skimming code | Misses subtle bugs | Read every line in scope |
| Assuming correctness | Agent code often has edge case bugs | Verify each code path |
| Vague findings | "This looks wrong" isn't actionable | Cite specific lines, explain why |
| Over-scoping | Analysis paralysis | Strict subsystem boundaries |
| Ignoring tests | Tests reveal assumptions | Read tests to understand intent |
Analysis is complete when every subsystem has been checked against its applicable checklists, every finding is classified by severity with a specific recommendation, and the summary is written.