Comprehensive codebase audit. Spawns specialized reviewers in parallel against a scoped portion of the codebase, consolidates findings, and generates an actionable report.
Spawns parallel reviewers to audit codebases for security, performance, and architecture issues.
/plugin marketplace add howells/arc
/plugin install arc@howells-arc
This skill inherits all available tools. When active, it can use any tool Claude has access to.
<required_reading> Read these reference files NOW:
<rules_context> Check for project coding rules:
Use Glob tool: .ruler/*.md
If .ruler/ exists, detect stack and read relevant rules:
| Check | Read from .ruler/ |
|---|---|
| Always | code-style.md |
| next.config.* exists | nextjs.md |
| react in package.json | react.md |
| tailwindcss in package.json | tailwind.md |
| .ts or .tsx files | typescript.md |
| vitest or jest in package.json | testing.md |
Pass relevant rules to each reviewer agent.
If .ruler/ doesn't exist: Continue without rules — they're optional.
</rules_context>
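The rules lookup above can be sketched in POSIX shell. The rule-file paths and checks come from the table; the `RULES` variable name is an assumption for illustration.

```shell
# Collect applicable rule files from .ruler/, per the table above.
RULES=""
if [ -d .ruler ]; then
  RULES=".ruler/code-style.md"   # always included
  if ls next.config.* >/dev/null 2>&1; then RULES="$RULES .ruler/nextjs.md"; fi
  if grep -q '"react"' package.json 2>/dev/null; then RULES="$RULES .ruler/react.md"; fi
  if grep -q '"tailwindcss"' package.json 2>/dev/null; then RULES="$RULES .ruler/tailwind.md"; fi
fi
echo "rules: ${RULES:-none}"
```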
Parse arguments:
$ARGUMENTS may contain:
- A scope path (e.g. apps/web, packages/ui, src/)
- A focus flag (--security, --performance, --architecture)
- Both (e.g. apps/web --security)
If no scope provided:
Use Glob tool to detect structure:
- apps/*, packages/* → monorepo (audit both)
- src/* → standard (audit src/)
Detect project type with Glob + Grep:
| Check | Tool | Pattern |
|---|---|---|
| Next.js | Grep | "next" in package.json |
| React | Grep | "react" in package.json |
| Python | Glob | requirements.txt, pyproject.toml |
| Rust | Glob | Cargo.toml |
| Go | Glob | go.mod |
Check for database/migrations:
Use Glob tool: prisma/*, drizzle/*, migrations/* → has-db
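The detection steps above can be sketched in plain shell in place of the Glob/Grep tools; the variable names are illustrative, not part of the spec.

```shell
# Structure: monorepo if apps/ or packages/ exists, otherwise standard.
STRUCTURE=standard
if [ -d apps ] || [ -d packages ]; then STRUCTURE=monorepo; fi

# Project type: later checks override earlier ones (Next.js implies React).
PROJECT_TYPE=general
if grep -q '"react"' package.json 2>/dev/null; then PROJECT_TYPE=react; fi
if grep -q '"next"' package.json 2>/dev/null; then PROJECT_TYPE=nextjs; fi
if [ -e Cargo.toml ]; then PROJECT_TYPE=rust; fi
if [ -e go.mod ]; then PROJECT_TYPE=go; fi

# Database: any of the migration directories marks the project has-db.
HAS_DB=no
for d in prisma drizzle migrations; do
  if [ -d "$d" ]; then HAS_DB=yes; fi
done
echo "structure=$STRUCTURE type=$PROJECT_TYPE db=$HAS_DB"
```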
Summarize detection:
Scope: [path or "full codebase"]
Project type: [Next.js / React / Python / etc.]
Has database: [yes/no]
Coding rules: [yes/no]
Focus: [all / security / performance / architecture]
Base reviewer selection by project type:
| Project Type | Core Reviewers |
|---|---|
| Next.js | security-sentinel, performance-oracle, architecture-strategist, lee-nextjs-reviewer, daniel-product-engineer-reviewer |
| React/TypeScript | security-sentinel, performance-oracle, architecture-strategist, daniel-product-engineer-reviewer, senior-reviewer |
| Python | security-sentinel, performance-oracle, architecture-strategist, senior-reviewer |
| Rust/Go | security-sentinel, performance-oracle, architecture-strategist, senior-reviewer |
| General | security-sentinel, performance-oracle, architecture-strategist, senior-reviewer |
Conditional additions:
- data-integrity-guardian
- code-simplicity-reviewer
Focus flag overrides:
- --security → only security-sentinel
- --performance → only performance-oracle
- --architecture → only architecture-strategist
Final reviewer list: Select 4-6 reviewers based on context.
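The selection logic above can be sketched as a shell case statement. Reviewer names come from the tables; `PROJECT_TYPE`, `HAS_DB`, and `FOCUS` are assumed example values carried over from the detection step.

```shell
# Example inputs (assumptions for illustration).
PROJECT_TYPE=nextjs HAS_DB=yes FOCUS=""

CORE="security-sentinel performance-oracle architecture-strategist"
case "$PROJECT_TYPE" in
  nextjs) REVIEWERS="$CORE lee-nextjs-reviewer daniel-product-engineer-reviewer" ;;
  react)  REVIEWERS="$CORE daniel-product-engineer-reviewer senior-reviewer" ;;
  *)      REVIEWERS="$CORE senior-reviewer" ;;
esac

# Conditional addition for projects with a database.
if [ "$HAS_DB" = yes ]; then REVIEWERS="$REVIEWERS data-integrity-guardian"; fi

# Focus flags override everything else.
case "$FOCUS" in
  --security)     REVIEWERS="security-sentinel" ;;
  --performance)  REVIEWERS="performance-oracle" ;;
  --architecture) REVIEWERS="architecture-strategist" ;;
esac
echo "$REVIEWERS"
```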
Read agent prompts: For each selected reviewer, read:
${CLAUDE_PLUGIN_ROOT}/agents/review/[reviewer-name].md
Spawn reviewers in parallel:
Task [security-sentinel] model: sonnet: "
Audit the following codebase for security issues.
Scope: [path]
Project type: [type]
Coding rules: [rules content if any]
Focus on: OWASP top 10, authentication/authorization, input validation, secrets handling, injection vulnerabilities.
Return findings in this format:
## Findings
### Critical
- [file:line] Issue description
### High
- [file:line] Issue description
### Medium
- [file:line] Issue description
### Low
- [file:line] Issue description
## Summary
[1-2 sentences]
"
Task [performance-oracle] model: sonnet: "
Audit the following codebase for performance issues.
[similar structure]
Focus on: N+1 queries, missing indexes, memory leaks, bundle size, render performance.
"
Task [architecture-strategist] model: sonnet: "
Audit the following codebase for architectural issues.
[similar structure]
Focus on: Component boundaries, coupling, abstraction levels, scalability concerns.
"
[Additional reviewers as selected...]
Wait for all agents to complete.
Collect all agent outputs.
Deduplicate:
Categorize by severity:
Group by domain:
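One way to sketch the deduplication step, keyed on the `[file:line]` token from the reviewer output format above. The `findings-demo.md` file name and sample findings are hypothetical.

```shell
# Demo input: two reviewers flagged the same file:line.
printf '%s\n' \
  '- [src/auth.ts:42] Missing input validation' \
  '- [src/auth.ts:42] Unvalidated user input' \
  '- [src/db.ts:10] N+1 query in loop' > findings-demo.md

# Keep the first occurrence of each [file:line] key ($2 after splitting
# on whitespace); later duplicates are dropped.
grep '^- \[' findings-demo.md | awk '!seen[$2]++'
```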
Create audit report:
mkdir -p docs/audits
File: docs/audits/YYYY-MM-DD-[scope-slug]-audit.md
# Audit Report: [scope]
**Date:** YYYY-MM-DD
**Reviewers:** [list of agents used]
**Scope:** [path or "full codebase"]
**Project Type:** [detected type]
## Executive Summary
[1-2 paragraph overview of findings]
- **Critical:** X issues
- **High:** X issues
- **Medium:** X issues
- **Low:** X issues
## Critical Issues
> Immediate action required
### [Issue Title]
**File:** `path/to/file.ts:123`
**Flagged by:** security-sentinel, architecture-strategist
**Description:** [What's wrong and why it matters]
**Recommendation:** [How to fix]
[Repeat for each critical issue]
## High Priority
> Should fix soon
[Same format as Critical]
## Medium Priority
> Technical debt
[Same format]
## Low Priority / Suggestions
> Nice to have
[Same format]
---
## Domain Breakdown
### Security
[Summary of security findings]
### Performance
[Summary of performance findings]
### Architecture
[Summary of architecture findings]
### Code Quality
[Summary of code quality findings]
### UI/UX
[Summary of UI/UX findings, if applicable]
### Data Integrity
[Summary of data integrity findings, if applicable]
---
## Next Steps
1. [Prioritized action item]
2. [Prioritized action item]
3. [Prioritized action item]
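The dated report path used above can be built like this; the scope-slug rule (slashes become dashes) is an assumption, not specified by the document.

```shell
SCOPE="apps/web"
# Slugify the scope path for the filename.
SLUG=$(printf '%s' "$SCOPE" | tr '/' '-')
REPORT="docs/audits/$(date +%F)-${SLUG}-audit.md"
echo "$REPORT"
```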
Commit the report:
git add docs/audits/
git commit -m "docs: add audit report for [scope]"
Show summary to user:
## Audit Complete
Reviewed: [scope]
Reviewers: [count] agents
Report: docs/audits/YYYY-MM-DD-[scope]-audit.md
### Summary
- Critical: X
- High: X
- Medium: X
- Low: X
### Top Issues
1. [Critical issue 1]
2. [Critical issue 2]
3. [High issue 1]
Offer next steps:
What would you like to do?
1. **Create tasks from findings** → Add critical/high issues to /arc:tasklist
2. **Focus on critical issues** → Create implementation plan for critical fixes
3. **Deep dive on [domain]** → Explore specific domain findings
4. **Done for now** → End session
If user selects:
- Create tasks from findings → append critical/high issues to docs/tasklist.md
- Focus on critical issues → /arc:detail with critical issues as scope
<progress_append> After completing the audit, append to progress journal:
## YYYY-MM-DD HH:MM — /arc:audit
**Task:** Audit [scope]
**Outcome:** Complete
**Files:** docs/audits/YYYY-MM-DD-[scope]-audit.md
**Decisions:**
- Critical: [N] issues
- High: [N] issues
- Reviewers: [list]
**Next:** [Create tasks / Focus on critical / Done]
---
</progress_append>
<success_criteria> Audit is complete when:
- Report is written to docs/audits/
</success_criteria>