theclauu
Use for codebase audits across multiple concerns — security, tech-debt, repo-health, docs, design, access paths, data models, frontend perf, cache, and CSO-style security review. Pick the lens that fits the question.
npx claudepluginhub artemis-xyz/theclauu --plugin theclauu

This skill uses the workspace's default tool permissions.
Lens-based auditing. Pick the lens matching the concern:
========================================
Scan the codebase for security vulnerabilities, present findings by severity, and generate phased remediation plans with branch/build/PR workflow.
Persona: Application security engineer — thorough, methodical, and pragmatic. Prioritize exploitable vulnerabilities over theoretical risks. Never print secret values.
Parse $ARGUMENTS at invocation:
- --auto: Fully non-interactive. Implies --output github. Scans, creates issues, returns summary. See orchestration guide Section 10.
- --output github: Write findings and remediation plans as GitHub Issues. See output guide (~/.claude/skills/_shared/output-guide.md).
- --output session: Present findings in chat only, no persistence.
- A focus area (e.g., auth, api/, dependencies). If provided, scope the scan to that area.

Related skills: /tech-debt, /frontend-performance-audit, /investigate-app

Follow these steps exactly in order.
Enter Plan Mode. Call EnterPlanMode to enter deliberation mode. All discovery, analysis, and proposal steps are read-only — plan mode enforces this by disabling write tools. If the user declines plan mode, proceed normally — the deliberation steps are still read-only by convention.
Scan the codebase across 8 categories (A-H). Follow scan-categories.md for the full checklist of each category's checks, tools, and grep patterns.
Present findings using the severity system and table format defined in severity-definitions.md.
Present the findings table and ask:
"Here are the security findings. Would you like me to generate remediation plans? I'll group related fixes into PRs."
Do NOT proceed to Phase 2 without explicit confirmation.
Exit Plan Mode. Call ExitPlanMode to transition to execution mode. The deliberation phase is complete — doc generation requires the Write tool.
Output location:
documentation/planning/security/<session_name>_<YYYY-MM-DD>/
├── 00_SECURITY_AUDIT.md
├── 01_<remediation-slug>.md
├── 02_<remediation-slug>.md
└── ...
Archive convention: See orchestration guide, Section 8.
Ask the user for a short session name, or derive one (e.g., api-security, full-audit).
Master audit document containing:
Group related findings into single PRs where sensible. For example:
Each doc represents exactly 1 PR and must include:
Follow Section 9 of the orchestration guide (~/.claude/skills/_shared/orchestration-guide.md). Scratch directory: /tmp/security-audit-<YYYY-MM-DD_HHMMSS>/research/. Plan agents read research from this directory.
Security-specific rule: All subagents MUST mask secret values in research files and docs. Show API_KEY=sk-**** — never the full value.
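For instance, a minimal masking sketch in Python (the regex and the three-character prefix are illustrative assumptions, not a prescribed implementation):

```python
# Illustrative only: keep a short prefix of the value, never the full secret.
import re

SECRET = re.compile(r"(?P<key>\b[A-Z0-9_]*(?:KEY|TOKEN|SECRET|PASSWORD)\b\s*=\s*)(?P<val>\S+)")

def mask(line: str) -> str:
    # Replace the value with its first 3 characters plus asterisks.
    return SECRET.sub(lambda m: m.group("key") + m.group("val")[:3] + "****", line)

print(mask("API_KEY=sk-live-abc123"))  # -> API_KEY=sk-****
```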
After generating all remediation docs, present a final summary:
Security Audit Summary
═══════════════════════════════════════════════════════════════════════════
Session: [name]
Date: [date]
Scope: [what was audited]
Findings: [total] ([critical] critical, [high] high, [medium] medium, [low] low)
Remediation Plans:
01 [title] [CRITICAL] [effort]
02 [title] [HIGH] [effort]
03 [title] [MEDIUM] [effort]
Ongoing Recommendations:
- Add `npm audit` to CI pipeline
- Set up Dependabot / Renovate for automated dependency updates
- Add pre-commit hook for secret scanning (e.g., gitleaks, detect-secrets)
- Schedule quarterly security audits
- Consider SAST tooling (Semgrep, CodeQL) for continuous scanning
Audit docs: documentation/planning/security/<session>/
═══════════════════════════════════════════════════════════════════════════
Then tell the user:
"Plans are ready. Run /implement-plan documentation/planning/security/<session>/ to start building — it will handle challenge review, branching, implementation, and PRs for each phase doc."
This skill produces plans, not code. Implementation is always handled by /implement-plan, which provides its own challenge round, verification, and PR workflow. Do NOT build, branch, or create PRs from this skill.
- Use the severity definitions in severity-definitions.md — don't inflate or deflate.
- If pip-audit isn't installed, note it and move on. Don't block the audit.
- Follow ~/.claude/skills/_shared/orchestration-guide.md. Context never flows through the orchestrator.

This skill supports --output github and --output session in addition to the default docs target.
Follow the output guide at ~/.claude/skills/_shared/output-guide.md:
- github: use the structured issue body format (Section 4), check for duplicates (Section 4.5), apply labels (Section 4.3). Map scan severities: CRITICAL → priority:critical, HIGH → priority:high, MEDIUM → priority:medium, LOW → priority:low.
- session: present findings in chat, stay in Plan Mode (Section 5).
- docs (default): follow the subagent workflow in the orchestration guide.

After creating issues, present the batch summary and return issue URLs for audit tracking.
When --auto is set (see orchestration guide Section 10):
- Use any remaining $ARGUMENTS as scope. If none provided, scan the full codebase.
- Mask secret values as sk-****.

========================================
Find, report, and plan remediation of technical debt in the current codebase.
Parse $ARGUMENTS at invocation:
- --auto: Fully non-interactive. Implies --output github. Scans, creates issues, returns summary. See orchestration guide Section 10.
- --output github: Write findings and plans as GitHub Issues. See output guide (~/.claude/skills/_shared/output-guide.md).
- --output session: Present findings in chat only, no persistence.
- A focus area (e.g., src/api/ or the auth module). If provided, scope the scan to that area instead of the full codebase.

Related skills: /security-audit, /frontend-performance-audit, /product-enhance

Enter Plan Mode. Call EnterPlanMode to enter deliberation mode. All discovery, analysis, and proposal steps are read-only — plan mode enforces this by disabling write tools. If the user declines plan mode, proceed normally — the deliberation steps are still read-only by convention.
Scan the codebase for common technical debt patterns:
Do NOT read CLAUDE.md or MEMORY.md directly — Claude already has both in its system prompt. Use the system prompt context for project understanding; focus scan effort on code patterns and structure.
Detect the project language from the codebase (look at file extensions, package.json, requirements.txt, Cargo.toml, go.mod, etc.). Use the Glob tool with appropriate patterns (e.g., **/*.py, **/*.ts, **/*.go).
Look for:
Use the Grep tool with pattern TODO|FIXME|HACK|XXX, glob *.{py,js,ts}, output_mode: content, and head_limit: 30.
Use the Glob tool with the detected language patterns to find source files, then run wc -l on the results to identify large files.
Files over 300 lines may need splitting.
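As a rough illustration, the marker scan and large-file check could be scripted like this (a hedged sketch; the glob, threshold, and output format mirror the steps above and should be adapted per project):

```python
# Hedged sketch of the marker and file-length scan; swap the glob for the
# detected language, and tune the 300-line threshold as needed.
import re
from pathlib import Path

DEBT_MARKER = re.compile(r"TODO|FIXME|HACK|XXX")

for path in Path(".").rglob("*.py"):
    lines = path.read_text(errors="ignore").splitlines()
    if len(lines) > 300:
        print(f"{path}: {len(lines)} lines; consider splitting")
    for i, line in enumerate(lines, 1):
        if DEBT_MARKER.search(line):
            print(f"{path}:{i}: {line.strip()}")
```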
Look for functions with:
Compare source files to test files — flag untested modules.
Detect the package manager and run the appropriate command:
- Python: pip list --outdated
- Node: npm outdated, or check package.json
- Rust: cargo outdated (if installed)
- Go: check go.mod for old versions

Skip gracefully if the tool isn't installed.
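A minimal sketch of the detect-and-run logic (marker files and commands come from the list above; the Go check is a file read, so it's omitted here):

```python
# Hedged sketch: run each ecosystem's outdated-dependency check if its
# marker file exists, and skip gracefully when the tool isn't installed.
import shutil
import subprocess
from pathlib import Path

CHECKS = [  # (marker file, command)
    ("requirements.txt", ["pip", "list", "--outdated"]),
    ("package.json", ["npm", "outdated"]),
    ("Cargo.toml", ["cargo", "outdated"]),
]

for marker, cmd in CHECKS:
    if not Path(marker).exists():
        continue
    if shutil.which(cmd[0]) is None:
        print(f"{cmd[0]} not installed; skipping (don't block the audit)")
        continue
    subprocess.run(cmd, check=False)  # report-only; non-zero exit is expected
```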
Provide a prioritized list:
For each item, suggest a specific fix.
After presenting the scan results, ask the user to confirm whether they want to generate the full tech debt documentation and phased remediation plans. Do NOT proceed without explicit confirmation.
Ask: "Would you like me to generate detailed tech debt documentation and phased remediation plans?"
Exit Plan Mode. Call ExitPlanMode to transition to execution mode. The deliberation phase is complete — doc generation requires the Write tool.
If yes, ask the user for a short session name (e.g., api-cleanup, db-layer) or derive one from the focus area. All output goes into:
documentation/planning/tech_debt/<session_name>_<YYYY-MM-DD>/
├── 00_TECH_DEBT.md
├── 01_<remediation-slug>.md
├── 02_<remediation-slug>.md
└── ...
Archive convention: See orchestration guide, Section 8.
Create 00_TECH_DEBT.md in the session directory containing:
Create individual plan documents prefixed by execution order: 01-*.md, 02-*.md, etc.
Each plan document represents exactly 1 PR and must include:
Follow Section 9 of the orchestration guide (~/.claude/skills/_shared/orchestration-guide.md). Scratch directory: /tmp/tech-debt-<YYYY-MM-DD_HHMMSS>/research/. Plan agents read research from this directory.
This skill supports --output github and --output session in addition to the default docs target.
Follow the output guide at ~/.claude/skills/_shared/output-guide.md:
- github: use the structured issue body format (Section 4), check for duplicates (Section 4.5), apply labels (Section 4.3). Map scan priorities: High → priority:high, Medium → priority:medium, Low → priority:low.
- session: present findings in chat, stay in Plan Mode (Section 5).
- docs (default): follow the subagent workflow in the orchestration guide.

After creating issues, present the batch summary and return issue URLs for audit tracking.
When --auto is set (see orchestration guide Section 10):
- Implies --output github.
- Use any remaining $ARGUMENTS as scope. If none provided, scan the full codebase but limit to the top 10 findings.
- Create issues (via --output github) for all findings above LOW severity.

========================================
Birds-eye view across multiple repositories. Scan for open PRs, CI status, stale branches, pending plan docs, and uncommitted work — then present a single dashboard.
Reference: health-checks.md — detailed check definitions, commands, and example output formats.
Follow these steps exactly in order.
Ask the user how to find repos:
- Scan a parent directory (e.g., ~/Projects). Use Glob with */.git to find repos one level deep.
- Use ~/.claude/notes/repo-health-repos.txt for a previously saved list.

Present discovered repos and ask the user to confirm. Offer to save the list for next time.
If using a saved list, validate each path exists and contains .git/. Report dead entries and ask: "Remove the dead entries from your saved list?"
Launch one Explore subagent per repo for parallel scanning. Gather per-repo data points (see health-checks.md for commands):
Current branch, working tree status, open PRs (mine), PRs to review, CI status, stale branches (14+ days), in-progress plans, pending plans, last commit, stash count.
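To make the data points concrete, here's a hedged sketch of gathering a few of them for one repo (the git and gh subcommands are standard; the repo path and output format are assumptions):

```python
# Hedged sketch: gather per-repo data points with git and gh, falling back
# to "no CI data" when gh is unavailable.
import subprocess
from pathlib import Path

def run(repo: Path, *cmd: str) -> str:
    try:
        r = subprocess.run(cmd, cwd=repo, capture_output=True, text=True)
        return r.stdout.strip() if r.returncode == 0 else ""
    except FileNotFoundError:  # e.g., gh not installed
        return ""

repo = Path("~/Projects/example").expanduser()  # hypothetical repo path
branch = run(repo, "git", "branch", "--show-current")
dirty = bool(run(repo, "git", "status", "--porcelain"))
stash_count = len(run(repo, "git", "stash", "list").splitlines())
my_prs = run(repo, "gh", "pr", "list", "--author", "@me", "--json", "number")
ci = run(repo, "gh", "run", "list", "--limit", "1") or "no CI data"
print(f"{repo.name}: {branch} dirty={dirty} stashes={stash_count} ci={ci}")
```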
Show a compact summary table — one line per repo with branch, tree status, PR count, review count, CI status. Include a totals row. Then expand with a Needs Attention section for repos with issues. See health-checks.md for detail format and examples.
Suggest a priority order: CI failures > PRs needing your review > PRs with changes requested > uncommitted changes > in-progress plans > stale branches.
Present as a short numbered list with specific actions. Ask: "Which repo do you want to start with?"
After the main dashboard, run a quick hygiene scan. Skip entirely if everything is clean. Check three areas (see health-checks.md for criteria and output formats):
- Completed plan docs still sitting in documentation/planning/. Offer to archive, delete, or skip each.
- Backups in ~/.local/share/claudefather/backups/. Offer to prune backups older than 30 days.
- Notes in ~/.claude/notes/projects/ whose repos no longer exist. Offer to remove.

Once the user picks a repo, suggest the relevant skill (/context-resume, /review-pr, /implement-plan). If it's a different directory, tell the user to switch there first.
- Saved list: ~/.claude/notes/repo-health-repos.txt — one path per line, never synced.
- Skip PR/CI checks if gh is unavailable; note "no CI data."
- Full PR and CI data requires gh and an authenticated session.

========================================
Rigorously audit project documentation against the actual codebase. Update inaccuracies, mark development plan statuses, archive stale docs, and identify gaps — ensuring full handoff-readiness.
Framing principle: This project is being handed off to another engineering team. There must be zero gaps. Any engineer who picks up the codebase should be able to fully understand it, have complete context, and confidently edit and enhance the codebase without asking the original team a single question.
Parse $ARGUMENTS at invocation:
- --auto: Fully non-interactive. Auto-fixes stale docs, creates GitHub Issues for gaps (implies --output github). See orchestration guide Section 10.
- --output github: Create GitHub Issues for documentation gaps. See output guide (~/.claude/skills/_shared/output-guide.md).
- --output session: Present findings in chat only, no persistence.
- A docs directory (e.g., docs/, documentation/). If provided, skip asking in Step 1.

Related skills: /tech-debt, /security-audit

Follow these steps exactly in order.
Ask the user:
- Review a specific docs directory (e.g., ./documentation, ./docs), or do a global review of the whole repo.

If the user chooses a global review, search for:
- All *.md files in the repo
- Common doc directories: docs/, documentation/, doc/

Automatically detect and exclude archive/legacy folders from the active review set:
archive/, legacy/, .archive/, old/, deprecated/

Find all documentation files within scope. Categorize each doc:
| Category | Description |
|---|---|
| Coding overview | Describes architecture, systems, code structure, APIs |
| Development plan | Describes phases of work, roadmaps, feature plans |
| Guide/runbook | How-to, setup, operational procedures |
| Reference | API docs, config references, schemas, changelogs |
| Other | Anything that doesn't fit above |
Present the inventory as a table:
Documentation Inventory
═══════════════════════════════════════════════════════
File Category Modified
docs/architecture.md Coding overview 2025-01-15
docs/roadmap.md Development plan 2025-02-01
README.md Guide/runbook 2025-02-10
CHANGELOG.md Reference 2025-02-10
═══════════════════════════════════════════════════════
4 files found (1 overview, 1 plan, 1 guide, 1 reference)
Ask the user to confirm the inventory and categories before proceeding. The user may exclude files or recategorize.
For each document categorized as Coding overview:
Launch subagents (Explore type) to trace every claim back to real code:
Important: Only modify documentation files. Never touch code files.
For each document categorized as Development plan:
| Status | Meaning |
|---|---|
| ✅ COMPLETED | Code exists, feature is implemented |
| 🔧 IN PROGRESS | Partial implementation exists |
| 📋 PENDING | No implementation found |
- Move fully completed plans with git mv so the rename is tracked in history (ask user for preferred location, default to archive/ in the doc's parent directory)
- Create the archive directory with mkdir -p if it doesn't exist

After reviewing all docs, identify:
Present findings as a list and ask the user which gaps to address:
Execute the user's choices — create or update docs as requested.
Print a final report:
Documentation Review Complete
═══════════════════════════════════════════════════════
Files reviewed: N total
Coding overview: N (N updated)
Development plan: N (N updated, N archived)
Guide/runbook: N (N updated)
Reference: N
Other: N
Changes made:
- docs/architecture.md: Fixed 3 file paths, added new API section
- docs/roadmap.md: Marked Phase 1 ✅, Phase 2 🔧, Phase 3 📋
- docs/old-plan.md: Archived → archive/old-plan.md (fully completed)
Gaps identified:
- No documentation for the auth middleware module
- Missing onboarding guide for local development setup
- 2 broken cross-references fixed
Handoff readiness: [assessment]
Could a new engineer onboard from these docs alone?
═══════════════════════════════════════════════════════
- Archiving: detect an existing archive/, legacy/, or .archive/ directory, or ask the user.

This skill supports --output github and --output session in addition to the default inline-fix behavior.
Follow the output guide at ~/.claude/skills/_shared/output-guide.md:
- github: use the structured issue body format (Section 4), check for duplicates (Section 4.5), apply labels (Section 4.3). Apply the docs label. Auto-fix verifiable inaccuracies first, then create issues for gaps requiring human judgment.
- session: present findings in chat, stay in Plan Mode (Section 5).

After creating issues, present the batch summary and return issue URLs for audit tracking.
When --auto is set (see orchestration guide Section 10):
- Take the scope from $ARGUMENTS or default to a global review.

========================================
Design-literate PM bridging visual polish and engineering. Audit a deployed app, find design/UX gaps, produce phased PR-ready design docs.
Parse $ARGUMENTS at invocation:
- --output github: activate GitHub Issues output mode. See output guide (~/.claude/skills/_shared/output-guide.md).
- --output session: present findings in chat only, no persistence.

Related skills: /frontend-performance-audit, /tech-debt, /product-enhance

Enter Plan Mode (EnterPlanMode). Steps 1-5 are read-only. If declined, proceed by convention.
Ask: (1) deployed URL, (2) focus area, (3) anything to skip, (4) front-end stack. Scratch dir: /tmp/design-review-<YYYY-MM-DD_HHMMSS>/research/.
Parallel: A. Explore subagents (disk-write pattern, orchestration guide Section 2) for front-end structure, styling, state, APIs, tokens, components. B. Begin Step 2.
Present codebase context summary. Confirm with user.
Detect Chrome (which google-chrome, which chromium, test -x "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" in parallel). Prefer local dev server -- check package.json for port, lsof if running, curl to verify; fall back to deployed URL.
Capture each page at 1440,900 / 768,1024 / 375,812. One screenshot per Bash call. No shell operators.
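A sketch of what one-screenshot-per-call capture could look like (Chrome's headless flags are real; the binary name, URL, and filenames are assumptions):

```python
# Hedged sketch: one headless-Chrome invocation per screenshot, no shell
# operators, at the three breakpoints listed above.
import subprocess

CHROME = "google-chrome"  # or the binary detected in the step above
BREAKPOINTS = ["1440,900", "768,1024", "375,812"]
url = "http://localhost:3000"  # hypothetical dev-server URL

for size in BREAKPOINTS:
    out = f"shot-{size.replace(',', 'x')}.png"
    subprocess.run(
        [CHROME, "--headless", f"--screenshot={out}", f"--window-size={size}", url],
        check=True,
    )
```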
Classify pages: MARKETING (hero-driven), APP UI (data-dense), HYBRID (per-section rules). See design-hard-rules.md.
DOM Extraction (optional, claude-in-chrome MCP -- skip if unavailable):
Fonts: JSON.stringify([...new Set([...document.querySelectorAll('*')].slice(0,500).map(e => getComputedStyle(e).fontFamily))])
Colors: JSON.stringify([...new Set([...document.querySelectorAll('*')].slice(0,500).flatMap(e => [getComputedStyle(e).color, getComputedStyle(e).backgroundColor]).filter(c => c !== 'rgba(0, 0, 0, 0)'))])
Headings: JSON.stringify([...document.querySelectorAll('h1,h2,h3,h4,h5,h6')].map(h => ({tag:h.tagName, text:h.textContent.trim().slice(0,50), size:getComputedStyle(h).fontSize, weight:getComputedStyle(h).fontWeight})))
Touch targets: JSON.stringify([...document.querySelectorAll('a,button,input,[role=button]')].filter(e => {const r=e.getBoundingClientRect(); return r.width>0 && (r.width<44||r.height<44)}).map(e => ({tag:e.tagName, text:(e.textContent||'').trim().slice(0,30), w:Math.round(e.getBoundingClientRect().width), h:Math.round(e.getBoundingClientRect().height)})).slice(0,20))
Ask: "Here's what I see. Anything I should look at more closely?"
Ask these one group at a time, waiting for answers:
Compare intent (Step 3) vs. observations (Step 2) vs. code (Step 1). Explore subagents for targeted digs.
Subagents read from this directory: audit-checklist.md, ai-slop-blacklist.md, font-knowledge.md, design-hard-rules.md. Calibrate by page-type. Present gap analysis with dual scores (Design + AI Slop), referencing screenshots and code.
Ask: "Does this match your experience? Anything I missed or got wrong?"
Propose enhancements scored on Impact, Effort, Risk. Classify as SAFE (baseline fix users expect) or RISK (differentiation -- explain upside and downside). SAFE first, ranked by impact-to-effort. Reference screenshots, Step 3 intent, code.
Ask: "Which would you like me to design? Pick by number or adjust." Wait for selection. Call ExitPlanMode.
Output to documentation/planning/phases/<session_name>_<YYYY-MM-DD>/. Overview + numbered docs (1 PR each). Orchestration guide Section 9.
Phase docs include: header, context + screenshots, visual spec (exact before/after), dependencies, implementation plan, responsive behavior, accessibility checklist, test plan, verification, "What NOT To Do."
Tell user: "Run /implement-plan on the phase directory to start building." This skill produces plans, not code.
This skill supports --output github and --output session in addition to the default docs target.
Follow the output guide at ~/.claude/skills/_shared/output-guide.md:
- github: use the structured issue body format (Section 4), check for duplicates (Section 4.5), apply labels (Section 4.3). Apply the design label. Include the SAFE vs RISK classification in the issue body.
- session: present findings in chat, stay in Plan Mode (Section 5).
- docs (default): follow the subagent workflow in the orchestration guide.

After creating issues, present the batch summary and return issue URLs for audit tracking.
========================================
Audit whether a system's access paths consistently enforce cross-cutting concerns — and whether those concerns are placed at the correct architectural layer (transport vs. domain core).
Persona: Senior platform architect who evaluates systems holistically. Evidence-driven — every finding cites specific code paths. Distinguishes between genuine inconsistencies and appropriate per-transport differences.
Parse $ARGUMENTS at invocation:
- --auto: Fully non-interactive. Implies --output github. Scans, creates issues, returns summary. See orchestration guide Section 10.
- --output github: Write findings and remediation plans as GitHub Issues. See output guide (~/.claude/skills/_shared/output-guide.md).
- --output session: Present findings in chat only, no persistence.
- A focus area (e.g., auth, validation, api/). If provided, scope the audit to that area.

Related skills: /security-audit, /tech-debt, /investigate-app, /frontend-performance-audit

Not every difference across access paths is a bug. A CLI having no auth is correct — it's a local operator tool. The real questions are: which concerns belong in the domain core rather than a specific transport, and are those domain-core concerns enforced consistently at the correct layer?
| Phase | Steps | What happens | User gate? |
|---|---|---|---|
| 1: Scan | Steps 1–5 | Detect paths, parallel discovery, convergence analysis, classify findings, present summary | No |
| Gate | — | User confirms whether to generate remediation plans | Yes |
| 2: Remediation | — | Generate per-PR planning docs grouped by related findings | No |
| 3: Summary | — | Present final summary, hand off to /implement-plan | No |
Follow these steps exactly in order.
Enter Plan Mode. Call EnterPlanMode to enter deliberation mode. All discovery, analysis, and proposal steps are read-only — plan mode enforces this by disabling write tools. If the user declines plan mode, proceed normally — the deliberation steps are still read-only by convention.
Do NOT read CLAUDE.md or MEMORY.md — already in system prompt.
If no focus area was provided in $ARGUMENTS, ask: "Any specific concern or access path to focus on?" Default to full system scan if no scope given.
Detect the system's framework and access patterns. Run in parallel:
- Glob **/routes/**/*.py, **/api/**/*.py, **/controllers/**/*.{ts,js,py,rb,go} (HTTP frameworks)
- Grep @app\.(get|post|put|delete|patch)|@router\.|@Controller|@RequestMapping|app\.(get|post|use)\( in source files (HTTP routes)
- Grep click\.command|typer\.command|argparse|Commander|cobra\.Command (CLI frameworks)
- Grep slack_bolt|slack_sdk|SlackBot|Bolt\( (Slack integrations)
- Grep FastMCP|McpServer|mcp\.tool|@mcp_tool (MCP servers)
- Grep celery|dramatiq|huey|rq\.job|@task|BackgroundTasks (background workers)
- Grep WebSocket|socket\.io|ws\.on (WebSocket handlers)
- Grep GraphQL|@Query|@Mutation|type Query (GraphQL endpoints)

If fewer than 2 access paths are found, tell the user: "This system appears to have a single access path — this audit is most valuable for systems with 2+ interfaces to the same core logic."
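For illustration, the pattern sweep could be scripted roughly like this (a sketch, not the required tooling — the skill itself uses the Glob/Grep tools; the regexes are abbreviated from the list above):

```python
# Hedged sketch: count Python files matching each access-path signal.
import re
from pathlib import Path

SIGNALS = {
    "http": re.compile(r"@app\.(get|post|put|delete|patch)|@router\.|@Controller"),
    "cli": re.compile(r"click\.command|typer\.command|argparse"),
    "mcp": re.compile(r"FastMCP|McpServer|mcp\.tool"),
    "worker": re.compile(r"celery|dramatiq|@task|BackgroundTasks"),
}

hits: dict[str, int] = {name: 0 for name in SIGNALS}
for path in Path(".").rglob("*.py"):
    text = path.read_text(errors="ignore")
    for name, pattern in SIGNALS.items():
        if pattern.search(text):
            hits[name] += 1

found = [name for name, count in hits.items() if count]
print(f"{len(found)} access-path types detected: {found}")
```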
Scratch directory: /tmp/access-path-audit-<YYYY-MM-DD_HHMMSS>/research/
Launch two general-purpose subagents in parallel (Agent tool, subagent_type: "general-purpose"). Each writes findings to scratch dir, returns 2-4 line summary. Orchestrator does NOT read full research files.
- Path inventory agent: catalogs every access path. Writes to research/path-inventory.md.
- Concern mapping agent: reads scan-categories.md and maps enforcement across every access path. Writes to research/concern-mapping.md.

Full subagent prompts and research file formats: See subagent-prompts.md in this skill directory.
Launch a third general-purpose subagent that reads both research files and builds the concern placement map. This is the core analytical step.
The convergence subagent must:
Classify each concern as transport-appropriate or domain-core:
Trace a representative operation (e.g., "search", "query") through each access path, noting exactly where each concern is applied
Build the consistency matrix: for each domain-core concern, is it enforced in the domain layer (shared) or only in some transport adapters (inconsistent)?
Identify shared code vs. duplicated logic: which cross-cutting implementations are shared (good) vs. copy-pasted across paths (maintenance risk)?
Writes to research/convergence.md. Returns 2-4 line summary.
Full convergence subagent prompt: See subagent-prompts.md.
Using the convergence summary, classify findings into four categories:
Category A: Genuine Gaps — domain concern missing from a path. Fix: push concern into domain core (see the sketch after this category list).
Category B: Misplaced Concerns — right concern, wrong layer. Fix: move to correct layer.
Category C: Appropriate Differences — paths correctly differ (transport-specific). Not a bug — list to show thoroughness.
Category D: Duplication Risk — shared logic copy-pasted. Fix: extract to shared module.
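To make Categories A and C concrete, here's a minimal sketch (all names hypothetical) of a concern pushed into the domain core so every transport inherits it:

```python
# Hedged sketch (names hypothetical): validation and quota live in the domain
# core, so HTTP, CLI, and worker paths cannot drift apart (fixes Category A).
class QuotaExceeded(Exception):
    """Domain exception; adapters translate it (never raise HTTP errors here)."""

def run_search(user_id: str, query: str) -> list[str]:
    # Domain-core entry point: every access path calls this.
    if not query.strip():
        raise ValueError("empty query")  # enforced once, for all transports
    if _over_quota(user_id):
        raise QuotaExceeded(user_id)
    return _execute(query)

def _over_quota(user_id: str) -> bool:
    return False  # stub for illustration

def _execute(query: str) -> list[str]:
    return [query]  # stub for illustration

# An HTTP adapter maps QuotaExceeded to a 429; a CLI adapter calls
# run_search() directly, with no auth, which is a Category C difference.
```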
Severity for categories A and B:
| Severity | Definition |
|---|---|
| CRITICAL | Exploitable security gap or data integrity risk across paths |
| HIGH | Inconsistency that causes incorrect behavior under normal use |
| MEDIUM | Inconsistency that causes issues under edge cases or load |
| LOW | Maintenance risk or minor inconsistency with no immediate impact |
Category C findings are not bugs — list them to show the audit was thorough. Category D findings are always MEDIUM or LOW.
Present to the user:
- The consistency matrix with a legend (MW = middleware, Route! = route-only but should be domain, None! = missing gap, None* = correctly absent, Leaks! = present but broken)

Present the findings and ask:
"Here are the access path audit findings. Would you like me to generate remediation plans? I'll group related fixes into PRs."
Do NOT proceed to Phase 2 without explicit confirmation.
Exit Plan Mode. Call ExitPlanMode to transition to execution mode. The deliberation phase is complete — doc generation requires the Write tool.
Output location:
documentation/planning/access-paths/<session_name>_<YYYY-MM-DD>/
├── 00_ACCESS_PATH_AUDIT.md
├── 01_<remediation-slug>.md
├── 02_<remediation-slug>.md
└── ...
Archive convention: See orchestration guide, Section 8.
Ask the user for a short session name, or derive one (e.g., domain-boundary, full-audit).
Master audit document containing:
Group related findings into single PRs where sensible. For example:
Each doc represents exactly 1 PR and must include:
Follow Section 9 of the orchestration guide (~/.claude/skills/_shared/orchestration-guide.md). Scratch directory: /tmp/access-path-audit-<YYYY-MM-DD_HHMMSS>/research/. Plan agents read research from this directory.
This skill supports --output github and --output session in addition to the default docs target.
Follow the output guide at ~/.claude/skills/_shared/output-guide.md:
github: use the structured issue body format (Section 4), check for duplicates (Section 4.5), apply labels (Section 4.3). Map Category A/B → priority:critical/priority:high, Category C → priority:medium, Category D → priority:low.session: present findings in chat, stay in Plan Mode (Section 5)docs (default): follow the subagent workflow in the orchestration guideAfter creating issues, present the batch summary and return issue URLs for audit tracking.
After generating all remediation docs, present a final summary:
Access Path Audit Summary
═══════════════════════════════════════════════════════════════════════════
Session: [name]
Date: [date]
Scope: [what was audited]
Architecture: [pattern] with [N] access paths
Domain Core: [key shared modules]
Findings: [total] ([critical] critical, [high] high, [medium] medium, [low] low)
Appropriate Diffs: [count] (not bugs — transport-specific)
Remediation Plans:
01 [title] [CRITICAL] [effort]
02 [title] [HIGH] [effort]
03 [title] [MEDIUM] [effort]
Ongoing Recommendations:
- Schedule periodic re-audit when new access paths are added
- New access paths should go through domain core, not duplicate transport logic
- Domain-core concerns should raise domain exceptions, not HTTP exceptions
Audit docs: documentation/planning/access-paths/<session>/
═══════════════════════════════════════════════════════════════════════════
Then tell the user:
"Plans are ready. Run /implement-plan documentation/planning/access-paths/<session>/ to start building — it will handle challenge review, branching, implementation, and PRs for each phase doc."
This skill produces plans, not code. Implementation is always handled by /implement-plan, which provides its own challenge round, verification, and PR workflow. Do NOT build, branch, or create PRs from this skill.
| Mistake | Fix |
|---|---|
| Treating every cross-path difference as a bug | Use Category C — transport-specific differences (CLI has no auth, HTTP has CORS) are correct. List them to prove thoroughness. |
| Tracing only middleware stacks | Trace at function-call depth — entry point → middleware → domain service → data layer. Middleware-only traces miss gaps in business logic. |
| Missing mounted sub-apps in dual mode | Sub-apps (e.g., MCP on FastAPI) inherit parent middleware in HTTP mode but NOT in stdio/standalone. Always check both deployment modes (see the sketch after this table). |
| Missing LLM-mediated paths | Slack → ChatService → LLM → ToolExecutor is an indirect access path. Trace concerns through the full chain including the dispatch layer, not just the outer entry point. |
| Shallow "access path" definition | Any code path that can invoke domain logic with different cross-cutting behavior counts. Internal dispatch layers (tool executors, job runners) count if they have their own validation/error handling distinct from their caller. |
| Claiming definitive findings for graceful degradation | Scan category I is best audited through code review of error handling and timeout configs. Flag areas of concern for follow-up runtime testing rather than making definitive claims from static analysis alone. |
| Including secrets in findings | Never include connection strings or credentials verbatim — file:line references only. |
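A minimal sketch of the mounted sub-app pitfall (FastAPI mounting and middleware are real APIs; the auth check and the MCP stand-in are hypothetical):

```python
# Hedged sketch: middleware registered on the parent app wraps all HTTP
# traffic, including requests routed into a mounted sub-app.
from fastapi import FastAPI, Request

parent = FastAPI()
sub = FastAPI()  # stands in for an MCP server's HTTP app

@parent.middleware("http")
async def require_auth(request: Request, call_next):
    # Runs for every HTTP request, including ones that reach `sub` via the mount.
    return await call_next(request)

parent.mount("/mcp", sub)  # HTTP mode: `sub` inherits the auth middleware

# In stdio/standalone mode the sub-app's handlers are invoked directly,
# bypassing `parent` entirely, so require_auth never runs. Check both modes.
```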
Follow ~/.claude/skills/_shared/orchestration-guide.md Sections 2 & 6. Three subagents in Phase 1 (two parallel, one sequential). Phase 2 uses Plan agents per Section 9. Orchestrator coordinates only.

When --auto is set (see orchestration guide Section 10):
- Use any remaining $ARGUMENTS as scope. If none provided, scan the full system.

========================================
Audit how well a Python/Postgres application's data model serves its codebase — traces code paths to DB interactions, maps intent to schema, finds where the model fights the application.
Persona: Senior data architect who reads code. Evidence-driven — every finding cites specific code paths and schema elements.
Follow steps in order. Call EnterPlanMode first — the entire skill is diagnostic (no write phase). If user declines, proceed read-only by convention. Never exit plan mode.
Do NOT read CLAUDE.md or MEMORY.md — already in system prompt.
Ask: (1) "What's the pain point?" and (2) "Any specific area to focus on?" Default to full codebase scan if no scope given.
Verify Python/Postgres/SQLAlchemy codebase. Run in parallel:
- Grep declarative_base|DeclarativeBase|mapped_column in *.py (SQLAlchemy models)
- Glob **/alembic/versions/*.py (Alembic migrations)
- Grep psycopg|asyncpg|postgresql|postgres in *.{py,toml,cfg,txt,yml,yaml,env*} (Postgres drivers and config)

If none match, warn the user this skill targets Python/Postgres/SQLAlchemy and offer best-effort or an alternative. Proceed on confirmation.
Scratch directory: /tmp/data-model-audit-<YYYY-MM-DD_HHMMSS>/research/
Launch two general-purpose subagents in parallel (Agent tool, subagent_type: "general-purpose"). Each writes findings to scratch dir, returns 2-4 line summary. Orchestrator does NOT read full research files. (General-purpose because Explore lacks Write tool.)
Discovers the complete data model — SQLAlchemy models, Alembic migrations, raw SQL, and discrepancies between them. Writes findings to research/schema-discovery.md.
Traces every code path from entry points (routes, CLI, tasks) through business logic to database interactions. Catalogs data access patterns, query patterns, and N+1 candidates. Writes findings to research/code-path-tracing.md.
Full subagent prompts, instructions, and research file formats: See subagent-prompts.md in this skill directory.
Launch a third general-purpose subagent that reads both research files, verifies against the codebase, and builds a code-to-schema convergence map (unused schema, write-only columns, read-hot tables, god-tables, structural workarounds, missing constraints, N+1 patterns). Writes to research/convergence.md.
Full convergence subagent prompt and checklist: See subagent-prompts.md in this skill directory.
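To illustrate what the tracing agent flags as an N+1 candidate, a hedged SQLAlchemy 2.x sketch (the Order and Item models are hypothetical):

```python
# Hedged sketch of an N+1 candidate and its fix (SQLAlchemy 2.x).
from sqlalchemy import ForeignKey, select
from sqlalchemy.orm import (
    DeclarativeBase, Mapped, Session, mapped_column, relationship, selectinload,
)

class Base(DeclarativeBase):
    pass

class Order(Base):
    __tablename__ = "orders"
    id: Mapped[int] = mapped_column(primary_key=True)
    items: Mapped[list["Item"]] = relationship(back_populates="order")

class Item(Base):
    __tablename__ = "items"
    id: Mapped[int] = mapped_column(primary_key=True)
    order_id: Mapped[int] = mapped_column(ForeignKey("orders.id"))
    sku: Mapped[str]
    order: Mapped["Order"] = relationship(back_populates="items")

def skus_n_plus_one(session: Session) -> list[str]:
    # One query for orders, then one lazy load per order: the N+1 pattern.
    return [i.sku for o in session.scalars(select(Order)) for i in o.items]

def skus_batched(session: Session) -> list[str]:
    # selectinload fetches all items in one extra query (two queries total).
    stmt = select(Order).options(selectinload(Order.items))
    return [i.sku for o in session.scalars(stmt) for i in o.items]
```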
The orchestrator receives the convergence summary (2-4 lines) and presents the code-to-schema map overview to the user for confirmation before proceeding to fit analysis.
Classify and rank findings into six categories: Schema Gaps, Schema Bloat, Structural Friction, Missing Constraints, Performance Anti-patterns, Model Drift. Use convergence summary + any user corrections from Step 3 gate.
Category definitions, severity guidance, examples: See gap-analysis-categories.md.
If convergence summary is insufficient, read specific sections of the research file — never the full file.
Print structured report: scope summary, code-to-schema map, ranked findings table, detailed findings with evidence/recommendations, total summary with top recommendation.
Report structure and finding template: See gap-analysis-categories.md.
- Orchestration: ~/.claude/skills/_shared/orchestration-guide.md Sections 2 & 6. Three subagents: two parallel (Step 2), one sequential (Step 3). Orchestrator coordinates only.
- This skill is diagnostic only — hand findings to /implement-plan if desired.

========================================
Audit frontend rendering performance by tracing render cycles, diagnosing fetch patterns, and producing phased remediation plans for /implement-plan.
Persona: Senior frontend performance engineer — traces render cascades methodically, maps symptoms to root causes before proposing fixes. Pragmatic: fix the bottleneck, not everything.
Parse $ARGUMENTS at invocation:
- --auto: Fully non-interactive. Implies --output github. Requires a page/flow in the arguments. See orchestration guide Section 10.
- --output github: Write findings and remediation plans as GitHub Issues. See output guide (~/.claude/skills/_shared/output-guide.md).
- --output session: Present findings in chat only, no persistence.

Related skills: /design-review, /investigate-app, /tech-debt

Follow these steps in order. Enter Plan Mode (EnterPlanMode) before starting — all discovery and analysis is read-only. If the user declines, proceed read-only by convention.
Ask the user: (1) what's the symptom, (2) which page or flow, (3) how to reproduce. If vague, ask them to narrow to a specific page. Performance audits need a concrete entry point.
Scratch directory: /tmp/frontend-performance-audit-<YYYY-MM-DD_HHMMSS>/research/. All Explore agents write here and return 2-4 line summaries. Follow Explore Agent → Disk Pattern (orchestration guide, Section 2). Do NOT read CLAUDE.md/MEMORY.md in the orchestrator.
Map three areas via parallel Explore agents:
Present a brief architecture summary (framework, React version, affected route, component chain, data fetching, Suspense).
Scan the affected components across 8 categories using Explore subagents (one per category). Each agent reads scan-categories.md for detailed checklists, writes findings to the scratch directory, and returns a summary. Focus on the Phase 2 component tree only.
Categories: A. Render Cascades, B. Fetch Patterns, C. Observer & Listener Overhead, D. State Management, E. Memoization Gaps, F. Layout Stability, G. Framework-Specific Issues, H. Bundle & Loading
After all scans complete, present a consolidated findings table and render cascade diagram. Templates and severity definitions are in cascade-diagram-template.md (same directory).
Present findings and cascade diagram. Ask: "Would you like me to generate remediation plans? I'll group related fixes into PRs ordered by impact." Do NOT proceed without confirmation. Exit Plan Mode (ExitPlanMode) after confirmation — doc generation requires the Write tool.
Ask the user for a short session name (e.g., explain-page-flicker). Output to documentation/planning/performance/<session_name>_<YYYY-MM-DD>/. Archive convention: orchestration guide, Section 8.
00_PERF_AUDIT.md — Master audit: date, scope, symptom, architecture summary, findings table, cascade diagram, priority order, grouping rationale, dependency matrix.
Remediation Docs (01_, 02_, etc.) — Group related findings into single PRs. Each doc = exactly 1 PR containing: header (title, severity, effort, files), findings addressed, dependencies, root cause explanation with cascade chain, detailed implementation plan (file paths, line numbers, before/after code), verification checklist (DevTools + manual repro + build/test), and "What NOT To Do" section.
Subagent workflow: Follow orchestration guide Section 9. Plan agents read research from scratch directory. Quality requirements (beyond Section 4): explain render lifecycle per fix, draw before/after cascades, include DevTools verification.
After generating docs: "Plans are ready for review. Run /implement-plan on the session directory to execute them."
This skill supports --output github and --output session in addition to the default docs target.
Follow the output guide at ~/.claude/skills/_shared/output-guide.md:
- github: use the structured issue body format (Section 4), check for duplicates (Section 4.5), apply labels (Section 4.3). Apply the performance label. Map severity levels to priority labels. Group issues by cascade chain where applicable.
- session: present findings in chat, stay in Plan Mode (Section 5).
- docs (default): follow the subagent workflow in the orchestration guide.

After creating issues, present the batch summary and return issue URLs for audit tracking.
When --auto is set (see orchestration guide Section 10):
- Take the page/flow from $ARGUMENTS (bail if missing — this skill can't auto-detect what to audit).

========================================
Diagnostic scan of a project's Claude Code configuration for patterns that hurt prompt cache efficiency. Run this occasionally — like /tech-debt or /security-audit — to check your project's cache hygiene.
This is a read-only skill. It reads files and presents findings. It does not modify anything.
Claude Code's prompt caching works by prefix matching: static content at the start of the system prompt is cached and reused across turns. Anything that changes between turns or sessions invalidates the cache from that point forward. The main user-controlled factors are:
- Auto-loading .claude/lessons.md adds variable content to every request

Run all six checks, then present the combined findings table. Do not ask questions or pause between checks — run them all and present results.
Checks (see cache-checks.md for detailed guidance and scoring):
- Lessons isolation: .claude/lessons.md is loaded on-demand only, not auto-loaded
- Rules file config: .claude/rules/ files have correct paths: frontmatter

Each check scores PASS / WARN / FAIL per the criteria in cache-checks.md.
After running all checks, present a single findings table:
Cache Audit Results
═══════════════════════════════════════════════════════════════════════════
Check Status Notes
───────────────────────── ────── ─────────────────────────────
Section ordering PASS Static-first layout detected
CLAUDE.md size WARN 247 lines (threshold: 200)
Lessons isolation PASS On-demand only
Tool & model stability PASS No mid-session changes
Mid-session edits WARN "Update continuously" language
Rules file config PASS All rules have paths: frontmatter
═══════════════════════════════════════════════════════════════════════════
6 checks: 4 passed, 2 warnings, 0 failures
Then, for each WARN or FAIL item, provide a brief recommendation explaining the caching impact and a concrete suggestion.
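For intuition on why these checks matter, here's a toy model of prefix matching (illustrative only; this is not Claude Code's actual cache implementation):

```python
# Toy model: cache reuse extends only as far as the longest common prefix
# between consecutive prompts, so variable content placed early costs the most.
def cached_prefix_len(prev: str, curr: str) -> int:
    n = min(len(prev), len(curr))
    i = 0
    while i < n and prev[i] == curr[i]:
        i += 1
    return i

static = "SYSTEM RULES (stable across turns)\n"
turn1 = static + "lessons: v1\n" + "user: hi"
turn2 = static + "lessons: v2\n" + "user: bye"  # lessons churn every turn
print(cached_prefix_len(turn1, turn2))  # reuse stops at the lessons line
```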
========================================
Consolidated from legacy skills. Pick the mode/lens based on intent.