Audit a codebase for agent-readiness gaps. Score each category. Optionally bootstrap missing pieces with --fix.
Install from the app-vitals marketplace: npx claudepluginhub app-vitals/marketplace --plugin repo-readiness. This skill uses the workspace's default tool permissions.
Bundled resources: readiness-criteria.yaml, references/scoring.md.
Audit the current codebase for legibility gaps that hurt agent performance. Score each category. Write a structured report. Use --fix to generate the missing pieces.
This skill does not modify code unless --fix is passed.
Before starting, check flags:
- --fix — run the full audit, then generate bootstrap assets for each failing check
- --category <id> — audit only the named category (e.g., --category agent_context)
- --no-report — print summary to stdout; skip writing readiness-report.md

Check if .claude/repo-readiness/criteria.yaml exists in the project root. If it does, use it; otherwise fall back to the bundled skills/repo-readiness/readiness-criteria.yaml. If --category was passed, filter to only that category. Otherwise audit all.
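The criteria config itself isn't reproduced here; as an illustration only, a minimal sketch of the shape the steps imply — category ids, weights, and per-check severity and detection_hint. All field names, weights, and thresholds below are assumptions, not the bundled schema:

```yaml
# Hypothetical sketch of readiness-criteria.yaml — field names, weights,
# and thresholds are assumptions, not the bundled schema.
categories:
  - id: agent_context
    weight: 30
    checks:
      - id: claude_md_exists
        severity: critical
        detection_hint: "Look for a CLAUDE.md at the repo root"
      - id: claude_md_length
        severity: medium
        detection_hint: "Flag a root CLAUDE.md longer than ~300 lines"
  - id: test_coverage
    weight: 20
    checks:
      - id: untested_modules
        severity: high
        detection_hint: "Count source files with no corresponding test file"
```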
For each category in the criteria config, run each check using its detection_hint as guidance. Record: pass/fail, severity, and a brief note (e.g., how many files had the issue, or what was missing).
Do not modify any files during this step — read-only.
Each check has a detection_hint describing what to look for. Use Glob, Grep, Read, and Bash (for line counts) to answer each check. Be specific: if a check asks for file counts or link counts, produce the actual number.
Efficiency tip: batch Glob/Grep calls where possible. Category checks are independent and can be evaluated in any order — but record them sequentially in the report.
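A single check boils down to a pass/fail record with a severity and a specific note. As a sketch only (the field names are assumptions; the real schema comes from the criteria config), here is what the CLAUDE.md-presence check might look like:

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Shape of one recorded check result — field names are an assumption
type CheckResult = {
  id: string;
  pass: boolean;
  severity: "critical" | "high" | "medium" | "low";
  note: string;
};

// Example check: a root CLAUDE.md is the baseline agent-context file
function claudeMdPresent(root: string): CheckResult {
  const found = existsSync(join(root, "CLAUDE.md"));
  return {
    id: "claude_md_exists",
    pass: found,
    severity: "critical",
    note: found ? "CLAUDE.md found at repo root" : "no CLAUDE.md at repo root",
  };
}
```

Note how the note field carries the specific finding, not just the verdict — that is what makes the report actionable.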
For each category, compute the score using the methodology in references/scoring.md:
Determine the band for each category and for the overall weighted score.
Critical gap check: If any critical severity check failed, cap overall band at "Not Ready" regardless of weighted score.
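The authoritative weights and band cutoffs live in references/scoring.md; purely as an illustration of the weighted-score and critical-cap logic (the cutoff numbers and band names here are assumptions):

```typescript
type Check = { pass: boolean; severity: "critical" | "high" | "medium" | "low" };
type Category = { weight: number; score: number; checks: Check[] };

// Band cutoffs are assumptions — the real ones live in references/scoring.md
function band(score: number): string {
  if (score >= 80) return "Agent-Ready";
  if (score >= 50) return "Partially Ready";
  return "Not Ready";
}

function overall(categories: Category[]): { score: number; band: string } {
  const totalWeight = categories.reduce((s, c) => s + c.weight, 0);
  const score = Math.round(
    categories.reduce((s, c) => s + c.score * c.weight, 0) / totalWeight
  );
  // Critical gap check: any failed critical-severity check caps the band
  // at "Not Ready" regardless of the weighted score
  const hasCriticalGap = categories.some((cat) =>
    cat.checks.some((ch) => !ch.pass && ch.severity === "critical")
  );
  return { score, band: hasCriticalGap ? "Not Ready" : band(score) };
}
```

Note that the cap changes only the band, not the numeric score — the report still shows the real number next to "Not Ready".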
Unless --no-report was passed:
Write readiness-report.md to the project root, following the report structure in references/scoring.md. Print a summary to stdout:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
REPO READINESS — {repo_name}
Overall: {score}/100 — {band}
Agent Context Files: {score}/100
In-Repo Documentation: {score}/100
Codebase Structure: {score}/100
Test Coverage: {score}/100
Observability: {score}/100
{N} critical {N} high {N} medium {N} low gaps found.
Report written: readiness-report.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
If --no-report was passed, print the same summary but skip writing the file.
Only runs when --fix is passed. For each failing check, generate the missing asset and write it to the project.
Generate a starter CLAUDE.md at the repo root:
# {repo_name}
## What This Is
{one sentence inferred from README or package.json description}
## Architecture
{bullet list of top-level directories with one-line purpose each}
## Key Conventions
- {primary language/runtime, e.g. "TypeScript/Bun"}
- {test command, e.g. "bun test"}
- {lint command if found}
## What's Not Here
{placeholder — fill in links, external context, etc.}
Announce: "Created CLAUDE.md. Fill in the 'What's Not Here' section with any external context (Notion, Confluence, etc.) worth surfacing."
For each subdirectory with more than 5 source files, generate a stub:
# {dirname}/
## Purpose
{one-line description inferred from file names and README}
## Key Files
{bullet list of main entry points found in this directory}
Write to {subdir}/CLAUDE.md. Announce each file created.
Print a warning with the actual line count. Do not auto-edit — CLAUDE.md content is human-reviewed. Suggest splitting by telling the user which sections are candidates for subdirectory files.
List all external links found. Print a suggestion for each:
Notion link in CLAUDE.md (line 34): consider moving the content to docs/architecture.md
Do not auto-remove or auto-replace links.
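Producing those per-link suggestions requires the line number of each hit. A minimal sketch, assuming any http(s) URL counts as external (the real check may care only about Notion, Confluence, and similar hosts):

```typescript
import { readFileSync } from "node:fs";

// Surface external links with their line numbers so the report can suggest
// moving the content in-repo. Links are reported, never auto-removed.
function externalLinks(path: string): { line: number; url: string }[] {
  const hits: { line: number; url: string }[] = [];
  readFileSync(path, "utf8")
    .split("\n")
    .forEach((text, i) => {
      for (const m of text.matchAll(/https?:\/\/\S+/g)) {
        hits.push({ line: i + 1, url: m[0] });
      }
    });
  return hits;
}
```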
Identify which sections are missing (what/run/test). Generate minimal stubs:
## Getting Started
{placeholder — add install + run instructions}
## Running Tests
{placeholder — add test command}
Append to existing README.md (or create README.md if absent). Announce.
Create a docs/decisions/ directory with one template file:
# ADR-001: {title}
**Status:** proposed | accepted | deprecated | superseded
**Date:** {today}
## Context
{What is the issue that motivates this decision?}
## Decision
{What is the change that we're proposing or have agreed to implement?}
## Consequences
{What becomes easier or harder as a result of this change?}
Announce: "Created docs/decisions/ADR-001-template.md. Document your first architectural decision here."
List top-level source files that are flat in a single directory. Print a suggested directory layout based on what the files do (infer from names). Do not move files.
Print a table of offenders with line counts. For each, suggest splitting at its natural seam (e.g., if a 700-line file has 3 exported classes, suggest extracting each to its own file). Do not auto-split.
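Building that offenders table is a straight line-count walk. A sketch, with the 500-line threshold, the extension list, and the skip list all as assumptions:

```typescript
import { readFileSync, readdirSync, statSync } from "node:fs";
import { join, extname } from "node:path";

// List source files over a line-count threshold, largest first,
// so the report can render a table of offenders.
function offenders(root: string, limit = 500): { file: string; lines: number }[] {
  const out: { file: string; lines: number }[] = [];
  const walk = (dir: string) => {
    for (const name of readdirSync(dir)) {
      if (name === "node_modules" || name.startsWith(".")) continue;
      const p = join(dir, name);
      if (statSync(p).isDirectory()) walk(p);
      else if ([".ts", ".tsx", ".js", ".py"].includes(extname(name))) {
        const lines = readFileSync(p, "utf8").split("\n").length;
        if (lines > limit) out.push({ file: p, lines });
      }
    }
  };
  walk(root);
  return out.sort((a, b) => b.lines - a.lines);
}
```

The split-point suggestions themselves (e.g., one exported class per file) still require reading each offender, so this only feeds the table, not the advice.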
Detect the primary language/runtime and suggest an appropriate test framework:
- bun test (built-in, no install)
- go test (built-in)
- cargo test (built-in)

Print setup instructions. Do not auto-install.
Print a list of source files with no corresponding test file. Suggest a starter test for the most critical-looking file (e.g., the main entry point or the largest exported module):
// Suggested starter: {source_file}.test.ts
import { ... } from './{source_file}';
describe('{module_name}', () => {
it('should ...', () => {
// TODO: implement
});
});
Write the starter test file. Announce.
Detect the runtime and suggest a logger:
Do not auto-install or auto-replace console.log calls.
Detect the framework and add a minimal health route:
- Hono: app.get('/health', (c) => c.json({ ok: true }))
- Express: app.get('/health', (req, res) => res.json({ ok: true }))
- FastAPI: @app.get('/health') async def health(): return {"ok": True}

Write the change to the server file. Announce.
These checks are flagged in the report but require manual remediation:
After all checks (and bootstrap if --fix):
Print:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
DONE
Score: {before}/100 → {after}/100 (if --fix ran)
Score: {score}/100 (if audit only)
{N} assets generated. (if --fix)
{N} gaps remain — see readiness-report.md.
Pairs well with:
/entropy-scan — ongoing drift detection after readiness is established
/plan-session — structured planning once the repo is agent-readable
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━