Evaluates codebase readiness for autonomous AI development across 8 pillars (style/validation, build, testing, docs, dev env, debugging/observability, security, tasks) and 5 maturity levels. Use /readiness-report to scan repos.
Evaluate how well a repository supports autonomous AI development by analyzing it across eight technical pillars and five maturity levels.
Agent Readiness measures how prepared a codebase is for AI-assisted development. Poor feedback loops, missing documentation, or lack of tooling cause agents to waste cycles on preventable errors. This skill identifies those gaps and prioritizes fixes.
The user will run /readiness-report to evaluate the current repository. The agent will then:
Execute the analysis script to gather signals from the repository:
python scripts/analyze_repo.py --repo-path .
This script checks for each pillar's key signals, listed in the table below.
After analysis, generate the formatted report:
python scripts/generate_report.py --analysis-file /tmp/readiness_analysis.json
The report includes the repository's overall maturity level and its results for each pillar.
Each pillar addresses specific failure modes in AI-assisted development:
| Pillar | Purpose | Key Signals |
|---|---|---|
| Style & Validation | Catch bugs instantly | Linters, formatters, type checkers |
| Build System | Fast, reliable builds | Build docs, CI speed, automation |
| Testing | Verify correctness | Unit/integration tests, coverage |
| Documentation | Guide the agent | AGENTS.md, README, architecture docs |
| Dev Environment | Reproducible setup | Devcontainer, env templates |
| Debugging & Observability | Diagnose issues | Logging, tracing, metrics |
| Security | Protect the codebase | CODEOWNERS, secrets management |
| Task Discovery | Find work to do | Issue templates, PR templates |
| Product & Analytics | Error-to-insight loop | Error tracking, product analytics |
See references/criteria.md for the complete list of all 81 criteria, organized by pillar.
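To make the "Key Signals" column concrete, here is a simplified sketch of the kind of file-presence check an analysis script might run for the Style & Validation pillar. The file names below are common conventions chosen for illustration, not the skill's actual detection list:

```python
from pathlib import Path

# Illustrative config files that signal each tool category; the real
# analyze_repo.py may detect signals differently.
SIGNALS = {
    "linter": [".eslintrc.json", "ruff.toml", ".golangci.yml"],
    "formatter": [".prettierrc", ".clang-format"],
    "type_checker": ["tsconfig.json", "mypy.ini"],
}

def detect_style_signals(repo: Path) -> dict:
    """Return which Style & Validation signals are present in the repo root."""
    return {
        category: any((repo / name).exists() for name in names)
        for category, names in SIGNALS.items()
    }
```

Each pillar's score would then aggregate checks like these into pass/fail criteria.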
| Level | Name | Description | Agent Capability |
|---|---|---|---|
| L1 | Initial | Basic version control | Manual assistance only |
| L2 | Managed | Basic CI/CD and testing | Simple, well-defined tasks |
| L3 | Standardized | Production-ready for agents | Routine maintenance |
| L4 | Measured | Comprehensive automation | Complex features |
| L5 | Optimized | Full autonomous capability | End-to-end development |
Level Progression: To unlock a level, pass ≥80% of criteria at that level AND all previous levels.
See references/maturity-levels.md for detailed level requirements.
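The progression rule above can be sketched as a small function. This is an illustration of the gating logic (reading "pass" as a per-level pass rate of at least 80%), not the skill's actual scoring code:

```python
def unlocked_level(pass_rates: dict) -> int:
    """Return the highest unlocked level, given {level: pass_rate}.

    A level unlocks only if it and every previous level each pass
    >= 80% of their criteria, so one weak lower level caps the result.
    """
    unlocked = 0
    for level in sorted(pass_rates):
        if pass_rates[level] >= 0.80:
            unlocked = level
        else:
            break  # a failing level blocks everything above it
    return unlocked
```

For example, a repository passing 100% of L1, 85% of L2, but only 50% of L3 criteria sits at L2 regardless of its L4/L5 scores.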
Fix gaps in priority order: address the criteria blocking the lowest locked maturity level first, since levels unlock sequentially.
- scripts/analyze_repo.py - Repository analysis script
- scripts/generate_report.py - Report generation and formatting
- references/criteria.md - Complete criteria definitions by pillar
- references/maturity-levels.md - Detailed level requirements

After reviewing the report, common fixes can be automated:
Ask to "fix readiness gaps" to begin automated remediation of failing criteria.