By jrenaldi79
Assess codebase readiness for AI coding agents across 8 pillars (git setup, testing, code quality, secrets handling, file sizes, and more) with /readiness, then bootstrap enforcement tooling with /setup: TDD, secret scanning, file size limits, git hooks, and auto-generated docs for Node/TS, Python, Go, or Rust projects.
npx claudepluginhub jrenaldi79/harness-engineering --plugin harness-engineering

Use when the user wants to analyze, audit, or assess their codebase for AI agent readiness. Also use for "readiness report", "how ready is my project", "analyze my codebase", "audit my repo", "check my setup", or "what should I improve".
Use when the user wants to set up a new project or add enforcement tooling (TDD, secret scanning, file size limits, git hooks, CLAUDE.md templates) to an existing project. Also use when the user says "bootstrap", "scaffold", "set up my project", or "add quality enforcement".
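For reference, the flow described above can be sketched as a short terminal session. The install command is the one given in this listing; the slash commands run inside a Claude Code session afterward, not in the shell (the exact session workflow is an assumption here):

```shell
# Install the plugin from the plugin hub (command as given in this listing)
npx claudepluginhub jrenaldi79/harness-engineering --plugin harness-engineering

# Then, inside a Claude Code session (slash commands, not shell commands):
#   /readiness   - score the codebase across the 8 readiness pillars
#   /setup       - bootstrap enforcement tooling (TDD, secret scanning,
#                  file size limits, git hooks, generated docs)
```

Run /readiness first to get the assessment report, then /setup to add the enforcement tooling it recommends.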
Check how well your repo supports AI coding agents.
Share bugs, ideas, or general feedback.
Makes a repo agent-ready: AGENTS.md, boundary tests, CI pipeline, GC scripts — based on OpenAI's harness engineering methodology
Agent-Ready Codebase Assessment — scores your codebase across 8 dimensions and generates an actionable improvement roadmap framed around the Stripe AI benchmark
SDLC enforcement for AI agents — TDD, planning, self-review, CI shepherd
Self-learning AI agents that improve over time. Features agent memory (tracks last 100 executions), pattern recognition, community standards contribution, and workflow optimization. Includes security, quality, deployment, infrastructure, and compliance workflows.
Harness Engineering framework: skills, agents, and commands for safe, reviewable, incremental agent-driven development. Includes the RPEQ workflow (Research, Plan, Execute, QA), ast-grep setup, and codebase analysis tools.