By codagent-ai
Automate agent-validator CLI workflows for AI coding agents: run full checks via Bash to validate code changes, extract and fix failures before git commit or merge, skip or advance baselines, diagnose failures from logs and artifacts, set up project configs, check status summaries, and draft structured GitHub issues with evidence.
npx claudepluginhub codagent-ai/agent-validator --plugin agent-validator
Runs validator checks only, without AI reviews, for requests such as "run validator checks", "check without reviews", or "validate before commit without AI review".
Handles commit flows by detecting changes, optionally running validation, and completing commits for requests such as "commit with validator", "run checks before commit", "run validator then commit", or "skip validator and commit".
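The commit flow above (detect changes, optionally validate, then commit) can be sketched in plain shell. This is a minimal illustration, not the plugin's actual implementation; the `run_validator` step is a hypothetical placeholder for whatever validation command the plugin invokes.

```shell
set -e

# Work in a throwaway repo so the sketch is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email sketch@example.com
git config user.name sketch
echo change > file.txt

# 1. Detect whether the working tree has changes.
if [ -n "$(git status --porcelain)" ]; then
  # 2. Optionally run validation before committing
  #    (hypothetical placeholder for the plugin's check step).
  # run_validator || exit 1

  # 3. Complete the commit.
  git add -A
  git commit -q -m "validated change"
fi

git log --oneline
```

The "skip validator and commit" variant simply omits step 2, which is why the validation step is optional in the flow.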
Diagnoses and explains validator behavior from runtime evidence for requests such as "why did validator fail", "explain validator behavior", "diagnose validator logs", or "what went wrong in the validator run".
Files structured GitHub bug reports for agent-validator when users ask to file, report, or open an issue for a suspected defect.
Runs the full validator workflow after coding tasks for requests such as "run the validator", "run final verification", "validate before commit", or "run validation". Executes checks and reviews before commit, push, or PR creation.
Scans the project and configures checks and reviews for Agent Validator for requests such as "set up validator", "configure checks and reviews", or "initialize validator for this repo".
Advances the validator execution state baseline without running checks for requests such as "skip validator", "advance validator baseline", or "mark current tree as validated without running checks".
Shows a summary of the most recent validator session for requests such as "show validator status", "summarize last validator run", or "what failed in the last validator session".