Scans project structure (monorepo, split, single) and tooling to configure or reconfigure Agent Validator checks and reviews. Use for 'set up validator' or repo init.
Install: `npx claudepluginhub codagent-ai/agent-validator --plugin agent-validator`
Scan the project to discover tooling and configure checks and reviews for agent-validator.
Before starting, read references/check-catalog.md for check category details, YAML schemas, and example configurations.
Read .validator/config.yml. If the file does not exist, tell the user to run agent-validate init first and STOP — do not proceed with any further steps.
Read the entry_points field from .validator/config.yml.
If entry_points is empty ([]): This is a fresh setup. Proceed to Step 3 (detect project structure).
If entry_points is populated: Show the user a summary of the current configuration:
List each entry point with its path, checks, and reviews
Then ask the user which action to take:
- Add checks: scan for newly discovered tooling and append checks to the existing entry_points.
- Add custom: define a custom check or review (Step 7).
- Reconfigure from scratch:
  - Rename each .validator/checks/*.yml file to .yml.bak (overwrite any previous .bak files); these are legacy file-based checks.
  - Rename each .validator/reviews/*.md file to .md.bak (overwrite any previous .bak files).
  - Keep .validator/reviews/*.yml files (these are built-in review configs).
  - Reset entry_points to [] in config.yml.

Scan for signals to classify the project as monorepo, split project, or single project.
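The "reconfigure from scratch" file moves can be sketched in POSIX shell. This is illustrative only: the skill performs these renames itself, and the sandbox setup below exists purely to demonstrate the moves.

```shell
# Demonstrate the reconfigure-from-scratch renames in a throwaway directory.
cd "$(mktemp -d)"
mkdir -p .validator/checks .validator/reviews
touch .validator/checks/build.yml .validator/reviews/custom.md .validator/reviews/code-quality.yml

# Legacy file-based checks: *.yml -> *.yml.bak (mv -f overwrites old .bak files)
for f in .validator/checks/*.yml; do
  [ -e "$f" ] && mv -f "$f" "$f.bak"
done
# Custom review prompts: *.md -> *.md.bak
for f in .validator/reviews/*.md; do
  [ -e "$f" ] && mv -f "$f" "$f.bak"
done
# Built-in review configs (.validator/reviews/*.yml) are left untouched.
ls .validator/checks .validator/reviews
```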
Monorepo signals:
- package.json with a workspaces field
- pnpm-workspace.yaml
- lerna.json, nx.json, or turbo.json
- Cargo.toml with a [workspace] section
- packages/, apps/, or services/ directories each containing their own project manifest (package.json, go.mod, Cargo.toml, pyproject.toml)

Split-project signals:
- frontend/ + backend/ (or client/ + server/, web/ + api/) directories each containing source code and/or their own project manifest
- Sibling app directories (e.g. apps/web/, apps/api/, apps/worker/ each with their own source and config), which suggest a wildcard entry point like apps/*

Single-project signals:
- src/ or lib/ as the sole source directory, or source files at the project root

If monorepo or split project: Read references/project-structure.md for detailed multi-project entry point guidance, then follow it for Steps 4 through 8. The rest of this file covers the single-project flow.
If single project: Tell the user what you detected and continue below.
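Assuming a POSIX shell, a toy version of that classification could look like the following. It covers only a subset of the signals (e.g. parsing package.json for a workspaces field is omitted), and the sandbox setup simulates a plain single project:

```shell
# Toy structure classifier covering a subset of the signals (illustrative).
cd "$(mktemp -d)"
mkdir -p src && touch src/main.ts     # simulate a plain single project

if [ -f pnpm-workspace.yaml ] || [ -f lerna.json ] || [ -f nx.json ] || [ -f turbo.json ]; then
  kind="monorepo"
elif { [ -d frontend ] && [ -d backend ]; } || { [ -d client ] && [ -d server ]; }; then
  kind="split project"
else
  kind="single project"
fi
echo "$kind"
```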
Infer the source directory:
- If src/ exists and contains source code, suggest src.
- If lib/ exists and contains source code, suggest lib.
- Otherwise suggest . (project root; safer default since it captures all changes).

Skip this step if adding checks to an existing entry point that already has a path.
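A sketch of that inference in shell, applying the same precedence (the sandbox below simulates a lib/-based project purely for demonstration):

```shell
# Infer the entry-point path: src/ first, then lib/, else the project root.
cd "$(mktemp -d)"
mkdir -p lib && touch lib/util.rb     # simulate a lib/-based project

if [ -d src ] && [ -n "$(ls -A src 2>/dev/null)" ]; then
  source_dir="src"
elif [ -d lib ] && [ -n "$(ls -A lib 2>/dev/null)" ]; then
  source_dir="lib"
else
  source_dir="."                      # safer default: captures all changes
fi
echo "$source_dir"
```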
Scan the project for tooling signals across the 6 check categories listed in references/check-catalog.md.
For the "add checks" path: Filter out checks already configured in entry_points.
If no tools discovered: Offer the custom flow (skip to Step 7). Still include code-quality review.
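A few of those tooling probes can be sketched as file-presence checks. This is illustrative: the real scan covers all 6 categories from references/check-catalog.md and also reads manifest scripts, and the sandbox below simulates a TypeScript project:

```shell
# Probe for a few common config files and map them to check categories.
cd "$(mktemp -d)"
printf '{"compilerOptions": {}}\n' > tsconfig.json   # simulate a TS project

checks=""
[ -f tsconfig.json ] && checks="$checks typecheck"
{ [ -f .eslintrc.json ] || [ -f eslint.config.js ]; } && checks="$checks lint"
{ [ -f jest.config.js ] || [ -f jest.config.ts ]; } && checks="$checks test"
echo "discovered:${checks}"
```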
Show a table of discovered checks:
Category | Tool | Command | Confidence
----------------|-----------------|--------------------------------------|-----------
Build | npm | npm run build | High
Lint | ESLint | npx eslint . | High
Typecheck | TypeScript | npx tsc --noEmit | High
Test | Jest | npx jest | High
Security (deps) | npm audit | npm audit --audit-level=moderate | Medium
Security (code) | Semgrep | semgrep scan --config auto --error . | Medium
Confidence levels indicate how directly the tool was detected. If a category has no discovered tool, show (not found) with — for command and confidence.
Ask the user which of the discovered checks to configure (they may accept all, a subset, or none).
If the user declines ALL checks, still include code-quality review and offer the custom flow (Step 7).
After confirmation, proceed to Step 8 (create files).
Ask the user: check (shell command) or review (AI code review)?
For checks: Ask for command, name, and optional settings (run_in_ci, run_locally).
For reviews: Built-in (code-quality) or custom prompt? Ask for name and write the review content.
Checks — Add checks inline in the entry point's checks array. Each inline check is a single-key object (check name → config object). Include command. Add optional fields (run_in_ci, run_locally) only when they differ from defaults. See references/check-catalog.md for schema. Do NOT add a top-level checks map — inline checks belong under entry_points.
Custom reviews — Create .validator/reviews/<name>.md with YAML frontmatter (num_reviews: 1) and review prompt.
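For example, a hypothetical custom review at .validator/reviews/security-review.md (the file name and prompt text here are invented for illustration) might look like:

```markdown
---
num_reviews: 1
---
Review the changed code for missing input validation and unsafe handling
of user-supplied data. Report concrete findings with file and line
references; do not comment on style.
```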
Built-in reviews — Add built-in reviews inline in the entry point's reviews array (e.g. - code-quality: { builtin: code-quality }). Do not create a separate file for built-in reviews.
Update entry_points in .validator/config.yml:
```yaml
entry_points:
  - path: "<source_dir>"
    checks:
      - build:
          command: npm run build
      - lint:
          command: npx eslint .
    reviews:
      - code-quality:
          builtin: code-quality
```
Always include code-quality in reviews for fresh setups. For "add checks" / "add custom": append to the appropriate entry point's lists, or add a new entry point if needed. A check or review defined inline in one entry point can be referenced by name (as a string) in other entry points.
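As a sketch of that by-name reuse (the entry point paths here are illustrative), a check defined inline once can be referenced as a plain string elsewhere:

```yaml
entry_points:
  - path: "apps/web"
    checks:
      - lint:
          command: npx eslint .
  - path: "apps/api"
    checks:
      - lint   # string reference to the inline definition in apps/web
```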
Ask the user. If yes, loop to Step 7. If no, proceed.
Run agent-validate validate. If it fails, apply one corrective attempt and re-validate. If it still fails, STOP and ask the user.
Commit all validator configuration and skills so the setup is preserved in version control:
Stage .validator/, .claude/skills/validator-*/, .claude/settings.local.json, and .gitignore, then run git commit -m "chore: configure agent-validator checks and reviews". If there are no changes to commit (everything already committed), skip this step silently.
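A shell sketch of that commit step, demonstrated in a throwaway repo (path list and commit message per this doc; only paths that actually exist are staged, so missing files and unmatched globs are skipped):

```shell
# Commit validator config, skipping silently when there is nothing staged.
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com" && git config user.name "you"
mkdir -p .validator && echo "entry_points: []" > .validator/config.yml
echo ".validator/logs/" > .gitignore

for p in .validator .claude/skills/validator-* .claude/settings.local.json .gitignore; do
  [ -e "$p" ] && git add "$p"   # missing paths / unmatched globs are skipped
done
if git diff --cached --quiet; then
  echo "nothing to commit; skipping"
else
  git commit -qm "chore: configure agent-validator checks and reviews"
fi
```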
Run /validator-skip to advance the execution state baseline to the current working tree, so the next run only diffs against future changes.
Tell the user: configuration is complete. Run /validator-run to execute, or /validator-setup again to add more.