By nizos
Enforce TDD discipline and policy guardrails in AI coding agents like Claude Code via PreToolUse hooks. Probity runs checks before Bash, Write, or Edit tool calls to block risky operations, gate shell commands, and review file modifications before they land.
npx claudepluginhub nizos/probity --plugin probity

Process discipline for AI coding agents.
Probity catches what coding agents do wrong (over-implementing past the failing test, disabling tests instead of fixing them, reaching for rm -rf) using the hook system your agent already exposes.
Each agent action (file write, shell command) fires a hook. Probity evaluates the action against your configured rules and decides whether it goes through. When it blocks, the agent gets a reason and a path forward:
probity: production code is being added before any failing test was written
or observed.
The next TDD-legal step is to add one focused test in src/cart.test.ts and
run it to a clean assertion failure before implementing only the minimum code
to pass it.
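The decision behind a block like this can be sketched as a pure function. This is a simplified illustration, not Probity's actual rule engine — the function and type names here are invented for the example:

```typescript
// Illustrative only: a PreToolUse-style check that blocks a
// production-code write when no failing test has been observed yet.

type Decision = { allow: true } | { allow: false; reason: string }

function checkWrite(filePath: string, hasFailingTest: boolean): Decision {
  const isTestFile = /\.test\.[jt]sx?$/.test(filePath)

  // Writing a test is always a TDD-legal step.
  if (isTestFile) return { allow: true }

  // Production code requires a previously observed failing test.
  if (!hasFailingTest) {
    return {
      allow: false,
      reason:
        'production code is being added before any failing test was written or observed',
    }
  }
  return { allow: true }
}

checkWrite('src/cart.ts', false) // blocked: no failing test yet
checkWrite('src/cart.test.ts', false) // allowed: adding a test comes first
```

When the check returns a block, the reason string is what the agent sees, which is why a good rule pairs the refusal with the next legal step.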
Probity grew out of tdd-guard, built to be the better foundation for the work ahead: rules beyond TDD, agents beyond Claude Code.
Install Probity as a dev dependency, then wire it into your agent:
npm install -D @nizos/probity
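In Claude Code, PreToolUse hooks are wired through `.claude/settings.json`. The shape below follows Claude Code's hooks schema; the `npx probity` entry point is an assumption — check the package documentation for the exact command it ships:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash|Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "npx probity"
          }
        ]
      }
    ]
  }
}
```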
Create a probity.config.ts at your project root.
Here's a starter that enforces TDD on src/ and test/, and blocks eslint-disable comments:
import { defineConfig, enforceTdd, forbidContentPattern } from '@nizos/probity'

export default defineConfig({
  rules: [
    {
      files: ['src/**', 'test/**'],
      rules: [
        enforceTdd(),
        forbidContentPattern({
          match: 'eslint-disable',
          reason: 'Fix the lint violation rather than disabling the rule',
        }),
      ],
    },
  ],
})
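The idea behind a content-pattern rule reduces to a small pure function. A minimal sketch — `checkContent` and `PatternRule` are hypothetical names, not Probity's real API:

```typescript
// Illustrative core of a forbidContentPattern-style rule:
// return the block reason when the forbidden pattern appears,
// or null when the write is allowed.

interface PatternRule {
  match: string
  reason: string
}

function checkContent(content: string, rule: PatternRule): string | null {
  return content.includes(rule.match) ? rule.reason : null
}

const rule: PatternRule = {
  match: 'eslint-disable',
  reason: 'Fix the lint violation rather than disabling the rule',
}

checkContent('// eslint-disable-next-line no-console', rule) // blocked
checkContent('export const total = 0', rule) // allowed
```

Matching on a plain substring keeps the rule predictable; a real implementation might also accept a regular expression.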
Contributions are welcome! See the contributing guidelines to get started.
Automated Test-Driven Development enforcement:
- Executes bash commands: the hook triggers when the Bash tool is used
- Modifies files: the hook triggers on file write and edit operations