Designs falsifiable governance constraints for HARNESS.md by translating governance language into operational verifications, evidence requirements, and failure actions. Guides authoring workflows and reviews.
npx claudepluginhub habitat-thinking/ai-literacy-superpowers --plugin ai-literacy-superpowers

This skill uses the workspace's default tool permissions.
- Guides governance audits: detects semantic drift in constraints, inventories governance debt, scores falsifiability, and checks three-frame alignment for the governance-auditor agent.
- Encodes human-readable governance policies into machine-executable JSON constraints for AI agents and CI pipelines to validate automatically. Outputs rule files in .ai/governance/.
- Authors enforceable project constitutions for greenfield projects with testable principles, enforcement mechanisms, rationale, and amendment processes.
A governance constraint encodes operational meaning, not governance language. The phrase "ensure human oversight" is governance language — it sounds precise but means different things to different people. A governance constraint translates that language into a verification slot with defined pass/fail criteria, evidence requirements, and failure actions.
This skill teaches how to make that translation. It is referenced by the /governance-constrain command for guided authoring and by the harness-enforcer agent when validating governance constraint quality.
Governance language carries meaning in one reference frame but is implemented in another. The regulator writes "meaningful human oversight." The engineer implements a boolean approval gate. The compliance team audits the approval log. All three frames are satisfied syntactically while governance fails semantically — the approval happens, but the oversight is absent.
A governance constraint must make this translation explicit.
Every governance constraint must answer three questions:

1. How is compliance verified? (the operational verification)
2. What evidence does that verification produce? (the evidence requirement)
3. What happens when verification fails? (the failure action)
If the constraint cannot answer all three, it is governance language pretending to be a constraint. It belongs in a policy document, not in HARNESS.md.
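To make the three questions concrete, here is a minimal sketch in Python. The field names (`verification`, `evidence`, `failure_action`) are illustrative assumptions, not the actual HARNESS.md template; the point is only that governance language leaves all three slots empty while a real constraint fills them:

```python
# Illustrative sketch: represent a constraint as structured data so the
# three questions become required fields. Field names are hypothetical.

REQUIRED_SLOTS = ("verification", "evidence", "failure_action")

def is_falsifiable(constraint: dict) -> bool:
    """A constraint is falsifiable only if all three slots are filled."""
    return all(constraint.get(slot, "").strip() for slot in REQUIRED_SLOTS)

# Governance language: sounds precise, fills no slots.
policy_language = {
    "rule": "Ensure human oversight of generated code",
}

# The same requirement translated into an operational constraint.
governance_constraint = {
    "rule": "Generated code requires human review before merge",
    "verification": "PR has >= 1 approved review from a CODEOWNERS member",
    "evidence": "Approval record: reviewer name, timestamp, substantive comment",
    "failure_action": "Block merge; return PR to author with reviewer notes",
}

print(is_falsifiable(policy_language))        # False
print(is_falsifiable(governance_constraint))  # True
```

A check like this only tests that the slots exist, not that their contents are meaningful; judging the contents is what the three-frame analysis below is for.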
Before writing a governance constraint, articulate what the governance requirement means from three perspectives:
| Frame | Question | Example ("human review required") |
|---|---|---|
| Engineering | What must the reviewer technically verify? | Reviewer must check that generated code has tests, follows naming conventions, and handles error paths |
| Compliance | What audit trail must exist? | PR approval record with reviewer name, timestamp, and at least one substantive comment |
| AI system | What does the automated gate check? | PR cannot merge without at least one approved review from a CODEOWNERS member |
Flag divergence. If the three frames describe different things, the governance requirement is ambiguous. Resolve the ambiguity before writing the constraint — do not push it into HARNESS.md and hope enforcement resolves it.
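As a sketch of how the AI-system frame can be written so it also satisfies the compliance frame, the gate below requires not just any approval but an approval from a CODEOWNERS member carrying a substantive comment. The data shapes, the CODEOWNERS set, and the 40-character proxy for "substantive" are all assumptions for illustration, not a real CI API:

```python
# Hypothetical merge gate for "human review required".

CODEOWNERS = {"alice", "bob"}
MIN_COMMENT_CHARS = 40  # crude proxy for "substantive comment" (assumption)

def merge_allowed(reviews: list[dict]) -> bool:
    """Pass only if a CODEOWNERS member approved AND left a substantive
    comment -- satisfying the compliance frame, not just a boolean gate."""
    return any(
        r["state"] == "approved"
        and r["reviewer"] in CODEOWNERS
        and len(r.get("comment", "")) >= MIN_COMMENT_CHARS
        for r in reviews
    )

# A bare approval satisfies a boolean gate but fails this constraint:
rubber_stamp = [{"reviewer": "alice", "state": "approved", "comment": "LGTM"}]
substantive = [{
    "reviewer": "alice",
    "state": "approved",
    "comment": "Checked tests cover the error paths; naming matches conventions.",
}]

print(merge_allowed(rubber_stamp))  # False
print(merge_allowed(substantive))   # True
```

The comment-length threshold is where the engineering and compliance frames still diverge; a length check cannot verify that the reviewer actually checked tests and error paths, which is why such constraints usually need agent enforcement.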
Every governance constraint in HARNESS.md should use an extended format in addition to the standard Rule/Enforcement/Tool/Scope fields from the constraint-design skill. See references/governance-constraint-template.md for the full template with examples.
Governance constraints follow the same promotion ladder as all constraints (see the constraint-design skill). At the agent rung, harness-enforcer reviews changes against the constraint prose using LLM judgement.

Start at unverified. Promote when the constraint language is stable and you have confidence in the verification method. Most governance constraints will land at agent enforcement because governance meaning requires judgement; few governance checks can be reduced to a shell command.
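The ladder can be sketched as ordered enforcement levels. The rung names below are illustrative assumptions, since the actual ladder is defined in the constraint-design skill:

```python
# Hypothetical sketch of the promotion ladder as ordered enforcement levels.
from enum import IntEnum

class Enforcement(IntEnum):
    UNVERIFIED = 0     # constraint stated, no verification method yet
    AGENT = 1          # harness-enforcer reviews prose with LLM judgement
    DETERMINISTIC = 2  # reducible to a shell command or CI check

def promote(level: Enforcement, language_stable: bool,
            method_trusted: bool) -> Enforcement:
    """Climb one rung only when the wording is stable and the
    verification method is trusted; otherwise stay put."""
    if language_stable and method_trusted and level < Enforcement.DETERMINISTIC:
        return Enforcement(level + 1)
    return level

print(promote(Enforcement.UNVERIFIED, True, True).name)  # AGENT
```

Under this sketch, most governance constraints stop at AGENT: the final promotion to DETERMINISTIC requires a check with no judgement left in it.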
Related skills: the constraint-design skill, the governance-audit-practice skill, and the governance-observability skill.

See references/anti-patterns.md for the full gallery of governance constraint anti-patterns with falsifiable rewrites.