# Entrix

Shift quality from manual review to executable change guardrails.
## Install

Choose one installation path:
### Claude Code Plugin (recommended)

```
/plugin marketplace add phodal/entrix
/plugin install entrix@entrix
```

Restart Claude Code after plugin installation.
### Standalone CLI (uv or pip)

```shell
uv tool install entrix
# or
pip install entrix

entrix --help
```
If you want Claude Code MCP integration in the current repository after installing the CLI:

```shell
entrix install --repo .
```
Requires Python 3.10+. `uv` is only needed for the `uv` / `uvx` workflow.
## What it does

- codify quality gates and architecture constraints as reusable fitness specs
- run checks by `fast` / `normal` / `deep` tiers
- run change-aware checks on diffs with weighted scoring and hard gates
- route risky changes to deeper validation with `review-trigger`
- optionally add graph-based impact, test-radius, and review context analysis
## Guardrails in the change lifecycle
- checks run before risky code lands
- each run generates evidence
- policy can hard-stop, warn, or escalate to human review automatically
### Lifecycle View

Additional design context:

- `tools/entrix/docs/adr/README.md`: Entrix architecture decisions and rationale
## Requirements

- Python 3.10+
- Git repository context for commands that use `--base HEAD~1`

Optional:

- `uv` for `uv tool install ...` and `uvx ...`
- `pip install entrix[graph]` for graph commands
## Advanced Installation

### Alternate CLI Invocations
```shell
uv tool install entrix
# or
pip install entrix

uvx entrix --help
uvx entrix run --tier fast
uvx entrix run --tier normal --stream failures
uvx entrix run --tier normal --stream all
uvx entrix review-trigger --base HEAD~1
```
### Optional extras

```shell
pip install entrix[graph]
pip install entrix[mcp]
pip install entrix[dev]

uvx entrix install --repo .
```
## First Run

### 1. Create a fitness spec
By default, `entrix run` looks for specs under the current project's `docs/fitness/*.md`.

When `docs/fitness/manifest.yaml` is present, Entrix uses the manifest as the source of truth. That allows nested evidence files such as `docs/fitness/runtime/observability.md` and `docs/fitness/runtime/performance.md`.
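When a manifest is present it enumerates the spec files explicitly. The exact manifest schema is not shown in this document; the sketch below is hypothetical, and the `specs` key and relative-path layout are assumptions:

```yaml
# docs/fitness/manifest.yaml — illustrative sketch only; the real Entrix
# manifest schema may differ (the `specs` key here is an assumption).
specs:
  - code-quality.md
  - runtime/observability.md
  - runtime/performance.md
```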
Example `docs/fitness/code-quality.md`:

```markdown
---
dimension: code_quality
weight: 20
threshold:
  pass: 90
  warn: 80
metrics:
  - name: lint
    command: npm run lint 2>&1
    hard_gate: true
    tier: fast
    description: ESLint must pass
  - name: unit_tests
    command: npm run test:run 2>&1
    pattern: "Tests\\s+\\d+\\s+passed"
    hard_gate: true
    tier: normal
    description: unit tests must pass
---

# Code Quality

Narrative evidence, rules, and ownership notes can live below the frontmatter.
```
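The `pattern` field is matched against the captured command output. A quick way to sanity-check a pattern before committing the spec, assuming a Python-compatible regex flavor (this document does not state the flavor explicitly):

```python
import re

# The pattern value from the unit_tests metric above. In the YAML it is
# written as "Tests\\s+\\d+\\s+passed"; after unescaping, the regex is
# Tests\s+\d+\s+passed.
pattern = r"Tests\s+\d+\s+passed"

# Simulated runner output (e.g. what `npm run test:run 2>&1` might print).
output = "Test Files  3 passed (3)\nTests  42 passed (42)\n"

print(bool(re.search(pattern, output)))  # True
```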
### Advanced metric fields

Beyond the basic fields shown above, each metric in the frontmatter supports additional options:
```yaml
metrics:
  - name: api_contract
    command: npm run test:contract 2>&1
    hard_gate: false
    tier: normal
    description: API contract tests
    # Execution scope: where this metric is authoritative.
    # Values: local, ci, staging, prod_observation
    execution_scope: ci
    # Timeout in seconds (null = no limit)
    timeout_seconds: 120
    # Gate severity: hard, soft, advisory
    gate: soft
    # Evidence type: command, test, probe, sarif, manual_attestation
    evidence_type: test
    # Confidence level: high, medium, low, unknown
    confidence: high
    # Signal stability: deterministic, noisy
    stability: deterministic
    # Fitness kind: atomic (single check) or holistic (system-wide)
    kind: atomic
    # Analysis mode: static (code structure) or dynamic (runtime)
    analysis: dynamic
    # Owner responsible for this metric
    owner: team-platform
    # Only run when these file patterns change
    run_when_changed:
      - "src/api/**"
      - "openapi.yaml"
```
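The `run_when_changed` patterns gate the metric on the diff: it executes only when some changed path matches one of the globs. Entrix's exact matching semantics are not specified here; the sketch below approximates the idea with Python's `fnmatch` (whose `*` also crosses `/`, so it stands in for gitignore-style `**`):

```python
from fnmatch import fnmatch

# Patterns copied from the run_when_changed example above.
patterns = ["src/api/**", "openapi.yaml"]

def should_run(changed_files):
    # The metric fires if any changed file matches any pattern.
    return any(fnmatch(path, pat) for path in changed_files for pat in patterns)

print(should_run(["src/api/users.ts", "README.md"]))  # True
print(should_run(["docs/intro.md"]))                  # False
```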