Show discipline analytics — defect escape rate, override rate, cost metrics, and agent scorecard
From clavain
npx claudepluginhub mistakeknot/interagency-marketplace --plugin clavain
This skill uses the workspace's default tool permissions.
GALIANA_SCRIPT=$(find ~/.claude/plugins/cache -path '*/clavain/*/galiana/analyze.py' 2>/dev/null | head -1)
[[ -z "$GALIANA_SCRIPT" ]] && GALIANA_SCRIPT=$(find ~/projects -path '*/os/clavain/galiana/analyze.py' 2>/dev/null | head -1)
If found: run python3 "$GALIANA_SCRIPT"
If not found: explain that Galiana is not installed and stop.
Read ~/.clavain/galiana-kpis.json. If the file is missing or contains no KPI payload, show a getting-started message explaining that KPI events come from disciplined review/testing workflows and defect logging.
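The missing-payload check can be sketched in shell. The file path comes from this skill; the `"events"` key is an assumption about the payload schema, not confirmed by the source:

```shell
# Check for KPI data before rendering the report; fall back to a
# getting-started message when the payload is missing or empty.
kpi_ready() {
  local f="$1"
  [[ -f "$f" ]] || return 1
  # "events" is an assumed key; adjust to the real payload schema.
  python3 -c 'import json, sys
d = json.load(open(sys.argv[1]))
sys.exit(0 if d.get("events") else 1)' "$f" 2>/dev/null
}

if ! kpi_ready "$HOME/.clavain/galiana-kpis.json"; then
  echo "No KPI data yet: events come from disciplined review/testing workflows and defect logging."
fi
```

Any parse error in the python3 helper exits nonzero, so a malformed file is treated the same as a missing one.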
Galiana — Discipline Analytics
════════════════════════════════
Period: YYYY-MM-DD to YYYY-MM-DD
Beads shipped: N | Events analyzed: N
KPIs (diagnostic-first)
────────────────────────
[worst metric first, healthiest last]
Format: KPI Name: value (numerator/denominator)
Agent Scorecard
────────────────
Agent Findings P0 P1
fd-architecture 5 0 2
fd-quality 3 1 1
Advisories
──────────
- [info] ...
- [warn] ...
If topology_efficiency.available is true:
Topology Efficiency (recall vs production)
──────────────────────────────────────────
Task Type T2 T4 T6 T8 Samples
code_review 0.60 0.85 0.95 0.95 12
Highlight the smallest topology achieving >=90% recall ("sweet spot"). If < 20 total experiments: "Data still accumulating."
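The sweet-spot rule above can be sketched in shell; the tier names and recall values are illustrative, mirroring the sample code_review row, and awk supplies the floating-point comparison bash lacks:

```shell
# Pick the smallest topology tier whose recall meets the 0.90 threshold.
sweet_spot() {
  local tiers=(T2 T4 T6 T8)
  local recalls=("$@")
  local i
  for i in "${!tiers[@]}"; do
    # awk handles the floating-point comparison.
    if awk -v r="${recalls[$i]}" 'BEGIN { exit !(r >= 0.90) }'; then
      echo "${tiers[$i]}"
      return 0
    fi
  done
  return 1
}

sweet_spot 0.60 0.85 0.95 0.95   # prints T6 for the sample row
```

Tiers are scanned smallest-first, so the first one clearing the threshold is the sweet spot.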
If eval_health.available is true:
Eval Harness Health
──────────────────────────────────────────
Overall: 85% property pass rate | Avg recall: 0.92 | 20 total runs
Fixture Pass Rate Recall Runs
synth-sql-injection 1.00 0.95 4
Highlight any fixture with pass_rate < 1.0. If < 10 total runs: "Limited data."
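The fixture highlight can be sketched as a small filter over the table columns (name, pass rate, recall, runs); the piped-in rows are illustrative, not real fixture data:

```shell
# Flag fixtures whose pass rate is below 1.0, reading rows in the
# table's column order: name, pass_rate, recall, runs.
flag_failing_fixtures() {
  awk '$2 < 1.0 { print $1 " (pass rate " $2 ")" }'
}

printf '%s\n' \
  'synth-sql-injection 1.00 0.95 4' \
  'synth-xss 0.75 0.80 4' | flag_failing_fixtures
```

Comparing `$2` against the numeric constant 1.0 forces awk into numeric comparison, so "1.00" is correctly treated as passing.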
After the report, offer: "Want per-bead analytics? Run Galiana with --bead <id>."