Evaluates execution sessions via three-stage pipeline: mechanical verification (lint/build/tests), semantic evaluation (AC compliance/goal alignment), optional multi-model consensus. Triggered by 'evaluate this' or /ouroboros:evaluate.
Install: `npx claudepluginhub q00/ouroboros --plugin ouroboros`

This skill uses the workspace's default tool permissions.
Evaluate an execution session using the three-stage verification pipeline.
/ouroboros:evaluate <session_id> [artifact]
Trigger keywords: "evaluate this", "3-stage check"
The evaluation pipeline runs three progressive stages:
Stage 1: Mechanical Verification ($0 cost)
Stage 2: Semantic Evaluation (Standard tier)
Stage 3: Multi-Model Consensus (Frontier tier, optional)
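The progressive gating described above can be sketched as follows. This is a minimal illustration, not the plugin's actual implementation; the three stage functions are hypothetical stand-ins for the real checks.

```python
# Sketch of the three-stage pipeline; stage internals are placeholders.

def mechanical_checks(artifact):
    # Stand-in for Stage 1 (lint/build/tests); the real stage shells out
    # to project tooling and costs nothing in model calls.
    return {"passed": bool(artifact)}

def semantic_eval(artifact):
    # Stand-in for Stage 2 (AC compliance, goal alignment) on the standard tier.
    return {"passed": True, "score": 0.85}

def consensus_eval(artifact):
    # Stand-in for Stage 3 (multi-model consensus) on the frontier tier.
    return {"passed": True}

def run_pipeline(artifact, trigger_consensus=False):
    """Each stage gates the next; Stage 3 runs only when requested."""
    if not mechanical_checks(artifact)["passed"]:
        return {"approved": False, "highest_stage": 1}
    if not semantic_eval(artifact)["passed"]:
        return {"approved": False, "highest_stage": 2}
    if trigger_consensus:
        return {"approved": consensus_eval(artifact)["passed"], "highest_stage": 3}
    return {"approved": True, "highest_stage": 2}
```

The design point is cost control: cheap mechanical checks run first and short-circuit the pipeline, so model-backed stages only see artifacts that already lint, build, and pass tests.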
When the user invokes this skill:
The Ouroboros MCP tools are often registered as deferred tools that must be explicitly loaded before use. You MUST perform this step before proceeding.
Use the ToolSearch tool to find and load the evaluate MCP tool:

ToolSearch query: "+ouroboros evaluate"

This loads mcp__plugin_ouroboros_ouroboros__ouroboros_evaluate (with a plugin prefix). After ToolSearch returns, the tool becomes callable.

IMPORTANT: Do NOT skip this step. Do NOT assume MCP tools are unavailable just because they don't appear in your immediate tool list. They are almost always available as deferred tools that need to be loaded first.
Determine what to evaluate:
If a session_id is provided, use it directly.

Gather the artifact to evaluate.
Call the ouroboros_evaluate MCP tool:
Tool: ouroboros_evaluate
Arguments:
session_id: <session ID>
artifact: <the code/output to evaluate>
seed_content: <original seed YAML, if available>
acceptance_criterion: <specific AC to check, optional>
artifact_type: "code" (or "docs", "config")
trigger_consensus: false (true if user requests Stage 3)
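Assembled as a payload, a call might look like the following. This is a hypothetical example: the session ID, artifact, seed, and acceptance criterion are placeholder values, not output from a real session.

```python
import json

# Hypothetical arguments for the ouroboros_evaluate MCP tool; all values
# below are illustrative placeholders.
arguments = {
    "session_id": "sess-abc-123",
    "artifact": "def add(a, b):\n    return a + b\n",
    "seed_content": "goal: implement an add() helper",  # original seed YAML, if available
    "acceptance_criterion": "add() returns the sum of two numbers",  # optional
    "artifact_type": "code",        # or "docs", "config"
    "trigger_consensus": False,     # set True only when the user requests Stage 3
}

print(json.dumps(arguments, indent=2))
```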
Present results clearly:
If approved: Done! Your implementation passes all checks. Optional: `ooo evolve` to iteratively refine.

If mechanical checks fail (code_changes_detected: true): Next: fix the build/test failures above, then `ooo evaluate`, or `ooo ralph` for an automated fix loop.

If no code was produced (code_changes_detected: false): Next: run `ooo run` first to produce code, then `ooo evaluate`.

If semantic evaluation fails: Next: `ooo run` to re-execute with fixes, or `ooo evolve` for iterative refinement.

If goal drift is detected: Next: `ooo interview` to re-examine requirements, or `ooo unstuck` to challenge assumptions.

If the MCP server is not available, use the ouroboros:evaluator agent to perform a prompt-based evaluation.

Example:

User: /ouroboros:evaluate sess-abc-123
Evaluation Results
============================================================
Final Approval: APPROVED
Highest Stage Completed: 2
Stage 1: Mechanical Verification
[PASS] lint: No issues found
[PASS] build: Build successful
[PASS] test: 12/12 tests passing
Stage 2: Semantic Evaluation
Score: 0.85
AC Compliance: YES
Goal Alignment: 0.90
Drift Score: 0.08
Done! Your implementation passes all checks. Optional: `ooo evolve` to iteratively refine
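The Stage 1 output above can be approximated with ordinary shell commands. A minimal sketch, assuming an npm-style project; the command names are assumptions to adapt to your toolchain, not the plugin's actual check list.

```python
import subprocess

def mechanical_stage(commands=None):
    """Run each mechanical check and print PASS/FAIL lines like Stage 1."""
    # Default command set is an assumption (npm-style project).
    commands = commands or {
        "lint": ["npm", "run", "lint"],
        "build": ["npm", "run", "build"],
        "test": ["npm", "test"],
    }
    results = {}
    for name, cmd in commands.items():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = proc.returncode == 0
        print(f"[{'PASS' if results[name] else 'FAIL'}] {name}")
    # The stage passes only if every check passed.
    return all(results.values())
```

Because these checks exit nonzero on failure, the same exit-code convention lets the pipeline gate Stage 2 without any model involvement.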