Install with one command:

```
npx claudepluginhub q00/ouroboros
```
Evaluate execution with a three-stage verification pipeline.
This skill uses the workspace's default tool permissions.
# /ouroboros:evaluate
Evaluate an execution session using the three-stage verification pipeline.
## Usage

```
/ouroboros:evaluate <session_id> [artifact]
```

Trigger keywords: "evaluate this", "3-stage check"
## How It Works

The evaluation pipeline runs three progressive stages:

- **Stage 1: Mechanical Verification** ($0 cost)
  - Lint checks, build validation, test execution
  - Static analysis, coverage measurement
  - Fails fast if mechanical checks don't pass
- **Stage 2: Semantic Evaluation** (Standard tier)
  - AC compliance assessment
  - Goal alignment scoring
  - Drift measurement
  - Reasoning explanation
- **Stage 3: Multi-Model Consensus** (Frontier tier, optional)
  - Multiple models vote on approval
  - Only triggered by uncertainty or manual request
  - Majority ratio determines outcome
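The fail-fast flow of the three stages can be sketched as follows. This is an illustrative model only: the check names, the 0.7 semantic threshold, and the simple-majority rule are assumptions for the sketch, not the plugin's actual internals.

```python
from dataclasses import dataclass

@dataclass
class StageResult:
    name: str
    passed: bool
    detail: str = ""

def run_pipeline(mechanical_checks, semantic_score, consensus_votes=None,
                 semantic_threshold=0.7):
    """Fail-fast three-stage evaluation: mechanical -> semantic -> consensus."""
    results = []

    # Stage 1: mechanical checks (lint/build/test). Any failure stops the pipeline.
    for name, passed in mechanical_checks.items():
        results.append(StageResult(f"stage1:{name}", passed))
        if not passed:
            return False, results

    # Stage 2: semantic score against the seed's acceptance criteria.
    semantic_ok = semantic_score >= semantic_threshold
    results.append(StageResult("stage2:semantic", semantic_ok,
                               f"score={semantic_score:.2f}"))
    if not semantic_ok:
        return False, results

    # Stage 3: optional multi-model consensus; the majority ratio decides.
    if consensus_votes is not None:
        ratio = sum(consensus_votes) / len(consensus_votes)
        approved = ratio > 0.5
        results.append(StageResult("stage3:consensus", approved,
                                   f"ratio={ratio:.2f}"))
        return approved, results

    return True, results
```

Because Stage 1 returns as soon as any check fails, the (potentially costly) semantic and consensus stages never run on mechanically broken artifacts.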
## Instructions
When the user invokes this skill:
### Load MCP Tools (Required first)

The Ouroboros MCP tools are often registered as deferred tools that must be explicitly loaded before use. You MUST perform this step before proceeding.

- Use the ToolSearch tool to find and load the evaluate MCP tool: `ToolSearch query: "+ouroboros evaluate"`
- The tool will typically be named `mcp__plugin_ouroboros_ouroboros__ouroboros_evaluate` (with a plugin prefix). After ToolSearch returns, the tool becomes callable.
- If ToolSearch finds the tool → proceed with the MCP-based evaluation below. If not → skip to the Fallback section.
IMPORTANT: Do NOT skip this step. Do NOT assume MCP tools are unavailable just because they don't appear in your immediate tool list. They are almost always available as deferred tools that need to be loaded first.
### Evaluation Steps
- Determine what to evaluate:
  - If `session_id` is provided: use it directly
  - If no `session_id`: check the conversation for recent execution session IDs
- Gather the artifact to evaluate:
  - If the user specifies a file: read it with the Read tool
  - If recent execution output exists in the conversation: use that
  - Ask the user if it is unclear what to evaluate
- Call the `ouroboros_evaluate` MCP tool:

  ```
  Tool: ouroboros_evaluate
  Arguments:
    session_id: <session ID>
    artifact: <the code/output to evaluate>
    seed_content: <original seed YAML, if available>
    acceptance_criterion: <specific AC to check, optional>
    artifact_type: "code" (or "docs", "config")
    trigger_consensus: false (true if user requests Stage 3)
  ```
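The argument list above can be sketched as a payload builder. This is illustrative only: the helper name and the omit-optional-fields-when-absent behavior are assumptions; only the field names come from the skill description.

```python
# Hypothetical helper that assembles the ouroboros_evaluate argument payload.
def build_evaluate_args(session_id, artifact, seed_content=None,
                        acceptance_criterion=None, artifact_type="code",
                        trigger_consensus=False):
    args = {
        "session_id": session_id,
        "artifact": artifact,
        "artifact_type": artifact_type,          # "code", "docs", or "config"
        "trigger_consensus": trigger_consensus,  # True forces Stage 3
    }
    # Optional fields are omitted rather than sent as null (an assumption).
    if seed_content is not None:
        args["seed_content"] = seed_content
    if acceptance_criterion is not None:
        args["acceptance_criterion"] = acceptance_criterion
    return args
```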
- Present results clearly:
  - Show each stage's pass/fail status
  - Highlight the final approval decision
  - If rejected, explain the failure reason
  - Suggest fixes if evaluation fails
  - Always end with a 👉 suggestion based on the outcome:
    - APPROVED: 👉 Done! Your implementation passes all checks. Optional: `ooo evolve` to iteratively refine
    - REJECTED at Stage 1 (mechanical, `code_changes_detected: true`): 👉 Next: Fix the build/test failures above, then `ooo evaluate` → or `ooo ralph` for automated fix loop
    - REJECTED at Stage 1 (mechanical, `code_changes_detected: false`): 👉 Next: Run `ooo run` first to produce code, then `ooo evaluate`
    - REJECTED at Stage 2 (semantic): 👉 Next: `ooo run` to re-execute with fixes → or `ooo evolve` for iterative refinement
    - REJECTED at Stage 3 (consensus): 👉 Next: `ooo interview` to re-examine requirements → or `ooo unstuck` to challenge assumptions
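The outcome-to-suggestion mapping above can be encoded as a small lookup. The function name and signature are illustrative assumptions; the suggestion text is taken from the list above.

```python
def next_step_suggestion(final_approval, failed_stage=None,
                         code_changes_detected=True):
    """Map an evaluation outcome to the follow-up suggestion shown to the user."""
    if final_approval:
        return "👉 Done! Your implementation passes all checks. Optional: ooo evolve to iteratively refine"
    if failed_stage == 1:
        if code_changes_detected:
            return "👉 Next: Fix the build/test failures above, then ooo evaluate → or ooo ralph for automated fix loop"
        return "👉 Next: Run ooo run first to produce code, then ooo evaluate"
    if failed_stage == 2:
        return "👉 Next: ooo run to re-execute with fixes → or ooo evolve for iterative refinement"
    # Stage 3 (consensus) rejection: question the requirements themselves.
    return "👉 Next: ooo interview to re-examine requirements → or ooo unstuck to challenge assumptions"
```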
## Fallback (No MCP Server)

If the MCP server is not available, use the `ouroboros:evaluator` agent to perform a prompt-based evaluation:

- Delegate to the `ouroboros:evaluator` agent
- The agent performs a qualitative evaluation based on the seed spec
- Results are advisory (no numerical scoring without the Python core)
## Example

User: `/ouroboros:evaluate sess-abc-123`

```
Evaluation Results
============================================================
Final Approval: APPROVED
Highest Stage Completed: 2

Stage 1: Mechanical Verification
  [PASS] lint: No issues found
  [PASS] build: Build successful
  [PASS] test: 12/12 tests passing

Stage 2: Semantic Evaluation
  Score: 0.85
  AC Compliance: YES
  Goal Alignment: 0.90
  Drift Score: 0.08
```

👉 Done! Your implementation passes all checks. Optional: `ooo evolve` to iteratively refine