From agent-almanac
Evaluates competing approaches via independent assessments, reasoning-out-loud advocacy, confidence thresholds, and deadlock resolution for coherent decisions. Use when selecting among multiple options or justifying high-stakes choices.
npx claudepluginhub pjt222/agent-almanac

This skill uses the workspace's default tool permissions.
---
Evaluate competing approaches through independent assessment, explicit reasoning-out-loud advocacy, confidence-calibrated commitment thresholds, and structured deadlock resolution — producing coherent decisions from multiple reasoning paths.
Use when forage-solutions has identified multiple valid approaches and a selection must be made.

Assess each approach on its own merits before comparing them. The critical rule: do not let the assessment of approach A bias the assessment of approach B.
For each approach, evaluate independently:
Approach Evaluation Template:
┌───────────────────────┬───────────────────────────────────────────┐
│ Dimension             │ Assessment                                │
├───────────────────────┼───────────────────────────────────────────┤
│ Approach name         │                                           │
├───────────────────────┼───────────────────────────────────────────┤
│ Core mechanism        │ How does this approach solve the problem? │
├───────────────────────┼───────────────────────────────────────────┤
│ Strengths (2-3)       │ What does this approach do well?          │
├───────────────────────┼───────────────────────────────────────────┤
│ Risks (2-3)           │ What could go wrong? What is assumed?     │
├───────────────────────┼───────────────────────────────────────────┤
│ Evidence quality      │ How well-supported is this approach?      │
│                       │ (verified / inferred / speculated)        │
├───────────────────────┼───────────────────────────────────────────┤
│ Quality score (0-100) │ Overall assessment                        │
├───────────────────────┼───────────────────────────────────────────┤
│ Confidence (0-100)    │ How confident in this assessment?         │
└───────────────────────┴───────────────────────────────────────────┘
Fill this out for each approach separately. Do not write a comparison until all individual evaluations are complete.
Expected: Independent evaluations where each approach is assessed on its own terms. The evaluation of approach B does not reference approach A. Quality scores reflect genuine assessment, not ranking.
On failure: If the evaluations are contaminated (you find yourself writing "better than A" while assessing B), reset. Assess A completely, then clear the framing and assess B from scratch. If the scores are all identical, the evaluation dimensions are too coarse — add domain-specific criteria.
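As a concrete illustration, the template can be captured as a small data structure and filled in one approach at a time. This is a hypothetical sketch: the field and function names mirror the table above but are not a schema the skill requires.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ApproachEvaluation:
    """One filled-in copy of the Approach Evaluation Template above."""
    name: str
    core_mechanism: str                  # how this approach solves the problem
    strengths: list[str] = field(default_factory=list)  # 2-3 items
    risks: list[str] = field(default_factory=list)       # 2-3 items, incl. assumptions
    evidence_quality: str = "inferred"   # "verified" | "inferred" | "speculated"
    quality_score: int = 0               # 0-100, assessed on its own terms
    confidence: int = 0                  # 0-100, confidence in this assessment

def evaluate_independently(
    approaches: list[str],
    assess: Callable[[str], ApproachEvaluation],
) -> list[ApproachEvaluation]:
    """Assess each approach in isolation; comparison starts only after all are done."""
    evaluations = []
    for name in approaches:
        # `assess` sees a single approach at a time, never the others,
        # so the evaluation of B cannot be framed as "better than A".
        evaluations.append(assess(name))
    return evaluations
```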
Advocate for each approach proportionally to its quality. This is the AI equivalent of the bee waggle dance: making implicit reasoning explicit and public.
The purpose of reasoning-out-loud is to make the decision auditable — to yourself and to the user. If the reasoning cannot be articulated, the assessment is shallower than the score suggests.
Expected: Explicit reasoning for each approach that would be persuasive to a neutral observer. Cross-inspection reveals at least one consideration that was initially overlooked.
On failure: If advocacy feels perfunctory (going through motions), the approaches may not be genuinely different — they may be variations of the same idea. Check: do the approaches differ in mechanism, or only in implementation detail? If the latter, the decision may not matter much — pick either and move on.
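One way to make proportional advocacy concrete, building on the hypothetical ApproachEvaluation sketch above, is to allocate argument space in proportion to quality scores. The sentence budget here is an arbitrary illustration.

```python
def advocacy_budget(evaluations, total_sentences: int = 12) -> dict[str, int]:
    """Allocate reasoning-out-loud effort in proportion to each approach's quality.

    `evaluations` is any sequence of records with `.name` and `.quality_score`,
    such as the ApproachEvaluation sketch above. Stronger approaches get a fuller
    explicit case; weaker ones still get at least one honest sentence.
    """
    total_quality = sum(e.quality_score for e in evaluations) or 1
    return {
        e.name: max(1, round(total_sentences * e.quality_score / total_quality))
        for e in evaluations
    }
```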
Set the confidence threshold required to commit, calibrated to the decision's stakes.
Confidence Thresholds by Stakes:
┌─────────────────────┬───────────┬──────────────────────────────────┐
│ Decision Type       │ Threshold │ Rationale                        │
├─────────────────────┼───────────┼──────────────────────────────────┤
│ Easily reversible   │ 60%       │ Cost of trying and reverting is  │
│ (can undo)          │           │ low. Speed matters more than     │
│                     │           │ certainty                        │
├─────────────────────┼───────────┼──────────────────────────────────┤
│ Moderate stakes     │ 75%       │ Reverting has cost but is        │
│ (costly to reverse) │           │ possible. Worth investing in     │
│                     │           │ evaluation                       │
├─────────────────────┼───────────┼──────────────────────────────────┤
│ Irreversible or     │ 90%       │ Cannot undo. Must be confident.  │
│ high-stakes         │           │ If threshold not met, gather     │
│                     │           │ more information before deciding │
└─────────────────────┴───────────┴──────────────────────────────────┘
Expected: A clear commitment moment with stated reasoning. The decision is made at an appropriate confidence level for its stakes.
On failure: If the threshold is never met (can't reach 90% on an irreversible decision), ask: is the decision truly irreversible? Can it be decomposed into a reversible test phase + an irreversible commit? Most apparently irreversible decisions can be staged. If staging is impossible, communicate the uncertainty to the user and ask for guidance.
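Expressed as a minimal gate, the table reduces to a lookup plus a comparison. The stakes labels and percentages mirror the table above; the function itself is an illustrative assumption, not part of the skill.

```python
# Commitment thresholds keyed by how reversible the decision is (see table above).
THRESHOLDS = {
    "reversible": 60,     # cheap to undo: speed matters more than certainty
    "moderate": 75,       # costly but possible to reverse
    "irreversible": 90,   # cannot undo: gather more information if unmet
}

def ready_to_commit(confidence: int, stakes: str) -> bool:
    """True when confidence clears the threshold appropriate to the stakes."""
    return confidence >= THRESHOLDS[stakes]

# Example: 82% confidence is enough for a moderate-stakes decision,
# but not for an irreversible one.
assert ready_to_commit(82, "moderate")
assert not ready_to_commit(82, "irreversible")
```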
A deadlock occurs when two or more approaches have similar scores and the quorum threshold is not met for any single one.
Deadlock Resolution:
┌────────────────────────┬───────────────────────────────────────────┐
│ Deadlock Type          │ Resolution                                │
├────────────────────────┼───────────────────────────────────────────┤
│ Genuine tie            │ The approaches are equivalent. Pick one   │
│ (scores within 5%)     │ and commit. The cost of deliberating      │
│                        │ exceeds the cost of picking the "wrong"   │
│                        │ equivalent option. Flip a coin mentally   │
├────────────────────────┼───────────────────────────────────────────┤
│ Information deficit    │ The tie exists because evaluation is      │
│ (scores uncertain)     │ incomplete. Invest one more specific      │
│                        │ investigation — a targeted file read, a   │
│                        │ quick test — then re-score                │
├────────────────────────┼───────────────────────────────────────────┤
│ Oscillation            │ Scoring keeps flip-flopping depending on  │
│ (scores keep changing) │ which dimension gets attention. Time-box: │
│                        │ set a timer, evaluate once more, commit   │
│                        │ to the result regardless                  │
├────────────────────────┼───────────────────────────────────────────┤
│ Approach merge         │ The best parts of A and B can be          │
│ (compatible strengths) │ combined. Check for compatibility. If     │
│                        │ merge is coherent, use it. If forced,     │
│                        │ don't — pick one                          │
└────────────────────────┴───────────────────────────────────────────┘
Expected: Deadlock resolved through the appropriate mechanism. The resolution is decisive — no lingering doubt that undermines execution.
On failure: If the deadlock persists through all resolution strategies, the decision may be premature. Ask the user: "I see two equally strong approaches: [A] and [B]. [Brief case for each.] Which aligns better with your priorities?" Delegating a genuine tie to the user is not a failure — it is acknowledging that the decision depends on values the AI cannot infer.
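A sketch of how the table might be applied mechanically: classify the deadlock first, then use the matching resolution. The 5% tie margin comes from the table; the other heuristics (how many re-scores count as oscillation, how compatibility is signalled) are assumptions.

```python
def classify_deadlock(top, runner_up, *, evaluation_complete: bool,
                      rescore_count: int, strengths_compatible: bool) -> str:
    """Map a stalled comparison of the two leading approaches onto a deadlock type.

    `top` and `runner_up` are records with a 0-100 `.quality_score`, such as the
    ApproachEvaluation sketch above.
    """
    gap = abs(top.quality_score - runner_up.quality_score)
    if not evaluation_complete:
        return "information deficit"  # one targeted investigation, then re-score
    if rescore_count >= 3:
        return "oscillation"          # time-box a final evaluation and commit to it
    if strengths_compatible:
        return "approach merge"       # combine only if the merge is coherent
    if gap <= 5:
        return "genuine tie"          # equivalent options: pick either and move on
    return "no deadlock"              # a clear leader exists; commit normally
```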
After committing to a decision, evaluate whether the process produced genuine coherence or just a decision.
Expected: A brief quality check that either confirms the decision or identifies it as weak. If weak, return to the appropriate earlier step rather than proceeding on shaky ground.
On failure: If the quality check reveals that the decision was preference-based rather than evidence-based, acknowledge it honestly. Sometimes preference is all that is available — but it should be labeled as such, not dressed up as analysis.
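The quality check can be as small as a few questions asked of the committed decision. This checklist is illustrative only; the specific questions are assumptions drawn from the steps above.

```python
def decision_quality_check(evaluation, advocacy_written: bool, threshold_met: bool) -> list[str]:
    """Return concerns that suggest revisiting an earlier step before executing.

    `evaluation` is the committed approach's record (e.g. the ApproachEvaluation
    sketch above). An empty list means the decision stands.
    """
    concerns = []
    if evaluation.evidence_quality == "speculated":
        concerns.append("evidence is speculated, not verified: decision may be preference-based")
    if not advocacy_written:
        concerns.append("no explicit reasoning recorded: assessment may be shallower than the score")
    if not threshold_met:
        concerns.append("committed below the stakes-appropriate confidence threshold")
    return concerns
```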
build-consensus — the multi-agent consensus model that this skill adapts to single-agent reasoning
forage-solutions — scouts the solution space that coherence evaluates; typically precedes this skill
coordinate-reasoning — manages information flow during multi-path evaluation
center — establishes the balanced baseline needed for unbiased evaluation
meditate — clears assumptions between evaluating different approaches