By jamon8888
Structured decision protocol. Forces the questions you would skip, the perspectives you would miss, and the math you would approximate. Based on Kahneman (MAP), Klein (pre-mortem), Tetlock (calibration).
npx claudepluginhub jamon8888/cc-suite --plugin Sentinel

Generates a personal cognitive bias profile from the decision ledger. Analyzes patterns in calibration errors, retrospective assessments, and recurring bias flags to identify the user's most active blind spots. Output is a personal bias fingerprint that adjusts future triage weights — so Sentinel pre-loads the biases most relevant to THIS user, not just the generic decision-type defaults. Requires a minimum of 5 resolved decisions in the ledger.
Generate diverse perspectives BEFORE evaluating. Used when the user is stuck in a narrow frame. 5-7 perspectives from different stakeholders, timeframes, or value systems. Then optionally feeds into /sentinel.
Collective decision protocol. Generates individualized pre-meeting scoring sheets, structures the meeting agenda by divergence, and facilitates group discussion to surface unique information and prevent groupthink. This is NOT /sentinel run by multiple people — it is a fundamentally different protocol that treats group dynamics as the primary risk variable.
Post-decision hygiene. Called 3-6 months after a recorded decision to review outcomes while correcting for the biases that contaminate retrospective judgment: Hindsight Bias (92), Choice-Supportive Bias (91), Outcome Bias (14), Self-Serving Bias (28), and Rosy Retrospection (94). The protocol forces evaluation of decision QUALITY separately from outcome QUALITY — these are not the same thing, and conflating them degrades future judgment.
Challenge the question itself before answering it. Sometimes the problem isn't the answer; it's the question. Uses six reframing techniques to escape framing bias.
Review past predictions and measure calibration. Calculates Brier scores, identifies patterns, suggests improvements.
First-time walkthrough. Explains what Sentinel does and doesn't do.
Main decision analysis command. Routes to the right protocol based on stakes and complexity. Always starts with structure (MAP), adds layers as complexity increases. Questions and pre-mortem are complements, not the core.
Tracks predictions and measures calibration over time. Based on Tetlock's superforecasting research. The ONLY way to improve judgment long-term is feedback on past predictions. Manages the decision ledger, calculates Brier scores, identifies patterns.
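The calibration math above can be illustrated with a minimal Brier-score sketch. The ledger format and function name here are assumptions for illustration, not the plugin's actual schema:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.

    forecasts: list of (probability, outcome) pairs, where probability is
    the predicted chance the event happens (0.0-1.0) and outcome is 1 if
    it happened, 0 if it did not. Lower is better; always guessing 50%
    scores 0.25 regardless of what happens.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A small hypothetical ledger of resolved predictions.
ledger = [(0.8, 1), (0.7, 0), (0.9, 1), (0.6, 1)]
print(round(brier_score(ledger), 3))  # prints 0.175
```

Tracking this number over many resolved decisions is what turns the ledger into feedback: a falling Brier score means stated confidence is converging on observed frequency.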
Pre-mortem and counter-argument generation. Imagines the plan has failed and works backward to identify failure modes. Based on Klein (2007) and Mitchell et al. (1989) — prospective hindsight increases risk identification in group settings. In solo use, the primary value is structural: it forces explicit failure-mode enumeration rather than relying on intuitive optimism. Also generates the steelman of the losing option. Activated on STANDARD and FULL protocols.
Structures collective decision-making to prevent group-specific biases. Addresses Shared Information Bias (66), Groupshift (71), Courtesy Bias (89), Bandwagon Effect (87), and Groupthink (15) — biases that emerge from group dynamics and cannot be corrected by individual analysis. Implements the two-phase independent-scoring protocol: collect individual MAP scores BEFORE group discussion, quantify noise, then structure the meeting by divergence rather than consensus. Activated by /sentinel-group or when triage detects group=true.
Checks the logical structure of arguments. Not bias detection - structural validity. Identifies hidden premises, false dilemmas, unsupported claims. Proposes logically valid reformulations.
Measures judgment variance for MAP scores or estimates. Use when 3+ scores or estimates need noise quantification. Activated on FULL protocol when options diverge by more than 2 MAP points, and for /sentinel-group.
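The variance measurement described above might look roughly like this sketch. Judge names, dimension keys, and the divergence-first ordering are illustrative assumptions, not the agent's real interface:

```python
from statistics import mean, stdev

def noise_audit(scores_by_judge):
    """scores_by_judge: {judge: {dimension: score}}.

    Returns per-dimension mean and spread (sample stdev), sorted so the
    most divergent dimensions come first -- the agenda order that surfaces
    disagreement instead of burying it under early consensus.
    """
    dims = next(iter(scores_by_judge.values())).keys()
    report = {}
    for dim in dims:
        vals = [s[dim] for s in scores_by_judge.values()]
        report[dim] = {"mean": mean(vals), "spread": stdev(vals)}
    return sorted(report.items(), key=lambda kv: -kv[1]["spread"])

# Hypothetical independent MAP scores collected before the meeting.
scores = {
    "alice": {"market_fit": 8, "team": 6, "price": 4},
    "bob":   {"market_fit": 7, "team": 9, "price": 5},
    "carol": {"market_fit": 8, "team": 3, "price": 4},
}
for dim, stats in noise_audit(scores):
    print(dim, round(stats["spread"], 2))
```

In this toy data the "team" dimension carries almost all the noise, so a divergence-ordered meeting would discuss it first.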
Asks the verification questions that the decision-maker would skip under pressure. Does NOT diagnose biases - poses the questions that surface them. Based on Kahneman's decision hygiene (Noise, ch.19): "fact-based questions that are as independent of each other as possible." Each question targets a specific reasoning trap but is phrased as a genuine inquiry, not a label.
Compares your plan against what actually happens to similar plans. Provides base rate ESTIMATES with explicit confidence and tells you WHERE TO VERIFY them. Does not pretend to have data it doesn't have. Based on Kahneman's reference class forecasting.
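One common way to combine an outside-view base rate with the inside view is shrinkage toward the reference class. The sketch below is illustrative only; the prior-strength constant and the numbers are assumptions, not taken from the plugin:

```python
def reference_class_forecast(base_rate, base_n, inside_view, weight=None):
    """Blend the outside view (base rate from base_n similar cases) with
    the inside view. Absent better information, weight the base rate by
    how much evidence backs it, shrinking the forecast toward the
    reference class as the class grows."""
    if weight is None:
        weight = base_n / (base_n + 10)  # assumed prior strength of 10
    return weight * base_rate + (1 - weight) * inside_view

# Outside view: 30% of 40 comparable launches hit their date.
# Inside view: the team estimates an 80% chance.
print(round(reference_class_forecast(0.30, 40, 0.80), 2))
```

The point is not the exact blending rule but the discipline: the inside view only moves the estimate away from the base rate in proportion to how weak the reference-class evidence is.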
Tests whether the proposed solution is proportional to the actual scale of the problem. Addresses Scope Neglect (62), Additive Bias (57), Quantification Bias (73), and Illusion of Validity (46). Uses order-of-magnitude perturbation: if the problem were 10x larger or 10x smaller, would the solution scale appropriately? Also checks for subtraction-blindness — solutions that add complexity when removing something would be better. Lightweight agent, activated on STANDARD and FULL protocols when solutions involve resource allocation, budget, team size, or timeline.
Breaks a decision into independent dimensions, scores each one separately, and distinguishes score from confidence. This is the MAP (Mediating Assessments Protocol) from Kahneman 2021 - the most empirically supported technique in the plugin. It works by STRUCTURE (choice architecture), not by telling you what's wrong. Kahneman: "treat options like candidates - evaluate aspects independently, fact-based, delay intuition."
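The independent-dimension scoring can be sketched as follows. Dimension names, the 1-10 scale, and the aggregation step are hypothetical, not the plugin's actual MAP schema:

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    dimension: str
    score: int          # 1-10, fact-based, judged in isolation
    confidence: float   # 0-1: how solid the evidence behind the score is

def map_summary(assessments):
    """Keep score and confidence separate; aggregate only at the end,
    after every dimension has been judged on its own evidence, and flag
    the dimension whose score rests on the weakest evidence."""
    total = sum(a.score for a in assessments) / len(assessments)
    weakest = min(assessments, key=lambda a: a.confidence)
    return total, weakest.dimension

option_a = [
    Assessment("strategic_fit", 8, 0.9),
    Assessment("execution_risk", 5, 0.4),
    Assessment("financials", 7, 0.7),
]
avg, shaky = map_summary(option_a)
print(round(avg, 2), shaky)
```

Delaying the aggregate until every dimension is scored is the "delay intuition" step: a global gut feeling formed early would otherwise bleed into each individual assessment.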
Tests temporal coherence of preferences. Addresses Hyperbolic Discounting, Projection Bias, and End-of-History Illusion. Also catches Plan Continuation Bias. Activated on FULL protocol and decisions with commitment horizon exceeding 6 months.
Core decision analysis skill. Trigger whenever the user: describes a decision, choice, or dilemma in natural language ("I need to choose between...", "we're torn between...", "should we...", "I'm not sure whether to...", "help me decide"); faces a trade-off between options; asks to triage, structure, or analyse a decision; asks for noise audit, calibration check, or bias catalog lookup; uses phrases like "decision hygiene" or "triage this decision". Trigger proactively — do not wait for technical vocabulary. Also activates for: structured scoring of options, judgment variance measurement, prediction tracking, heuristic pattern queries.
Hiring and talent evaluation decision hygiene. Trigger when the user: is deciding whether to hire someone ("on hésite à recruter", "we're torn between candidates", "should we extend an offer", "help me evaluate this candidate"); is running or reviewing interviews; asks about interview bias, candidate scoring, or evaluation consistency; uses phrases like "hiring decision", "evaluate candidate", "recruitment", "talent evaluation", "interview calibration", or "finalist". Also activates for: structured candidate scoring, hiring noise audits, and committee calibration.
M&A and deal decision hygiene. Trigger when the user: is evaluating an acquisition, merger, or investment ("on a reçu une offre de rachat", "should we acquire this", "we received a term sheet", "help me evaluate this deal", "is this company worth buying"); is running due diligence; asks about deal valuation, integration risk, or investment committee preparation; uses phrases like "acquisition", "merger", "due diligence", "target evaluation", "investment committee", "IC memo", "LOI", "term sheet", "synergies", "valuation", "integration", or "deal". Also activates for M&A noise audits, legal DD review, and pre-mortem on integration plans.
Product and roadmap decision hygiene. Trigger when the user: is deciding what to build or prioritise ("on doit prioriser le backlog", "help me decide what to build next", "which features do we tackle first", "should we build or buy this"); is reviewing a product roadmap; asks about sprint planning, feature scoring, or backlog grooming; uses phrases like "product roadmap", "feature prioritization", "sprint planning", "backlog grooming", "build vs buy", "RICE", "MoSCoW". Also activates for: product noise audits, roadmap pre-mortem, sprint calibration, and feature scorecard generation.
Post-decision and retrospective hygiene. Trigger when the user: wants to understand why something succeeded or failed ("pourquoi notre lancement a-t-il raté", "why did the project fail", "what went wrong last quarter", "let's review what happened"); is running a post-mortem, retrospective, or after-action review; asks about lessons learned or debrief; uses phrases like "post-mortem", "postmortem", "retrospective", "retro", "what went wrong", "lessons learned", "debrief", "after-action", "review what happened", "why did it fail", or "why did it succeed". Also activates for: postmortem scorecards, retrospective bias audits (hindsight, outcome bias, self-serving bias).
Marketing and communication strategy decision hygiene. Trigger when the user: is building or reviewing a marketing plan ("on prépare notre plan de communication", "which channel should we invest in", "help us decide on our go-to-market", "should we increase our media budget"); is evaluating a campaign or initiative; asks about channel mix, attribution, or budget allocation; uses phrases like "marketing strategy", "campaign planning", "go-to-market", "media budget", "channel mix", "GTM", "campaign brief", or "comms plan". Also activates for: campaign pre-mortems, budget noise audits, strategic plan review, attribution bias checks, and zero-based budget challenges.
## HACIENDA COWORK AND CLAUDE CODE PLUGIN
Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns.
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification
Permanent coding companion for Claude Code — survives any update. MCP-based terminal pet with ASCII art, stats, reactions, and personality.