# rune
Applies structured reasoning frameworks to complex coding problems: 19 analytical tools, 12 bias detectors, 10 decomposition methods, 10 mental models, Cynefin classification, ethical checks.
npx claudepluginhub rune-kit/rune --plugin @rune/analytics

This skill uses the workspace's default tool permissions.
Structured reasoning utility for problems that resist straightforward analysis. Receives a problem statement, detects cognitive biases, selects the appropriate analytical framework, applies it step-by-step with evidence, and returns ranked solutions with a communication structure. Stateless — no memory between calls.
Inspired by McKinsey problem-solving methodology and cognitive science research on decision-making errors.
Dependencies: None — pure L3 reasoning utility.
Called by:
- debug (L2): complex bugs that resist standard debugging
- brainstorm (L2): structured frameworks for creative exploration
- plan (L2): complex architecture decisions with many trade-offs
- ba (L2): requirement analysis when scope is ambiguous

Inputs:
- problem: string — clear statement of the problem to analyze
- context: string — (optional) relevant background, constraints, symptoms observed
- goal: string — (optional) desired outcome or success criteria
- mode: string — (optional) "analyze" | "decide" | "decompose" | "communicate"
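A minimal sketch of this contract in TypeScript (the field names come from the list above; the interface name and the sample values are hypothetical):

```ts
// Hypothetical typing of the skill's input, derived from the parameter list.
interface RuneInput {
  problem: string;   // required: one clear problem statement
  context?: string;  // background, constraints, observed symptoms
  goal?: string;     // desired outcome or success criteria
  mode?: "analyze" | "decide" | "decompose" | "communicate"; // if omitted, Step 1 classification decides
}

// Example payload (values are illustrative only).
const input: RuneInput = {
  problem: "Checkout p99 latency doubled after the last deploy",
  context: "No schema changes; traffic flat; only the payment service was redeployed",
  mode: "analyze",
};
```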
## Step 1: Restate and classify the problem

Read the problem and context inputs. Restate the problem in one sentence to confirm understanding.
Classify the problem type:
| Type | Signal Words | Primary Approach |
|---|---|---|
| Root cause / diagnostic | "why", "broken", "failing", "declining" | 5 Whys, Fishbone, Root Cause |
| Decision / choice | "should I", "choose", "compare", "vs" | Decision Frameworks (Step 3b) |
| Decomposition | "break down", "understand", "structure" | Decomposition Methods (Step 3c) |
| Creative / stuck | "stuck", "no ideas", "exhausted options" | SCAMPER, Collision-Zone, Inversion |
| Architecture / scale | "design", "architecture", "will it scale" | First Principles, Scale Game |
## Step 1.5: Classify the complexity domain (Cynefin)

Before selecting a framework, classify the problem's complexity domain. This determines HOW MUCH analysis is warranted and WHICH class of frameworks applies.
| Domain | Signal | Framework Class | Analysis Depth |
|---|---|---|---|
| Clear (obvious) | Best practice exists, cause-effect obvious, "just do X" | Direct action — no framework needed | Minimal — act immediately |
| Complicated (expert analysis) | Cause-effect discoverable through analysis, multiple right answers exist | Analytical frameworks (5 Whys, Fishbone, SWOT, Weighted Matrix) | Moderate — structured analysis |
| Complex (emergent) | Cause-effect only visible in retrospect, no right answer — only better probes | Probe-sense-respond (Pre-Mortem, Systems Map, Sensitivity Analysis, PESTLE) | Deep — experiment and iterate |
| Chaotic (crisis) | No cause-effect, need to stabilize first | Act-sense-respond — triage, then analyze | Immediate — stabilize before analyzing |
| Confused (don't know which domain) | Can't classify → decompose until sub-problems land in a known domain | Decomposition first (Issue Tree, MECE) → re-classify each branch | Meta — decompose then classify |
Output: State the domain and justify in one sentence. If Confused, decompose before proceeding.
Why this matters: Applying Complicated-domain tools (deep analysis) to a Clear problem wastes effort. Applying Clear-domain tools ("just do X") to a Complex problem creates false confidence. Match the tool to the terrain.
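A first-pass classifier over the signal words from Steps 1 and 1.5 could look like the sketch below (hypothetical helper: keyword matching only seeds the classification, and judgment overrides it):

```ts
// Signal words copied from the tables above; matching is a first pass only.
const typeSignals: Record<string, string[]> = {
  "root cause": ["why", "broken", "failing", "declining"],
  decision: ["should i", "choose", "compare", " vs "],
  decomposition: ["break down", "understand", "structure"],
  creative: ["stuck", "no ideas", "exhausted options"],
  architecture: ["design", "architecture", "scale"],
};

function classifyType(problem: string): string {
  const text = problem.toLowerCase();
  for (const [type, signals] of Object.entries(typeSignals)) {
    if (signals.some((s) => text.includes(s))) return type;
  }
  // No signal matched: treat as the Confused domain and decompose first.
  return "confused";
}

console.log(classifyType("Why is the nightly build failing intermittently?"));
// → "root cause"
```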
## Step 2: Detect cognitive biases

Scan the problem statement and context for bias indicators. Check the top 6 most dangerous biases:
| Bias | Detection Question | Debiasing Strategy |
|---|---|---|
| Confirmation Bias | Have we actively sought evidence AGAINST our preferred option? Are we explaining away contradictory data? | Assign devil's advocate. Explicitly seek disconfirming evidence. Require equal analysis of all options. |
| Anchoring Effect | Would our evaluation change if we saw options in a different order? Is the first number/proposal dominating? | Generate evaluation criteria BEFORE seeing options. Score independently before group discussion. |
| Sunk Cost Fallacy | If we were starting fresh today with zero prior investment, would we still choose this? Are we justifying by pointing to past spend? | Evaluate each option as if starting fresh (zero-based). Separate past investment from forward-looking decision. |
| Status Quo Bias | Are we holding the current state to the SAME standard as alternatives? Would we actively choose the status quo if starting from scratch? | Explicitly include status quo as an option evaluated with same rigor. Calculate the cost of inaction. |
| Overconfidence | What is our confidence level, and what is it based on? Have we been right about similar predictions before? | Use pre-mortem to stress-test. Track calibration. Seek outside perspectives. |
| Planning Fallacy | Are our estimates based on best-case assumptions? Have similar projects in the past taken longer or cost more? | Use reference class forecasting — compare to actual outcomes of similar past efforts rather than bottom-up estimates. |
Additional checks to apply when relevant:
Steel Manning (apply when evaluating competing options): Before dismissing any option, construct the STRONGEST possible version of the argument for it. If you can't articulate why a smart, informed person would choose it, you haven't understood it yet. Steel Manning prevents strawman dismissals and forces genuine evaluation.
Output: List 2-3 biases most likely to affect THIS specific problem, with their debiasing strategy. If comparing options, include a steel-manned case for the option you're least inclined toward. Weave these warnings into the analysis.
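If the checklist were carried as data rather than prose, one plausible shape is below (a hypothetical structure; the skill itself is prompt-driven):

```ts
// Each entry pairs a detection question with its debiasing strategy,
// mirroring the table above.
interface BiasCheck {
  bias: string;
  detect: string; // question to ask of the current analysis
  debias: string; // strategy to apply if the answer is worrying
}

const flagged: BiasCheck[] = [
  {
    bias: "Confirmation Bias",
    detect: "Have we actively sought evidence AGAINST the preferred option?",
    debias: "Assign a devil's advocate; require equal analysis of all options.",
  },
  {
    bias: "Anchoring Effect",
    detect: "Would the evaluation change if options arrived in another order?",
    debias: "Fix evaluation criteria before seeing options; score independently.",
  },
];
```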
## Step 3a: Select the analytical framework

Choose the framework based on what is unknown about the problem:
| Situation | Framework |
|---|---|
| Root cause unknown — symptoms clear | 5 Whys |
| Multiple potential causes from different domains | Fishbone (Ishikawa) |
| Standard assumptions need challenging | First Principles |
| Creative options needed for known problem | SCAMPER |
| Must prioritize among known solutions | Impact Matrix |
| Conventional approaches exhausted, need breakthrough | Collision-Zone Thinking |
| Feeling forced into "the only way" | Inversion Exercise |
| Same pattern appearing in 3+ places | Meta-Pattern Recognition |
| Complexity spiraling, growing special cases | Simplification Cascades |
| Unsure if approach survives production scale | Scale Game |
| High-stakes irreversible decision — need to find blind spots | Pre-Mortem |
| Need to determine how much analysis effort is warranted | Reversibility Filter |
| Quantifiable outcomes with estimable probabilities | Expected Value Calculation |
| Key assumptions uncertain, need to know what flips the decision | Sensitivity Analysis |
| Need holistic internal + external assessment of a project/product/strategy | SWOT Analysis |
| Decision depends on macro-environment factors beyond your control | PESTLE Analysis |
| Competitive landscape unclear, need to assess market position | Porter's Five Forces |
| Need a rough estimate with very little data | Fermi Estimation |
| Problem involves ethical trade-offs or stakeholder harm | Ethical Reasoning (→ Step 5.5) |
State which framework was selected and why.
SWOT Analysis (holistic assessment): map internal Strengths and Weaknesses against external Opportunities and Threats, then derive actions from the pairings (e.g., use strengths to capture opportunities; shore up weaknesses that threats expose).
PESTLE Analysis (macro-environment scan): When the problem is influenced by forces beyond the project/org:
| Factor | Key Questions |
|---|---|
| Political | Government policy, regulation changes, political stability, trade restrictions? |
| Economic | Market conditions, inflation, exchange rates, funding climate, customer spending? |
| Social | Demographics, cultural trends, user behavior shifts, workforce expectations? |
| Technological | New tech, disruption risk, automation, platform shifts, AI impact? |
| Legal | Compliance requirements, IP, data privacy (GDPR/CCPA), licensing, liability? |
| Environmental | Sustainability expectations, carbon footprint, resource scarcity, ESG pressure? |
For each factor: rate impact (high/medium/low) and timeline (imminent/near-term/long-term). Focus analysis on high-impact factors only.
Porter's Five Forces (competitive position): rate competitive rivalry, threat of new entrants, threat of substitutes, buyer bargaining power, and supplier bargaining power to judge how defensible the market position is.
Fermi Estimation (order-of-magnitude reasoning): when data is scarce but a rough estimate is needed, decompose the quantity into factors you can bound, estimate each factor to the nearest order of magnitude, multiply, and sanity-check the result against any known reference point; a worked sketch follows.
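In the sketch below every factor is an assumed order-of-magnitude guess, not measured data:

```ts
// Hypothetical question: how many API requests/day should a mid-size SaaS expect?
const activeUsers = 10_000;    // ~10^4 daily active users (assumed)
const sessionsPerUser = 3;     // a few sessions per user per day (assumed)
const requestsPerSession = 50; // tens of API calls per session (assumed)

const requestsPerDay = activeUsers * sessionsPerUser * requestsPerSession;
console.log(requestsPerDay.toLocaleString()); // 1,500,000 → order of 10^6
```

The output's value is its exponent, not its digits: if capacity planning had assumed 10^4 requests/day, being at 10^6 is the finding.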
## Step 3b: Decision frameworks

When the problem is a decision/choice, use these specialized frameworks:
- Reversibility Filter (always apply first): if the decision is cheap to reverse (a two-way door), decide quickly with light analysis; reserve deep analysis for decisions that are costly or impossible to reverse.
- Weighted Criteria Matrix (multi-option comparison): define at most 5 criteria, weight them, score every option against every criterion, and rank by weighted total; see the sketch after this list.
- Pros-Cons-Fixes (binary or few-option, quick): list pros and cons for each option, then propose a fix for every con; judge options by the cons that remain unfixable.
- Pre-Mortem (high-stakes, irreversible): assume the decision has already failed badly, list the most plausible reasons why, and attach a mitigation to each before committing.
- Expected Value (quantifiable outcomes): for each option, sum probability × payoff across outcomes; compare expected values and note how sensitive the ranking is to the probability estimates.
- Regret Minimization (life-scale or career-scale decisions): project to the end of the relevant time horizon and ask which option you would most regret not having tried; favor the choice that minimizes that regret.
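A minimal sketch of the Weighted Criteria Matrix, assuming hypothetical criteria, weights, and scores (weights sum to 1, scores run 1-5, at most 5 criteria per the failure-mode table):

```ts
type Scores = Record<string, number>;

// Hypothetical criteria and weights (must sum to 1).
const weights: Scores = { cost: 0.4, speed: 0.35, risk: 0.25 };

// Hypothetical options scored 1-5 against each criterion.
const options: Record<string, Scores> = {
  rewrite: { cost: 2, speed: 5, risk: 2 },
  refactor: { cost: 4, speed: 3, risk: 4 },
};

const weightedScore = (s: Scores): number =>
  Object.entries(weights).reduce((sum, [c, w]) => sum + w * s[c], 0);

for (const [name, s] of Object.entries(options)) {
  console.log(`${name}: ${weightedScore(s).toFixed(2)}`);
}
// rewrite: 3.05, refactor: 3.65 → refactor wins under these weights;
// re-run with shifted weights (Sensitivity Analysis) before trusting the result.
```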
## Step 3c: Decomposition methods

When the problem needs structuring before analysis:
| Method | When to Use | Pattern |
|---|---|---|
| Issue Tree | Don't have a hypothesis yet, exploring | Root Question → Sub-questions (why/what) → deeper |
| Hypothesis Tree | Have domain expertise, need speed | Hypothesis → Conditions that must be true → Evidence needed |
| Profitability Tree | Business performance problem | Profit → Revenue (Price × Volume) → Costs (Fixed + Variable) |
| Process Flow | Operational/efficiency problem | Step 1 → Step 2 → ... → find bottleneck |
| Systems Map | Complex with feedback loops | Variables → causal links (+/-) → reinforcing/balancing loops |
| Customer Journey | User/customer problem | Awareness → Consideration → Purchase → Experience → Retention |
All decompositions MUST pass the MECE test: branches must be Mutually Exclusive (no overlap between branches) and Collectively Exhaustive (no gaps: together the branches cover the whole problem).
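An issue-tree node with a crude overlap flag might look like this (hypothetical shape; real MECE validation is a judgment call, not string matching):

```ts
interface IssueNode {
  label: string;
  children?: IssueNode[];
}

// Example decomposition of a latency question into non-overlapping buckets.
const tree: IssueNode = {
  label: "Why is checkout latency rising?",
  children: [
    { label: "Database query time" },
    { label: "Network transfer time" },
    { label: "Application compute time" },
  ],
};

// Crude ME check: siblings that share a distinctive word may overlap.
function overlappingSiblings(node: IssueNode): [string, string][] {
  const kids = node.children ?? [];
  const words = (l: string) =>
    new Set(l.toLowerCase().split(/\W+/).filter((w) => w.length > 4));
  const pairs: [string, string][] = [];
  for (let i = 0; i < kids.length; i++) {
    for (let j = i + 1; j < kids.length; j++) {
      const a = words(kids[i].label);
      if (Array.from(words(kids[j].label)).some((w) => a.has(w)))
        pairs.push([kids[i].label, kids[j].label]);
    }
  }
  return pairs;
}

console.log(overlappingSiblings(tree)); // [] → no flagged overlaps; CE (no gaps) still needs human review
```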
## Step 4: Execute the framework

Execute the selected framework with discipline, following the steps defined in Step 3a/3b/3c. At each step, apply the debiasing strategies identified in Step 2.
## Step 5: Cross-check with mental models

Cross-check the framework output against relevant mental models:
| Model | Core Question | When It Helps |
|---|---|---|
| Second-Order Thinking | "And then what?" — consequences of consequences | Decisions with delayed effects |
| Bayesian Updating | How should we update our beliefs given this new evidence? | When new data arrives during analysis |
| Margin of Safety | What buffer do we need for things going wrong? | Planning timelines, budgets, capacity |
| Opportunity Cost | What's the best alternative we're giving up? | Resource allocation, project prioritization |
| Occam's Razor | Among competing explanations, prefer the simplest | Multiple possible root causes |
| Leverage Points | Where does small effort produce large effect? | System redesign, process improvement |
| Hanlon's Razor | Never attribute to malice what can be explained by incompetence or misaligned incentives | Organizational problems, team conflicts |
| Regression to the Mean | Is this extreme result likely to revert to average? | After exceptional performance (good or bad) |
| Dialectical Thinking | Thesis + Antithesis → can we synthesize a higher-order solution? | Two opposing valid positions, binary choice feels forced |
| Fermi Estimation | Can we get a rough order-of-magnitude estimate to sanity-check? | Claims, estimates, or projections that feel off |
Apply 1-2 most relevant models. State which and why.
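As a worked example of one model, Bayesian Updating reduces to multiplying prior odds by a likelihood ratio; the numbers below are hypothetical:

```ts
// Returns the posterior probability of a hypothesis after seeing evidence.
// pEvidenceIfTrue / pEvidenceIfFalse is the likelihood ratio.
function bayesUpdate(prior: number, pEvidenceIfTrue: number, pEvidenceIfFalse: number): number {
  const priorOdds = prior / (1 - prior);
  const posteriorOdds = priorOdds * (pEvidenceIfTrue / pEvidenceIfFalse);
  return posteriorOdds / (1 + posteriorOdds);
}

// Assumed numbers: 30% prior that the cache is the root cause; the new log
// line is 4x more likely to appear if it is (0.8 vs 0.2).
console.log(bayesUpdate(0.3, 0.8, 0.2).toFixed(2)); // ≈ 0.63
```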
## Step 5.5: Ethics check

Run this check when the problem involves: user data, automation replacing human judgment, resource allocation affecting people, public-facing decisions, or stakeholder trade-offs.
| Lens | Core Question |
|---|---|
| Harm | Who could be harmed by each option? How severe? How reversible? |
| Fairness | Does this option disadvantage any group disproportionately? |
| Transparency | Would we be comfortable if our reasoning was public? |
| Autonomy | Does this preserve user choice, or does it decide for them? |
| Long-term trust | Will this erode trust with users/team/community over time? |
This is NOT a gate — it produces warnings, not blocks. If an ethical concern is identified, note it alongside the solution in Step 6 so the decision-maker can weigh it.
Skip this step for purely technical problems with no stakeholder impact (e.g., "which sorting algorithm").
## Step 6: Derive and rank solutions

From the framework output, derive 2-3 actionable solutions. For each: describe concretely what to do, rate impact and effort (high/medium/low), and flag the bias most likely to distort its valuation (mirroring the template below).
Rank solutions by impact/effort ratio.
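One crude way to operationalize that ranking, assuming a 3-point scale on both axes (the solutions shown are placeholders):

```ts
const scale = { low: 1, medium: 2, high: 3 } as const;
type Level = keyof typeof scale;

interface Solution { name: string; impact: Level; effort: Level }

const solutions: Solution[] = [
  { name: "Add cache invalidation hook", impact: "high", effort: "low" },
  { name: "Rewrite payment service", impact: "high", effort: "high" },
  { name: "Tune connection pool", impact: "medium", effort: "low" },
];

// Higher impact per unit of effort ranks first.
solutions.sort(
  (a, b) => scale[b.impact] / scale[b.effort] - scale[a.impact] / scale[a.effort],
);
console.log(solutions.map((s) => s.name));
// → ["Add cache invalidation hook", "Tune connection pool", "Rewrite payment service"]
```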
## Step 7: Choose the communication structure

Choose how to present the analysis based on audience:
| Audience | Pattern | Format |
|---|---|---|
| Executive / senior | Pyramid Principle | Lead with recommendation → support with 3 arguments → evidence |
| Mixed / unfamiliar | SCR | Situation (context) → Complication (tension) → Resolution (recommendation) |
| Technical / peers | Day-1 Answer | State best hypothesis → list evidence for/against → confidence level |
| Quick update | BLUF | Bottom Line Up Front → background → details → action required |
Structure the output report using the selected pattern.

## Output template
## Analysis: [Problem Statement]
- **Type**: [root cause / decision / decomposition / creative / architecture]
- **Domain**: [Clear / Complicated / Complex / Chaotic / Confused] — [one-line justification]
- **Framework**: [chosen framework and reason]
- **Confidence**: high | medium | low
### Bias Warnings
- ⚠️ [Bias 1]: [how it might affect this analysis] → [debiasing action taken]
- ⚠️ [Bias 2]: [how it might affect this analysis] → [debiasing action taken]
### Reasoning Chain
1. [step with evidence or reasoning]
2. [step with evidence or reasoning]
3. [step with evidence or reasoning]
...
### Mental Model Cross-Check
- [Model applied]: [insight gained]
### Root Cause / Core Finding
[what the framework reveals as the fundamental issue or conclusion]
### Recommended Solutions (ranked)
1. **[Solution Name]** — Impact: high/medium/low | Effort: high/medium/low
[concrete description of what to do]
⚠️ Bias risk: [which bias might make us over/under-value this]
2. **[Solution Name]** — Impact: high/medium/low | Effort: high/medium/low
[concrete description of what to do]
3. **[Solution Name]** — Impact: high/medium/low | Effort: high/medium/low
[concrete description of what to do]
### Next Action
[single most important immediate step]
## Failure modes

| Failure Mode | Severity | Mitigation |
|---|---|---|
| Skipping bias check and jumping to framework | CRITICAL | HARD-GATE: Step 2 is mandatory — biases ARE the value-add |
| Skipping the framework and jumping to solutions | CRITICAL | Solutions without structured analysis are guesses |
| Proceeding with underspecified problem | HIGH | Step 1: restate in one sentence — if ambiguous, state interpretation |
| Producing more than 3 solutions | MEDIUM | Max 3 ranked — prioritize quality over quantity |
| Framework mismatch (5 Whys for a creative problem) | MEDIUM | Use selection table — match framework to "what is unknown" |
| Weighted Matrix with > 5 criteria | MEDIUM | Choice overload — max 5 criteria, focus on what matters |
| Pre-Mortem without debiasing strategies | MEDIUM | Pre-Mortem reveals risks — MUST include mitigation plans |
| Decomposition failing MECE test | HIGH | Every branch must be ME (no overlap) and CE (no gaps) |
| Ignoring second-order effects in recommendations | MEDIUM | Apply Second-Order Thinking: "and then what?" |
| Presenting analysis without communication structure | LOW | Step 7: match output pattern to audience |
| Using Complicated-domain tools on a Complex problem | HIGH | Step 1.5 Cynefin: Complex → probe-sense-respond, not analyze-plan-execute |
| Strawmanning the least-favored option | MEDIUM | Steel Manning: build strongest case for option you dislike before dismissing |
| Running full PESTLE on a purely technical problem | LOW | PESTLE is for macro-environment — skip for algorithm/implementation choices |
| Skipping ethics check on user-facing decisions | MEDIUM | Step 5.5: lightweight check — warnings not gates, but don't skip for stakeholder-affecting decisions |
## Cost

~500-1500 tokens input, ~800-1500 tokens output. Use Sonnet for reasoning quality; Opus is recommended for high-stakes, irreversible decisions.