From thinking-frameworks-skills
Compares named alternatives against weighted criteria to rank options transparently via scoring, weighting methods, and sensitivity analysis. Use for vendor/tool/strategy selection or trade-offs.
`npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills`

This skill uses the workspace's default tool permissions.
A decision matrix scores each option on each criterion, making subjective factors visible and comparable. It includes weighted criteria, sensitivity analysis, and clear recommendations.
Quick example:
| Option | Cost (30%) | Speed (25%) | Quality (45%) | Weighted Score |
|---|---|---|---|---|
| Option A | 8 (2.4) | 6 (1.5) | 9 (4.05) | 7.95 ← Winner |
| Option B | 6 (1.8) | 9 (2.25) | 7 (3.15) | 7.20 |
| Option C | 9 (2.7) | 4 (1.0) | 6 (2.7) | 6.40 |
Option A wins despite not being fastest or cheapest because quality matters most (45% weight).
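The table above can be reproduced with a minimal Python sketch (the option names, criteria, and weights are taken from the example; nothing here is part of the skill's own tooling):

```python
# Weighted-score calculation for the example matrix above.
# Scores use a 1-10 scale; weights are fractions summing to 1.0.
weights = {"cost": 0.30, "speed": 0.25, "quality": 0.45}

options = {
    "Option A": {"cost": 8, "speed": 6, "quality": 9},
    "Option B": {"cost": 6, "speed": 9, "quality": 7},
    "Option C": {"cost": 9, "speed": 4, "quality": 6},
}

def weighted_score(scores, weights):
    # Sum of (criterion score x criterion weight) across all criteria.
    return sum(scores[c] * w for c, w in weights.items())

ranked = sorted(options, key=lambda o: weighted_score(options[o], weights),
                reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(options[name], weights):.2f}")
# Option A: 7.95, Option B: 7.20, Option C: 6.40
```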
Copy this checklist and track your progress:
Decision Matrix Progress:
- [ ] Step 1: Frame the decision and list alternatives
- [ ] Step 2: Identify and weight criteria
- [ ] Step 3: Score each alternative on each criterion
- [ ] Step 4: Calculate weighted scores and analyze results
- [ ] Step 5: Validate quality and deliver recommendation
Step 1: Frame the decision and list alternatives
Ask user for decision context (what are we choosing and why), list of alternatives (specific named options, not generic categories), constraints or dealbreakers (must-have requirements), and stakeholders (who needs to agree). Understanding must-haves helps filter options before scoring. See Framing Questions for clarification prompts.
Step 2: Identify and weight criteria
Collaborate with user to identify criteria (what factors matter for this decision), determine weights (which criteria matter most, as percentages summing to 100%), and validate coverage (do criteria capture all important trade-offs). If user is unsure about weighting → Use resources/template.md for weighting techniques. See Criterion Types for common patterns.
Step 3: Score each alternative on each criterion
For each option, score it on each criterion using a consistent scale (typically 1-10, where 10 = best). Ask the user for scores or research objective data (cost, speed metrics) where available. Document assumptions and data sources. For complex scoring → See resources/methodology.md for calibration techniques.
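When objective data exists, raw metrics can be mapped onto the 1-10 scale mechanically instead of scored by gut feel. A hypothetical helper (the `to_score` name, the linear mapping, and the dollar figures are illustrative assumptions, not part of the skill):

```python
def to_score(value, best, worst):
    """Linearly map a raw metric onto the 1-10 scale: `best` -> 10, `worst` -> 1.

    Works whether lower or higher raw values are better, since the caller
    says which end is `best`. Out-of-range values are clamped.
    """
    frac = (value - worst) / (best - worst)
    return round(1 + 9 * max(0.0, min(1.0, frac)), 1)

# Example: monthly costs from $400 (cheapest option) to $1200 (most expensive).
to_score(400, best=400, worst=1200)   # -> 10.0
to_score(1200, best=400, worst=1200)  # -> 1.0
to_score(800, best=400, worst=1200)   # -> 5.5
```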
Step 4: Calculate weighted scores and analyze results
Calculate weighted score for each option (sum of criterion score × weight). Rank options by total score. Identify close calls (options within 5% of each other). Check for sensitivity (would changing one weight flip the decision). See Sensitivity Analysis for interpretation guidance.
Step 5: Validate quality and deliver recommendation
Self-assess using resources/evaluators/rubric_decision_matrix.json (minimum score ≥ 3.5). Present decision-matrix.md file with clear recommendation, highlight key trade-offs revealed by analysis, note sensitivity to assumptions, and suggest next steps (gather more data on close calls, validate with stakeholders).
To clarify the decision:
To identify alternatives:
To surface must-haves:
Common categories for criteria (adapt to your decision):
Financial Criteria:
Performance Criteria:
Risk Criteria:
Strategic Criteria:
Operational Criteria:
Stakeholder Criteria:
Method 1: Direct Allocation (simplest) Stakeholders assign percentages totaling 100%. Quick but can be arbitrary.
Method 2: Pairwise Comparison (more rigorous) Compare each criterion pair: "Is cost more important than speed?" Build ranking, then assign weights.
Method 3: Must-Have vs Nice-to-Have (filters first) Separate absolute requirements (pass/fail) from weighted criteria. Only evaluate options that pass must-haves.
Method 4: Stakeholder Averaging (group decisions) Each stakeholder assigns weights independently, then average. Reveals divergence in priorities.
See resources/methodology.md for detailed facilitation techniques.
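Method 4 (stakeholder averaging) can be sketched in a few lines. The stakeholder names and weights below are made-up illustrations; the useful output is the average plus a per-criterion spread that flags where priorities diverge:

```python
# Each stakeholder assigns weights independently (each set sums to 1.0).
stakeholder_weights = {
    "alice": {"cost": 0.50, "speed": 0.20, "quality": 0.30},
    "bob":   {"cost": 0.20, "speed": 0.30, "quality": 0.50},
}

criteria = ["cost", "speed", "quality"]
n = len(stakeholder_weights)

# Averaged weights to use in the matrix.
avg = {c: sum(w[c] for w in stakeholder_weights.values()) / n
       for c in criteria}

# Max-min gap per criterion: a large gap means priorities diverge
# and is worth discussing before scoring.
spread = {c: max(w[c] for w in stakeholder_weights.values())
             - min(w[c] for w in stakeholder_weights.values())
          for c in criteria}
# Here: avg = cost 0.35, speed 0.25, quality 0.40; the 0.30 gap on
# cost shows alice and bob disagree most about its importance.
```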
After calculating scores, check robustness:
1. Close calls: options within 5-10% of the winner → need more data or a second opinion
2. Dominant criteria: one criterion driving the entire decision → is its weight too high?
3. Weight sensitivity: would swapping two criterion weights flip the winner? → the decision is fragile
4. Score sensitivity: would adjusting one score by ±1 point flip the winner? → the decision is sensitive to that data point
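The weight-sensitivity check can be automated: swap each pair of weights and see whether the winner changes. A minimal sketch, reusing the quick example's numbers (not the skill's own implementation):

```python
from itertools import combinations

weights = {"cost": 0.30, "speed": 0.25, "quality": 0.45}
options = {
    "Option A": {"cost": 8, "speed": 6, "quality": 9},
    "Option B": {"cost": 6, "speed": 9, "quality": 7},
    "Option C": {"cost": 9, "speed": 4, "quality": 6},
}

def winner(w):
    # Option with the highest weighted score under weight set `w`.
    return max(options, key=lambda o: sum(options[o][c] * w[c] for c in w))

base = winner(weights)
for a, b in combinations(weights, 2):
    swapped = dict(weights)
    swapped[a], swapped[b] = swapped[b], swapped[a]
    if winner(swapped) != base:
        print(f"Fragile: swapping {a}/{b} weights flips winner to {winner(swapped)}")
# In this example, swapping the speed and quality weights flips the
# winner from Option A to Option B -- worth flagging to stakeholders.
```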
Red flags:
Technology Selection:
Vendor Evaluation:
Strategic Choices:
Hiring Decisions:
Feature Prioritization:
Skip decision matrix if:
Use instead:
Process:
Resources:
Deliverable: decision-matrix.md file with table, rationale, and recommendation