Methodology for evaluating options, comparing technologies, and making evidence-based decisions between alternatives. Use when the user needs to choose between competing approaches, libraries, or architectures with a structured comparison. Triggers when user says "compare these options", "which approach should we use", "evaluate alternatives", "help me decide between X and Y", "technology comparison", or wants a structured pros/cons/recommendation analysis.
A structured approach to evaluating options and making evidence-based decisions between alternatives. Use this methodology when the core question is "which option should we choose?" rather than "what happened?" or "how does this work?"
Use this methodology when you need to choose between competing approaches, libraries, architectures, or tools and want a structured pros/cons/recommendation analysis.
Key indicator: There are multiple viable options and you need a structured way to decide between them.
Not for: Debugging or root cause analysis (use investigation-methodology), finding what's missing from a spec (use gap-analysis), or understanding how something works (use general research).
Before comparing options, articulate what you're deciding:
- The specific decision question being answered
- Decision drivers: what matters most, in priority order
- Hard constraints any option must satisfy
- Current state: the relevant existing architecture and tools
Stopping point: You can state the decision, its drivers, and constraints in 2-3 sentences.
Cast a wide net to identify all viable options:
Sources to check:
Stopping point: You have 2-4 finalists and can explain why others were eliminated.
Evaluate each finalist against your decision drivers. For each option, document:
- Evidence-backed strengths and weaknesses
- Effort (Low/Medium/High), with rationale
- Risk (Low/Medium/High), with rationale
Stopping point: You can articulate the genuine trade-offs between finalists.
Synthesize findings into a decision:
Comparison Matrix Format:
| Criteria | Weight | Option A | Option B | Option C |
|---|---|---|---|---|
| [Driver 1] | High | [Rating] | [Rating] | [Rating] |
| [Driver 2] | Medium | [Rating] | [Rating] | [Rating] |
| [Driver 3] | Low | [Rating] | [Rating] | [Rating] |
| Effort | — | [L/M/H] | [L/M/H] | [L/M/H] |
| Risk | — | [L/M/H] | [L/M/H] | [L/M/H] |
| Overall Fit | — | [Rating] | [Rating] | [Rating] |
Stopping point: You have a clear recommendation with evidence-backed rationale.
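When the matrix gets large, it can help to reduce it to numeric scores. The sketch below is one way to do that; the weight and rating scales (3/2/1) are illustrative assumptions, not part of the methodology:

```python
# Minimal sketch: collapse a comparison matrix into weighted totals.
# The numeric scales below are illustrative assumptions.
WEIGHTS = {"High": 3, "Medium": 2, "Low": 1}
RATINGS = {"Strong": 3, "Adequate": 2, "Weak": 1}

def weighted_scores(matrix):
    """matrix: {driver: (weight, {option: rating})} -> {option: total}"""
    totals = {}
    for weight, ratings in matrix.values():
        for option, rating in ratings.items():
            totals[option] = totals.get(option, 0) + WEIGHTS[weight] * RATINGS[rating]
    return totals

matrix = {
    "Driver 1": ("High",   {"A": "Strong",   "B": "Adequate"}),
    "Driver 2": ("Medium", {"A": "Weak",     "B": "Strong"}),
    "Driver 3": ("Low",    {"A": "Adequate", "B": "Adequate"}),
}
print(weighted_scores(matrix))  # → {'A': 13, 'B': 14}
```

Treat the totals as a summary, not the decision: every rating feeding them should still trace back to evidence, and a narrow margin (as here) signals a genuine trade-off rather than a clear winner.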
Structure evaluative research outputs as:
# Investigation: [Decision Topic]
## Summary
[2-3 sentences: what decision was needed, what was evaluated, what's recommended]
## Decision Context
- **Decision**: [Clear question being answered]
- **Drivers**: [What matters most, in priority order]
- **Constraints**: [Hard requirements]
- **Current state**: [Relevant existing architecture/tools]
## Options Evaluated
### Option 1: [Name]
**What it is**: [Brief description]
**Strengths**:
- [Evidence-backed strength]
**Weaknesses**:
- [Evidence-backed weakness]
**Effort**: [Low/Medium/High] — [rationale]
**Risk**: [Low/Medium/High] — [rationale]
[Repeat for each option]
### Options Eliminated Early
- [Option X]: Eliminated because [reason]
## Comparison
| Criteria | Option 1 | Option 2 | Option 3 |
| -------- | -------- | -------- | -------- |
| ... | ... | ... | ... |
### Key Trade-offs
[The 2-3 most significant trade-offs between the top options]
## Recommendation
**Primary**: [Option name] — [Confidence: High/Medium/Low]
**Rationale**: [Why this option, tied to decision drivers and evidence]
**When to reconsider**: [Under what circumstances would you choose differently?]
## Next Steps
1. [Immediate action]
2. [Validation step]
3. [Follow-up if needed]
## References
- [Links to docs, benchmarks, repos evaluated]
Evidence over opinion: Every rating in the comparison matrix should trace back to something concrete — documentation, benchmarks, code review, or real-world usage data.
Honest about uncertainty: If you can't properly evaluate a criterion, say so. "Unable to assess without prototyping" is better than a guess.
Decision drivers first: Start from what matters, not from what's easy to compare. Don't let available data drive the criteria.
Disconfirm actively: For your recommended option, try hardest to find its weaknesses. For the runner-up, try hardest to find its strengths.
State the trade-off clearly: The best decisions acknowledge what you're giving up, not just what you're gaining.
Before concluding evaluative research:
Quick evaluation (clear winner, limited scope):
Standard evaluation (multiple good options):
Strategic evaluation (high-stakes, long-term impact):