From decision-evaluation-framework
Read all per-framework analysis fragments in a decision workspace and produce a cross-framework synthesis — agreement/disagreement matrix, score table, weighted verdict, and the open questions any individual framework couldn't resolve. Run after /decision:analyze, or invoke directly to re-synthesize.
npx claudepluginhub danielrosehill/claude-code-plugins --plugin decision-evaluation-framework
This skill uses the workspace's default tool permissions.
Aggregates per-framework fragments into one document.
A workspace folder produced by /decision:analyze, with decision.md and a populated fragments/ directory. The user may pass the path or the slug; if neither, list recent workspaces and ask.
Read decision.md (for context) and every file in fragments/.
Extract the score (0-100) from each fragment. Each framework guide instructs the subagent to end with ## Score: <N>.
Build the score table:
| Framework | Score | One-line verdict |
|---|---|---|
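Rendering the table from extracted scores is mechanical; a sketch under the assumption that each framework contributes a `(name, score, verdict)` tuple (the `score_table` name and tuple shape are illustrative, not from the skill):

```python
def score_table(rows: list[tuple[str, int, str]]) -> str:
    """Render (framework, score, one-line verdict) rows as a markdown table."""
    lines = ["| Framework | Score | One-line verdict |", "|---|---|---|"]
    lines += [f"| {name} | {score} | {verdict} |" for name, score, verdict in rows]
    return "\n".join(lines)
```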
Build the agreement matrix. Identify points where frameworks converge (e.g., "5/7 frameworks flag stakeholder-A as a blocker") and where they diverge (e.g., "MCDA prefers Option A; pre-mortem prefers Option B"). Convergence is signal; divergence is where the real work is.
Surface unresolved questions — what did multiple frameworks list as "would need to know X to decide"? These are the things to investigate before committing.
Weighted verdict. Compute a weighted average of the framework scores. By default, weight each framework equally. If the user has indicated a bundle bias (e.g., "personal" bundle suggests they care about regret-minimization more), reflect that. Always show both the equal-weighted and (if applicable) bias-weighted result.
Write synthesis.md in the workspace root. Structure:
# Synthesis: <Decision title>
## Score table
| Framework | Score | Verdict |
|---|---|---|
## Convergence
<where frameworks agree>
## Divergence
<where they disagree, and why>
## Unresolved questions
<items that need investigation before committing>
## Weighted verdict
- Equal-weighted: <score>/100, leaning <option/direction>
- <Other weightings if applicable>
## Recommendation
<one paragraph: what the integrated picture says, and what the user should do next>
## Confidence
<how robust the recommendation is — wide framework agreement vs split>
Report the synthesis path back to the user.