From darkroom
Compares architectural and technology approaches using parallel oracle evaluators, weighted criteria scoring, a comparison matrix, and ADR output. Intended for tech selection, trade-off analysis, and multi-option evaluations.
`npx claudepluginhub darkroomengineering/cc-settings`

This skill uses the workspace's default tool permissions.
Structured approach to comparing multiple solutions using parallel evaluation, weighted scoring, and ADR output.
Establish evaluation criteria with weights (must sum to 100):
| Criterion | Weight | Description |
|------------------|--------|------------------------------------|
| Performance | 25 | Runtime speed, bundle size |
| DX | 20 | Developer experience, API quality |
| Maintainability | 20 | Long-term code health |
| Ecosystem | 15 | Community, plugins, docs |
| Migration Cost | 10 | Effort to adopt |
| Type Safety | 10 | TypeScript integration quality |
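The criteria set above can be sketched as a simple mapping with a sanity check on the weights. This is a hypothetical illustration, not part of the skill itself; the criterion names and weights are taken directly from the table.

```python
# Hypothetical sketch: representing the criteria set above and
# verifying that the weights sum to 100 before evaluation begins.
criteria = {
    "Performance": 25,      # runtime speed, bundle size
    "DX": 20,               # developer experience, API quality
    "Maintainability": 20,  # long-term code health
    "Ecosystem": 15,        # community, plugins, docs
    "Migration Cost": 10,   # effort to adopt
    "Type Safety": 10,      # TypeScript integration quality
}

# Reject a criteria set whose weights do not sum to 100.
assert sum(criteria.values()) == 100, "criteria weights must sum to 100"
```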
Preset Criteria Sets:
Spawn one oracle agent per approach in a SINGLE message:
Agent(oracle, "Evaluate [Approach A] against criteria: [criteria list with weights]. Score 1-10 per criterion. Include concrete examples, code samples, and evidence.")
Agent(oracle, "Evaluate [Approach B] against criteria: [criteria list with weights]. Score 1-10 per criterion. Include concrete examples, code samples, and evidence.")
Agent(oracle, "Evaluate [Approach C] against criteria: [criteria list with weights]. Score 1-10 per criterion. Include concrete examples, code samples, and evidence.")
Each evaluator must return:
Build the comparison matrix from evaluator responses.
Produce the final output in ADR format.
## Comparison: [Decision Title]
| Criterion (Weight) | Option A | Option B | Option C |
|-------------------------|----------|----------|----------|
| Performance (25) | 8 (200) | 6 (150) | 7 (175) |
| DX (20) | 9 (180) | 7 (140) | 6 (120) |
| Maintainability (20) | 7 (140) | 8 (160) | 5 (100) |
| Ecosystem (15) | 8 (120) | 9 (135) | 4 (60) |
| Migration Cost (10) | 6 (60) | 8 (80) | 3 (30) |
| Type Safety (10) | 9 (90) | 7 (70) | 8 (80) |
| **TOTAL** | **790** | **735** | **565** |
Score format: `raw (weighted)`, where weighted = raw × weight.
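The matrix arithmetic above can be reproduced with a short sketch. This is an illustrative assumption about how the scores combine (option names and raw scores are copied from the example matrix), showing that each weighted cell is `raw * weight` and each total is the sum of an option's weighted cells.

```python
# Hypothetical sketch: computing weighted scores and per-option totals
# for the example comparison matrix above.
weights = {"Performance": 25, "DX": 20, "Maintainability": 20,
           "Ecosystem": 15, "Migration Cost": 10, "Type Safety": 10}

raw_scores = {
    "Option A": {"Performance": 8, "DX": 9, "Maintainability": 7,
                 "Ecosystem": 8, "Migration Cost": 6, "Type Safety": 9},
    "Option B": {"Performance": 6, "DX": 7, "Maintainability": 8,
                 "Ecosystem": 9, "Migration Cost": 8, "Type Safety": 7},
    "Option C": {"Performance": 7, "DX": 6, "Maintainability": 5,
                 "Ecosystem": 4, "Migration Cost": 3, "Type Safety": 8},
}

# Each total is the sum of raw * weight across all criteria.
totals = {
    option: sum(raw * weights[criterion] for criterion, raw in scores.items())
    for option, scores in raw_scores.items()
}
# totals -> {"Option A": 790, "Option B": 735, "Option C": 565}
```

Option A wins here (790), matching the TOTAL row of the matrix.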
This extends the base ADR template from `agents/planner.md` with a scoring matrix and detailed risk sections.
# ADR-NNN: [Decision Title]
## Status
Proposed
## Context
[Why this decision is needed. What problem we're solving.]
## Options Considered
1. **Option A** - [one-line summary]
2. **Option B** - [one-line summary]
3. **Option C** - [one-line summary]
## Decision
We will use **Option A** because [primary reasons].
## Scoring Summary
[Comparison matrix from above]
## Consequences
### Positive
- [benefit 1]
- [benefit 2]
### Negative
- [trade-off 1]
- [trade-off 2]
### Risks
- [risk 1] → Mitigation: [approach]
## References
- [relevant links, docs, benchmarks]
User: "Which state management should we use for this Next.js app?"
→ Define criteria (Frontend preset)
→ Agent(oracle, "Evaluate Zustand against frontend criteria...")
+ Agent(oracle, "Evaluate Jotai against frontend criteria...")
+ Agent(oracle, "Evaluate Redux Toolkit against frontend criteria...")
→ Build comparison matrix
→ Output ADR recommendation
User: "Compare Turborepo vs Nx for our monorepo"
→ Define criteria (Infrastructure preset)
→ Agent(oracle, "Evaluate Turborepo...") + Agent(oracle, "Evaluate Nx...")
→ Build comparison matrix
→ Output ADR recommendation