From lattice
Surfaces ambiguous decisions in code generation, design, and review with structured options, pros/cons, and reasoning to enable collaborative resolution instead of silent assumptions.
npx claudepluginhub techygarg/lattice --plugin lattice

This skill uses the workspace's default tool permissions.
When an AI resolves ambiguity silently, the user never knows a decision was made. Silent micro-assumptions make generated code feel "off", and undoing an assumption woven into the code costs more than making the choice upfront.
Handles decision-making tasks in Claude Code. Auto-activates on 'decision.md' matches or via direct /Decision invocation for evaluating code options and trade-offs.
Presents structured options with pros, cons, effort, risk, and reversibility trade-offs for informed decision-making between approaches. Use when choosing alternatives.
Forces adversarial reasoning on architectural choices, library selections, tool picks, and planning to expose blind spots and prevent premature commitment bias.
Most decisions are NOT ambiguous. The AI decides on its own when:

Surface a decision only when ALL three of the following are true:
Confidence test: "I considered two or more approaches, and neither is clearly better given the project context." If true, surface the decision. If false, decide and move on.
Err on the side of deciding. A confident AI that is occasionally disputable beats an uncertain AI that asks about everything. Ask only when genuinely torn and the consequences matter.
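The surfacing gate above can be sketched as a small predicate. This is an illustration only, not part of the plugin; the three condition names are assumptions drawn from the surrounding text (genuine ambiguity, consequences that matter, and being genuinely torn).

```python
from dataclasses import dataclass


@dataclass
class JudgmentCall:
    description: str
    genuinely_ambiguous: bool   # two+ approaches, neither clearly better
    consequences_matter: bool   # the choice has real downstream cost
    genuinely_torn: bool        # the AI cannot confidently pick one


def should_surface(call: JudgmentCall) -> bool:
    """Surface only when ALL three criteria hold; otherwise decide and move on."""
    return (call.genuinely_ambiguous
            and call.consequences_matter
            and call.genuinely_torn)
```

Note how any single `False` condition short-circuits to "just decide", which encodes the err-on-the-side-of-deciding default.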
When surfacing a judgment call, use this template:
Decision needed: [one-line description of what's being decided]
- Option A: [approach] — [1-line pro], [1-line con]
- Option B: [approach] — [1-line pro], [1-line con]
I lean toward [option] because [one sentence of reasoning].
Two options is the norm. Three is the maximum. No essays.
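The template can be rendered mechanically. The helper below is hypothetical, not part of the plugin; it simply mirrors the format above and enforces the two-to-three-option rule.

```python
def format_decision(description, options, lean, reason):
    """Render the decision template.

    options: list of (label, approach, pro, con) tuples --
    two is the norm, three the maximum.
    """
    assert 2 <= len(options) <= 3, "two options norm, three maximum"
    lines = [f"Decision needed: {description}"]
    for label, approach, pro, con in options:
        lines.append(f"- Option {label}: {approach} — {pro}, {con}")
    lines.append(f"I lean toward Option {lean} because {reason}.")
    return "\n".join(lines)
```

For example, `format_decision("ORM or raw SQL", [("A", "ORM", "fast to write", "hides queries"), ("B", "raw SQL", "full control", "more boilerplate")], "A", "the schema is still changing")` yields the four-line block shown above.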
Do not interrupt for every judgment call. Collect them and surface at natural checkpoints:
Escalation signal: if a single component produces more than 3 judgment calls, the project needs clearer standards. Suggest running the relevant refiner instead of asking each question individually.
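The collect-then-surface flow, including the escalation signal, might look like the following sketch. The class name, method names, and default threshold are assumptions for illustration; only the batching behavior and the >3-calls-per-component escalation rule come from the text above.

```python
from collections import defaultdict


class JudgmentCallCollector:
    """Batch judgment calls instead of interrupting on each one."""

    def __init__(self, escalation_threshold: int = 3):
        self._calls = defaultdict(list)   # component -> list of descriptions
        self._threshold = escalation_threshold

    def record(self, component: str, description: str) -> None:
        self._calls[component].append(description)

    def flush_at_checkpoint(self):
        """At a natural checkpoint, return all batched calls plus any
        components that crossed the escalation threshold (> 3 calls),
        which signals the project needs clearer standards."""
        batch = {c: list(d) for c, d in self._calls.items()}
        escalate = [c for c, d in batch.items() if len(d) > self._threshold]
        self._calls.clear()
        return batch, escalate
```

Clearing the buffer on flush keeps each checkpoint's batch independent, so a component only escalates when it generates many calls within a single stretch of work.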
When the user resolves a judgment call, record the answer via framework:context-anchoring (per-feature) or recommend running the relevant refiner (project-wide).

The protocol becomes less active as the project matures:
A well-configured project sees almost no judgment calls. If the AI still asks frequently after multiple features, the standards documents need improvement. Example: if aggregate-boundary questions keep surfacing, the DDD defaults document may not define a sizing heuristic -- run the domain-driven-design refiner to capture the team preference and eliminate the question permanently.