Orchestrates hypothesis-driven reasoning through the ADI cycle (Abduction-Deduction-Induction): generates competing hypotheses, verifies them logically, gathers empirical evidence, and produces a Design Rationale Record with trust scores. Use when facing architectural decisions, technology choices, or any question where "why this and not that" matters. Don't use for quick questions, implementation, expert debates (use /arena), or structured thinking without evidence requirements (use /think).
```shell
npx claudepluginhub izzzzzi/izteam --plugin reason
```

This skill is limited to using the following tools:
The **Reasoning Lead** guides a decision through the full ADI cycle: Abduction (hypothesize), Deduction (verify), Induction (validate). The result is a **Design Rationale Record (DRR)** — an auditable artifact explaining why this decision was made.
| Argument | Required | Description |
|---|---|---|
| `<question>` | yes | The decision question to reason through |
Determine the scope of the decision, then present:
## Decision Question
> [Restated question — precise and scoped]
| Aspect | Details |
|--------|---------|
| Bounded Context | [system boundary] |
| Decision Type | [type] |
| Key Constraints | [list] |
| Stakes | [consequences of wrong choice] |
Launching hypothesis generation...
"Generating competing hypotheses... 🔬"
Launch 3-5 reason:hypothesizer agents IN PARALLEL in a single message. Each generates ONE hypothesis from a different angle.
```
Task(
    subagent_type="reason:hypothesizer",
    prompt="## Decision Question
[Full question with context]

## Bounded Context
[System boundary and constraints]

## Your Angle: [specific perspective]

Generate ONE hypothesis for solving this decision.
Study the project first to ground your hypothesis in reality."
)
```
Angle assignment strategy:
```mermaid
flowchart TD
    Q{"Decision type?"}
    Q -->|Tech choice| TECH["Angles: performance, DX, ecosystem,<br/>maintenance cost, migration risk"]
    Q -->|Architecture| ARCH["Angles: simplicity, scalability,<br/>team familiarity, operational cost, flexibility"]
    Q -->|Trade-off| TRADE["Angles: short-term win, long-term win,<br/>risk-minimizing, innovation, pragmatic"]
    Q -->|Process| PROC["Angles: speed, quality,<br/>team autonomy, compliance, simplicity"]
```
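The angle routing above can also be read as a simple lookup table. A minimal sketch, assuming the dict keys and helper name below (both illustrative, not defined by the skill):

```python
# Illustrative mapping from decision type to hypothesizer angles,
# mirroring the flowchart. Names here are hypothetical.
ANGLES = {
    "tech_choice":  ["performance", "DX", "ecosystem", "maintenance cost", "migration risk"],
    "architecture": ["simplicity", "scalability", "team familiarity", "operational cost", "flexibility"],
    "trade_off":    ["short-term win", "long-term win", "risk-minimizing", "innovation", "pragmatic"],
    "process":      ["speed", "quality", "team autonomy", "compliance", "simplicity"],
}

def pick_angles(decision_type: str, n: int = 5) -> list[str]:
    # One hypothesizer agent per angle; 3-5 are launched in parallel.
    return ANGLES[decision_type][:n]

print(pick_angles("architecture", 3))
```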
Compile all hypotheses into a numbered list. Each starts at Assurance Level L0 (Observation).
Present to user:
## Hypotheses (all L0 — unverified)
| # | Hypothesis | Angle | Key Claim |
|---|-----------|-------|-----------|
| H1 | [name] | [angle] | [core claim] |
| H2 | [name] | [angle] | [core claim] |
| H3 | [name] | [angle] | [core claim] |
...
Moving to logical verification...
"Verifying hypotheses logically... 🔍"
Launch reason:verifier agents IN PARALLEL — one per hypothesis.
```
Task(
    subagent_type="reason:verifier",
    prompt="## Hypothesis to Verify
[Full hypothesis description]

## Bounded Context
[System boundary and constraints]

## All Hypotheses (for comparison)
[List of all hypotheses — so verifier can check for contradictions]

Verify this hypothesis LOGICALLY. Check internal consistency,
compatibility with constraints, and known failure modes.
Do NOT gather empirical evidence — only reason about it."
)
```
Update assurance levels:
## Verification Results
| # | Hypothesis | Verdict | Level | Issues |
|---|-----------|---------|-------|--------|
| H1 | [name] | PASS | L1 | — |
| H2 | [name] | FAIL | Invalid | [reason] |
| H3 | [name] | PARTIAL | L0 | [concerns] |
...
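The level updates implied by the table can be sketched as a small rule (this helper is hypothetical, inferred from the verdicts shown: PASS promotes L0 to L1, FAIL invalidates, PARTIAL leaves the level unchanged):

```python
# Illustrative promotion rule for assurance levels after logical verification.
def promote(level: str, verdict: str) -> str:
    if verdict == "FAIL":
        return "Invalid"       # hypothesis is dropped from further phases
    if verdict == "PASS" and level == "L0":
        return "L1"            # logically verified
    return level               # PARTIAL: stays at its current level

print(promote("L0", "PASS"))   # → L1
```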
If ALL hypotheses are Invalid — return to Phase 1 with adjusted angles. Maximum 1 retry.
"Gathering empirical evidence... 📊"
Launch reason:evidence-gatherer agents IN PARALLEL — one per surviving hypothesis (L0 or L1).
```
Task(
    subagent_type="reason:evidence-gatherer",
    prompt="## Hypothesis to Validate
[Full hypothesis]

## Bounded Context
[System boundary]

Gather EMPIRICAL evidence: benchmarks, case studies, production reports,
documentation, community feedback. Rate each piece of evidence."
)
```
For each piece of evidence, record its source reliability (R) and its congruence with the hypothesis (CL).

Trust formula (simplified weakest-link, WLNK):

```
Trust(hypothesis)  = min(evidence_scores)
evidence_score     = R × (1 - congruence_penalty)
congruence_penalty = max(0, 0.5 - CL)   # only applies when CL < 0.5
```
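A minimal runnable sketch of this computation, assuming R and CL are scored on a 0-1 scale (function names are illustrative):

```python
# Simplified WLNK trust: a hypothesis is only as trusted as its weakest evidence.
# R  = source reliability (assumed 0-1); CL = congruence with the hypothesis (assumed 0-1).
def evidence_score(r: float, cl: float) -> float:
    congruence_penalty = max(0.0, 0.5 - cl)  # only bites when CL < 0.5
    return r * (1 - congruence_penalty)

def trust(evidence: list[tuple[float, float]]) -> float:
    # min() is the weakest link: one poor piece of evidence caps the score.
    return min(evidence_score(r, cl) for r, cl in evidence)

# Strong benchmark, decent case study, weakly congruent blog post:
print(trust([(0.9, 0.8), (0.7, 0.6), (0.8, 0.3)]))  # the (0.8, 0.3) item dominates
```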
Update assurance levels:
## Final Ranking
| Rank | Hypothesis | Level | Trust | Key Evidence |
|------|-----------|-------|-------|-------------|
| 1 | [name] | L2 | 0.85 | [strongest source] |
| 2 | [name] | L1 | 0.62 | [source] |
| 3 | [name] | Invalid | — | [why failed] |
Read references/drr-template.md and fill in the template.
Save to docs/decisions/YYYY-MM-DD-[topic-slug]-drr.md
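The naming convention can be sketched as a small helper (the slugify logic here is an illustrative assumption, not part of the skill):

```python
import datetime
import re

def drr_path(topic: str) -> str:
    # Hypothetical helper: lowercases the topic and collapses runs of
    # non-alphanumerics into hyphens to form the [topic-slug] part.
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    today = datetime.date.today().isoformat()  # YYYY-MM-DD
    return f"docs/decisions/{today}-{slug}-drr.md"

print(drr_path("Postgres vs MongoDB"))
```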
"Reasoning complete. DRR saved to
docs/decisions/.... Winner: [hypothesis name] (L[level], trust: [score]). Review the record — the human makes the final call."
| Situation | Action |
|---|---|
| Hypothesizer fails | Proceed with N-1 hypotheses. Minimum 2 required. |
| All hypotheses invalid after verification | Retry Phase 1 once with broader angles. If still all invalid, report deadlock. |
| Evidence gatherer fails | Mark hypothesis evidence as "incomplete". Do not auto-promote to L2. |
| No hypothesis reaches L1+ | Report all as L0 with available evidence. Flag low confidence in DRR. |
| docs/decisions/ does not exist | Create it before saving. |
| Save fails | Output the DRR directly to user. |
Diagrams: always use Mermaid flowchart TD, never ASCII art.