Meta-cognitive reasoning engine that decomposes complex problems using the DSVSR protocol (Decompose-Solve-Verify-Synthesize-Reflect) with explicit confidence calibration. Use when the user asks to "break down this problem", "analyze with confidence scores", "decompose and verify", "run DSVSR", or "reason through this step by step".
Decompose, solve with confidence, verify, synthesize, and reflect — until the answer earns its confidence score.
| Signal | Route |
|---|---|
| 2+ complexity indicators (multi-domain, high-stakes, ambiguous, 50+ words) | Full DSVSR |
| Simple factual or single-domain question | Fast path — answer directly with implicit confidence |
| User specifies confidence threshold | Full DSVSR with custom target |
Problem → DECOMPOSE → SOLVE → VERIFY → SYNTHESIZE → REFLECT → Answer + Metadata
             ↑                                          │
             └──────────────── if < target ─────────────┘
Break the problem into independent, atomic sub-problems.
Output format:
Sub-problems identified: [N]
├── SP-1: [description] (domain: [X], depends on: none)
├── SP-2: [description] (domain: [X], depends on: SP-1)
└── SP-3: [description] (domain: [X], depends on: none)
Trade-off: Over-decomposition creates unnecessary overhead; under-decomposition hides complexity. Target 3-7 sub-problems. If you have more than 7, group related ones.
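The sub-problem tree above — IDs, domains, and dependencies — can be modeled as a small data structure, with dependencies determining solve order. A minimal sketch; the class and function names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class SubProblem:
    sp_id: str
    description: str
    domain: str
    depends_on: list[str] = field(default_factory=list)

def solve_order(subproblems: list[SubProblem]) -> list[str]:
    """Return sub-problem IDs in an order where dependencies come first."""
    remaining = {sp.sp_id: set(sp.depends_on) for sp in subproblems}
    order: list[str] = []
    while remaining:
        # Sub-problems whose dependencies are all resolved are ready.
        ready = [i for i, deps in remaining.items() if not deps]
        if not ready:
            raise ValueError("circular dependency between sub-problems")
        for i in sorted(ready):
            order.append(i)
            del remaining[i]
            for deps in remaining.values():
                deps.discard(i)
    return order
```

Independent sub-problems (SP-1 and SP-3 in the example above) can be solved in any order; SP-2 must wait for SP-1.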
Answer each sub-problem with explicit confidence scoring.
| Score | Meaning | Evidence Required |
|---|---|---|
| 0.95-1.0 | Certain | Verifiable facts, direct evidence, mathematical proof |
| 0.85-0.94 | High | Strong evidence, expert consensus |
| 0.70-0.84 | Moderate | Reasonable inference, partial evidence |
| 0.50-0.69 | Low | Educated guess, significant assumptions |
| Below 0.50 | Speculative | Flag as hypothesis |
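The score bands in the table translate directly into a lookup. A sketch with an assumed function name:

```python
def confidence_band(score: float) -> str:
    """Map a calibrated confidence score to its band from the table above."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    if score >= 0.95:
        return "certain"      # verifiable facts, direct evidence, proof
    if score >= 0.85:
        return "high"         # strong evidence, expert consensus
    if score >= 0.70:
        return "moderate"     # reasonable inference, partial evidence
    if score >= 0.50:
        return "low"          # educated guess, significant assumptions
    return "speculative"      # flag as hypothesis
```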
Per sub-problem:
SP-1: [Answer]
Confidence: [0.XX]
Justification: [Why this confidence level]
Would increase to [0.XX] if: [What additional info would help]
Rules:
Cross-check every sub-answer against four dimensions:
Bias signals to watch: Anchoring (first answer feels "obviously right"), confirmation (only supporting evidence found), availability (overweighting recent examples), authority (accepting a framework because it's popular, not because it fits).
After verification, update confidence scores. If scores don't change, verification was superficial.
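That last rule can be enforced mechanically: compare scores before and after verification, and flag a pass that left every score untouched. An illustrative sketch (function name is an assumption):

```python
def verification_was_superficial(before: dict[str, float],
                                 after: dict[str, float]) -> bool:
    """A verification pass that changes no confidence score is suspect."""
    return all(abs(after[sp] - before[sp]) < 1e-9 for sp in before)
```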
Combine verified sub-answers into a coherent response.
Global = sum(sub_confidence * sub_importance) / sum(sub_importance).

Conflict resolution priority: Verified facts > Logical deductions > Expert consensus > Reasonable inference > Flagged speculation.
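The global confidence formula is an importance-weighted mean of the sub-problem scores. A direct sketch (function name is an assumption):

```python
def global_confidence(scores: dict[str, float],
                      importance: dict[str, float]) -> float:
    """Importance-weighted mean of sub-problem confidence scores."""
    total = sum(importance.values())
    if total == 0:
        raise ValueError("importance weights must not all be zero")
    return sum(scores[sp] * importance[sp] for sp in scores) / total
```

With scores {SP-1: 0.9, SP-2: 0.8} and weights {SP-1: 2, SP-2: 1}, the global score is (0.9·2 + 0.8·1) / 3 ≈ 0.87 — the high-importance sub-problem dominates.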
If global confidence < target (default 0.95), loop back: revisit the weakest sub-problems and re-run SOLVE and VERIFY before synthesizing again.
Reflection questions: What is the single biggest weakness? If I'm wrong, what's the most likely cause? What would a disagreeing expert say? Is there a simpler explanation I'm overlooking?
Before producing any answer, scan:
Always include a completeness statement:
Sources reviewed: [thread messages and attachments consulted]
Information gaps: [what was NOT available that would improve confidence]
Skip DSVSR when the question has fewer than 2 complexity signals. Answer directly with clarity and relevant caveats.
Complexity signals: Multi-domain, 50+ words, high ambiguity, high stakes, multiple dependencies.
Route specialized sub-problems to domain experts:
| Domain | Route |
|---|---|
| Mathematical/statistical | math-reasoning subagent |
| Code analysis/debugging | code-analysis subagent |
| Data analysis | data-analysis subagent |
| General knowledge | handle inline |
Each delegate returns: answer + confidence + evidence. Task-engine aggregates.
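The delegation table above is essentially a domain-to-subagent lookup with an inline fallback. An illustrative sketch; the dictionary keys and function name are assumptions:

```python
# Domain keys are illustrative shorthands for the table rows above.
SUBAGENT_BY_DOMAIN = {
    "math": "math-reasoning",
    "code": "code-analysis",
    "data": "data-analysis",
}

def delegate(domain: str) -> str:
    """Route a sub-problem to its specialist subagent, or handle inline."""
    return SUBAGENT_BY_DOMAIN.get(domain, "inline")
```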
Every substantial response includes:
---
REASONING METADATA:
- Global confidence: [0.XX]
- Sub-problem confidence: [SP-1: 0.XX, SP-2: 0.XX, ...]
- Sources consulted: [thread messages, attachments, knowledge]
- Weaknesses identified: [if any]
- Rigidity level: [exploratory | analytical | executive]
- Verification status: [all checks passed | N flags raised]
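The metadata footer above can be rendered from the protocol's outputs. A sketch covering a subset of the fields (function name and signature are assumptions):

```python
def reasoning_metadata(global_conf: float, sub_conf: dict[str, float],
                       rigidity: str, flags: int) -> str:
    """Render part of the REASONING METADATA footer in the format above."""
    subs = ", ".join(f"{sp}: {c:.2f}" for sp, c in sub_conf.items())
    status = "all checks passed" if flags == 0 else f"{flags} flags raised"
    return ("REASONING METADATA:\n"
            f"- Global confidence: {global_conf:.2f}\n"
            f"- Sub-problem confidence: [{subs}]\n"
            f"- Rigidity level: {rigidity}\n"
            f"- Verification status: {status}")
```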
Rigidity levels (as reported in the metadata block): exploratory, analytical, executive.
[input-analyst] → [task-engine] → [excellence-loop] → User
Higher input quality from input-analyst raises baseline confidence, reducing DSVSR iterations.
Before delivering a DSVSR response, confirm:
references/dsvsr-protocol.md — Worked examples per stage
references/confidence-calibration.md — Calibration methodology and common mis-calibration patterns
references/complexity-heuristics.md — When to activate full DSVSR vs fast path
references/recursion-protocol.md — Thread and attachment scanning protocol
references/agent-delegation-patterns.md — Subagent routing and confidence aggregation

Author: Javier Montaño | Last updated: 2026-03-18