From empire-research
Runs a systematic verification pass on generated content to catch hallucinations, confabulations, and unsupported assertions. Use it as a separate step after generation or research whenever accuracy matters.

Install: `npx claudepluginhub marcoskichel/empire --plugin empire-research`

This skill uses the workspace's default tool permissions.
## Separate-Pass Requirement
Verification MUST be a separate pass from generation: an LLM in generation mode cannot reliably catch its own hallucinations, because the same process that produced an error will confirm it. Complete generation first, THEN run a verification pass with an adversarial stance, external grounding, and fresh attention on each claim. Failure states and interventions:
| State | Symptoms | Risk | Intervention |
|---|---|---|---|
| F1: No Verification Pass | Content generated and delivered without fact-checking | Hallucinations pass through undetected | Run verification pass before delivery |
| F2: Self-Verification | Same pass asked to "check your facts" while generating | False confidence — errors confirmed by same process that created them | Complete generation first, then separate pass |
| F3: Memory-Based Verification | Claims checked against "what I know" without external sources | Hallucinations verified by hallucinated knowledge | Require explicit source citation per claim |
| F4: Selective Verification | Only some claims checked; others assumed correct | Unchecked claims may contain errors | Systematic extraction of ALL verifiable claims |
| F5: Verification Complete | All claims extracted, each checked against sources, confidence levels assigned | — | Deliver with verification status |
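As a concrete illustration, here is a minimal Python sketch of the F5-compliant flow: generation finishes first, then a separate pass extracts claims and checks each one against external sources. The `extract_claims` and `check_against_sources` functions are toy stand-ins for illustration, not a real implementation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    claim: str
    status: str        # Confirmed / Partially supported / Not found / Contradicted
    source: str | None

def extract_claims(draft: str) -> list[str]:
    # Toy extractor: treats every sentence as a candidate claim.
    # A real pass applies the categorization table below.
    return [s.strip() for s in draft.split(".") if s.strip()]

def check_against_sources(claim: str, sources: dict[str, str]) -> Finding:
    # Toy grounding check: "Confirmed" only if some source text literally
    # contains the claim. Never checked against memory (avoids F3).
    for name, text in sources.items():
        if claim.lower() in text.lower():
            return Finding(claim, "Confirmed", name)
    return Finding(claim, "Not found", None)

def verification_pass(draft: str, sources: dict[str, str]) -> list[Finding]:
    # Runs strictly AFTER generation (avoids F1 and F2); every extracted
    # claim is checked, none assumed correct (avoids F4).
    return [check_against_sources(c, sources) for c in extract_claims(draft)]
```

The point is structural: `verification_pass` takes only the finished draft and the sources as input, so it knows nothing about how the draft was generated.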
Extract every verifiable statement from the content: numbers, dates, names, quotes, attributions, and conclusions drawn from evidence. Skip statements with no checkable factual content, such as clearly marked opinions. Then categorize each claim by verifiability:
| Category | Description | Strategy |
|---|---|---|
| Verifiable-Hard | Numbers, dates, names, quotes | Must match source exactly |
| Verifiable-Soft | General facts, processes, mechanisms | Source should substantially support |
| Attribution | "X said...", "According to..." | Verify source exists and said something similar |
| Inference | Conclusions drawn from evidence | Verify premises, assess reasoning |
| Opinion-as-Fact | Subjective claim stated as objective | Flag for rewording or qualification |
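One way to make the categories operational is to attach one to each extracted claim. The dataclass below is a sketch of that bookkeeping, with the table's verification strategies recorded as comments; the example claims are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    VERIFIABLE_HARD = "Verifiable-Hard"   # must match source exactly
    VERIFIABLE_SOFT = "Verifiable-Soft"   # source should substantially support
    ATTRIBUTION = "Attribution"           # source must exist and say something similar
    INFERENCE = "Inference"               # verify premises, assess reasoning
    OPINION_AS_FACT = "Opinion-as-Fact"   # flag for rewording or qualification

@dataclass
class Claim:
    text: str
    category: Category

# Hypothetical examples of categorized claims:
claims = [
    Claim('"Revenue grew 47.3% in 2023"', Category.VERIFIABLE_HARD),
    Claim('"According to Smith, caching dominates latency"', Category.ATTRIBUTION),
]
```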
Phase 1: Extract — list all verifiable claims from the content.
Phase 2: Check — for each claim, attempt source verification.
### Claim: "[exact claim text]"
- Category: [Verifiable-Hard/Soft/Attribution/Inference]
- Source checked: [specific source]
- Finding: [Confirmed/Partially supported/Not found/Contradicted]
- Confidence: [High/Medium/Low]
- Notes: [discrepancies, qualifications needed]
Verification outcomes:
| Outcome | Action |
|---|---|
| Confirmed | Keep, cite source |
| Partially supported | Qualify or narrow claim |
| Not found | Mark unverified, consider removing |
| Contradicted | Remove or correct |
| Outdated | Update or add recency caveat |
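A sketch of how the outcome table might drive edits mechanically. The outcome strings match the table exactly, and anything else fails loudly rather than slipping through:

```python
ACTIONS = {
    "Confirmed":           "Keep, cite source",
    "Partially supported": "Qualify or narrow claim",
    "Not found":           "Mark unverified, consider removing",
    "Contradicted":        "Remove or correct",
    "Outdated":            "Update or add recency caveat",
}

def action_for(outcome: str) -> str:
    # KeyError on an unknown outcome is deliberate: an unrecognized
    # finding must never default to "keep".
    return ACTIONS[outcome]
```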
Phase 3: Assign confidence:
| Level | Criteria |
|---|---|
| High | All key claims verified; no contradictions found |
| Medium | Most claims verified; some unverified but plausible |
| Low | Significant claims unverified; some corrections needed |
| Unreliable | Multiple contradictions found; major revision needed |
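The table's criteria are qualitative; the thresholds in this sketch are illustrative assumptions, not part of the skill:

```python
def overall_confidence(statuses: list[str]) -> str:
    contradicted = statuses.count("Contradicted")
    unverified = statuses.count("Not found")
    if contradicted > 1:
        return "Unreliable"  # multiple contradictions: major revision needed
    if contradicted or unverified * 2 > len(statuses):
        return "Low"         # significant claims unverified or corrected
    if unverified:
        return "Medium"      # mostly verified; some unverified but plausible
    return "High"            # all key claims verified, no contradictions
```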
Watch for these specific hallucination types:
Plausible Fabrication — specific details that sound right but don't exist. Examples: fake paper citations, non-existent statistics, invented quotes. Detection: verify specific claims against primary sources.
Confident Extrapolation — reasonable inference stated as established fact. Examples: "Studies show..." (no specific study), "Experts agree..." (no citation). Detection: require specific source for any claim of external support.
Temporal Confusion — mixing information from different time periods. Examples: old statistics presented as current, defunct organizations described as active. Detection: check dates on sources.
Attribution Drift — correct information attributed to wrong source. Examples: quote assigned to wrong person, finding attributed to wrong study. Detection: verify attribution specifically, not just content.
Amalgamation — combining details from multiple sources into one fictional source. Examples: invented study combining real findings from separate papers. Detection: verify the specific source exists and contains all attributed claims.
Precision Inflation — adding false precision to vague knowledge. Examples: "approximately 47.3%" when only "about half" is supported. Detection: check if source actually provides that level of precision.
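Some of these patterns leave surface signatures that cheap heuristics can surface before the real source check. The regexes below are illustrative assumptions targeting Confident Extrapolation and Precision Inflation; a match only flags a candidate, which still requires verification against the actual source.

```python
import re

# Claims of external support with no named source ("Confident Extrapolation").
VAGUE_SUPPORT = re.compile(r"\b(studies show|experts agree|research shows)\b", re.I)

# Decimal-precision percentages, a common "Precision Inflation" signature.
FALSE_PRECISION = re.compile(r"\b\d+\.\d+%")

def red_flags(text: str) -> list[str]:
    flags = []
    if VAGUE_SUPPORT.search(text):
        flags.append("Unsourced appeal to authority: demand a specific citation")
    if FALSE_PRECISION.search(text):
        flags.append("Suspiciously precise figure: confirm the source gives that precision")
    return flags

print(red_flags("Studies show approximately 47.3% of users agree."))  # both flags fire
```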
Deliver fact-checked content with verification status:
## [Content Title]
[Content body]
---
### Verification Status
**Overall Confidence:** [High/Medium/Low]
**Verified Claims:**
- [Claim] — Source: [citation]
**Unverified Claims:**
- [Claim] — No source found; treat as uncertain
**Corrections Made:**
- [Original claim] → [Corrected claim] (Source: [citation])
**Caveats:**
- [Limitations or qualifications]
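A sketch of assembling that output format from per-claim findings; the dict field names (`claim`, `status`, `source`) are assumptions carried over from the earlier sketches.

```python
def render_report(title: str, body: str, findings: list[dict],
                  corrections: list[str], caveats: list[str],
                  confidence: str) -> str:
    verified = [f for f in findings if f["status"] == "Confirmed"]
    unverified = [f for f in findings if f["status"] == "Not found"]
    lines = [f"## {title}", "", body, "", "---", "### Verification Status",
             f"**Overall Confidence:** {confidence}", "", "**Verified Claims:**"]
    lines += [f"- {f['claim']} — Source: {f['source']}" for f in verified]
    lines += ["", "**Unverified Claims:**"]
    lines += [f"- {f['claim']} — No source found; treat as uncertain" for f in unverified]
    lines += ["", "**Corrections Made:**"] + [f"- {c}" for c in corrections]
    lines += ["", "**Caveats:**"] + [f"- {c}" for c in caveats]
    return "\n".join(lines)
```

Finally, avoid these anti-patterns in the verification pass: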
| Pattern | Problem | Fix |
|---|---|---|
| "I'm confident" | Confidence ≠ accuracy | Require source citation |
| "To the best of my knowledge" | Memory is unreliable | Check external source |
| "Research shows" | Which research? | Cite specific source |
| Verify-while-generating | Same pass can't catch own errors | Separate passes mandatory |
| Check one, assume rest | Partial verification | Check all or mark unchecked |