Verify claims in generated output against sources. Use as a separate pass AFTER content generation to catch hallucinations. Critical constraint: verification cannot be reliably combined with generation in a single pass.
npx claudepluginhub joshuarweaver/cascade-content-creation-misc-1 --plugin jwynia-agent-skills-1

This skill uses the workspace's default tool permissions.
Systematic verification of claims in generated content. Designed to catch hallucinations, confabulations, and unsupported assertions.
The Fundamental Problem: LLMs generate plausible-sounding content by predicting what should come next. This same mechanism produces hallucinations—confident statements that feel true but aren't. An LLM in generation mode cannot reliably catch its own hallucinations, because the same predictive process that produced an error will also confirm it.
The Solution: Verification must be a separate cognitive pass with explicit source requirements, systematic claim extraction, and honest confidence assessment.
Symptoms: Content generated and delivered without any fact-checking. Risk: Hallucinations pass through undetected. Intervention: Run verification pass before delivery. Extract claims, check each against sources.
Symptoms: Same pass asked to "check your facts" while generating. Risk: False confidence—errors confirmed by same process that created them. Intervention: Complete generation first, then run separate verification pass with explicit source requirements.
Symptoms: Claims checked against "what I know" without external sources. Risk: Hallucinations verified by hallucinated knowledge. Intervention: Require explicit source citation for each verified claim. If no source available, mark as unverified.
Symptoms: Only some claims checked; others assumed correct. Risk: Unchecked claims may contain errors. Intervention: Systematic extraction of ALL verifiable claims. Check each, or explicitly mark unchecked items.
Symptoms: All claims extracted, each checked against sources, confidence levels assigned. Indicators: Source citations present, unverified claims marked, confidence explicit.
Extract every verifiable statement from the content.
Claim types to extract: anything checkable against a source, including numbers, dates, names, quotes, attributions, and inferences drawn from evidence.
What to skip: statements clearly framed as opinion or speculation, which need qualification rather than verification.
Categorize each claim by verifiability:
| Category | Description | Verification Strategy |
|---|---|---|
| Verifiable-Hard | Numbers, dates, names, quotes | Must match source exactly |
| Verifiable-Soft | General facts, processes, mechanisms | Source should substantially support |
| Attribution | "X said...", "According to..." | Verify source exists and said something similar |
| Inference | Conclusions drawn from evidence | Verify premises, assess reasoning |
| Opinion-as-Fact | Subjective claim stated as objective | Flag for rewording or qualification |
For each claim, attempt verification:
## Claim Verification Log
### Claim 1: "[exact claim text]"
- **Category:** [Verifiable-Hard/Soft/Attribution/Inference]
- **Source checked:** [specific source]
- **Finding:** [Confirmed/Partially supported/Not found/Contradicted]
- **Confidence:** [High/Medium/Low]
- **Notes:** [discrepancies, qualifications needed]
### Claim 2: ...
Verification outcomes:
| Outcome | Meaning | Action |
|---|---|---|
| Confirmed | Source explicitly supports claim | Keep, cite source |
| Partially supported | Source supports part, not all | Qualify or narrow claim |
| Not found | No source located | Mark unverified, consider removing |
| Contradicted | Source says opposite | Remove or correct |
| Outdated | Source is dated; current state may differ | Update or add recency caveat |
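A minimal sketch of the outcome-to-action mapping in the table above, assuming a plain dictionary lookup is enough (names are hypothetical):

```python
# Recommended action for each verification outcome (mirrors the table above).
ACTIONS = {
    "Confirmed": "Keep, cite source",
    "Partially supported": "Qualify or narrow claim",
    "Not found": "Mark unverified, consider removing",
    "Contradicted": "Remove or correct",
    "Outdated": "Update or add recency caveat",
}

def action_for(finding: str) -> str:
    """Look up the recommended action; unknown findings get flagged for review."""
    return ACTIONS.get(finding, "Unknown finding: re-check manually")
```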
Assign overall confidence to the content:
| Level | Criteria |
|---|---|
| High | All key claims verified; no contradictions found |
| Medium | Most claims verified; some unverified but plausible |
| Low | Significant claims unverified; some corrections needed |
| Unreliable | Multiple contradictions found; major revision needed |
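One way to mechanize the table above is a heuristic that rolls per-claim findings up to an overall level; the thresholds below are assumptions, not prescribed by the skill:

```python
def overall_confidence(findings: list[str]) -> str:
    """Map per-claim verification findings to an overall confidence level."""
    contradicted = findings.count("Contradicted")
    unverified = findings.count("Not found")
    if contradicted >= 2:
        return "Unreliable"  # multiple contradictions: major revision needed
    if contradicted == 1 or unverified > len(findings) // 2:
        return "Low"         # significant claims unverified or corrections needed
    if unverified > 0:
        return "Medium"      # most verified; some unverified but plausible
    return "High"            # all claims verified, no contradictions
```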
Common hallucination types to watch for:
Pattern: Specific details that sound right but don't exist. Examples: Fake paper citations, non-existent statistics, invented quotes. Detection: Verify specific claims against primary sources.
Pattern: Reasonable inference stated as established fact. Examples: "Studies show..." (no specific study), "Experts agree..." (no citation). Detection: Require specific source for any claim of external support.
Pattern: Mixing information from different time periods. Examples: Old statistics presented as current, defunct organizations described as active. Detection: Check dates on sources, verify current status.
Pattern: Correct information attributed to wrong source. Examples: Quote assigned to wrong person, finding attributed to wrong study. Detection: Verify attribution specifically, not just content.
Pattern: Combining details from multiple sources into one fictional source. Examples: Invented study that combines real findings from separate papers. Detection: Verify the specific source exists and contains all attributed claims.
Pattern: Adding false precision to vague knowledge. Examples: "Approximately 47.3%" when only "about half" is supported. Detection: Check if source actually provides that level of precision.
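A cheap first-pass detector for the vague-authority pattern is a phrase scan; the phrase list below is illustrative and no substitute for actually checking sources:

```python
import re

# Phrases that claim external support without naming a source.
VAGUE_SUPPORT = [
    r"\bstudies show\b",
    r"\bexperts agree\b",
    r"\bresearch shows\b",
    r"\bit is well known\b",
]

def flag_vague_support(text: str) -> list[str]:
    """Return each vague-authority pattern found in the text (case-insensitive)."""
    return [p for p in VAGUE_SUPPORT if re.search(p, text, re.IGNORECASE)]
```

A hit means the claim needs a specific citation, not that it is false.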
Before releasing fact-checked content, confirm every extracted claim has a logged outcome and that unverified claims are explicitly marked.
| Research Phase | Fact-Check Role |
|---|---|
| During research | Verify claims in sources themselves |
| After synthesis | Verify that synthesis accurately represents sources |
| Before delivery | Final pass to catch hallucinations in output |
Handoff pattern: generation completes first, then this skill receives the finished draft for a separate verification pass before delivery. The required verification depth depends on context:
| Context | Verification Level |
|---|---|
| Published content | Full verification required |
| Decision support | Key claims must be verified |
| Educational content | High accuracy expected |
| Casual conversation | Light verification acceptable |
| Creative fiction | N/A (different standards) |
| Pattern | Problem | Fix |
|---|---|---|
| "I'm confident" | Confidence ≠ accuracy | Require source citation |
| "To the best of my knowledge" | Memory is unreliable | Check external source |
| "Generally speaking" | Vagueness hides uncertainty | Be specific or mark unverified |
| "Research shows" | Which research? | Cite specific source |
| Verify-while-generating | Same pass can't catch own errors | Separate passes mandatory |
| Check one, assume rest | Partial verification | Check all or mark unchecked |
When delivering fact-checked content:
## [Content Title]
[Content body with claims]
---
### Verification Status
**Overall Confidence:** [High/Medium/Low]
**Verified Claims:**
- [Claim 1] — Source: [citation]
- [Claim 2] — Source: [citation]
**Unverified Claims:**
- [Claim 3] — No source found; treat as uncertain
**Corrections Made:**
- [Original claim] → [Corrected claim] (Source: [citation])
**Caveats:**
- [Any limitations or qualifications]
This skill writes primary output to files so work persists across sessions.
Before doing any other work, determine where output goes:
- Check context/output-config.md in the project.
- Default to explorations/fact-check/ or a sensible location for this project.
- Follow context/output-config.md if a context network exists; write fact-check-output.md at project root otherwise.

For this skill, persist:
| Goes to File | Stays in Conversation |
|---|---|
| Verification status report | Discussion of sources |
| Claim-by-claim results | Clarifying questions |
| Confidence assessment | Verification process |
| Corrections and caveats | Real-time feedback |
Pattern: {content-name}-factcheck-{date}.md
Example: research-synthesis-factcheck-2025-01-15.md
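The naming pattern above can be sketched as a small helper (the function name is hypothetical):

```python
from datetime import date

def factcheck_filename(content_name: str, d: date) -> str:
    """Build a report filename following {content-name}-factcheck-{date}.md."""
    return f"{content_name}-factcheck-{d.isoformat()}.md"
```

ISO dates (YYYY-MM-DD) keep reports sorting chronologically in a directory listing.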
This skill extends the research cluster with post-generation verification. Distinct from research (which gathers information) and operates as quality control on output.
Related: skills/research/SKILL.md (pre-generation), references/doppelganger/ (truth hierarchies)