Verify claims in a research report against cited sources using cogni-claims. Works on cogni-research project directories (auto-detects draft, sources, and existing claims) or on any standalone markdown report with inline citations. Use when the user asks to "verify report", "verify claims", "check sources", "fact-check the report", "verify the research", "run claims verification", "check the citations", or wants to re-verify a report after editing. Also use when the user says "verify" after a research-report run completes, or when research-report's Phase 6 summary recommends running verify-report.
From cogni-research. Install with `npx claudepluginhub cogni-work/insight-wave --plugin cogni-research`.
Claims verification is the quality gate that separates evidence-backed research from plausible-sounding prose. This skill runs in a fresh context window — separate from the research pipeline — so that claims extraction, source verification, and the review loop get the full attention they deserve without competing for context with research data.
When this skill loads:
Mode A (cogni-research project): the draft is output/draft-v1.md or output/report.md. A project directory contains entity directories (00-sub-questions/, 01-contexts/, etc.) and .metadata/project-config.json. This skill loads only the draft and source entities — not sub-questions or contexts — keeping the context window lean for verification work.

Load:
- .metadata/project-config.json for report type, topic, language
- .metadata/execution-log.json for phase completion state
- output/draft-v{N}.md (latest draft, even if not finalized) or output/report.md (finalized report, for re-verification after user edits)
- 02-sources/data/ to build the source lookup (URL → entity). This is the only research data loaded — sub-questions and contexts are NOT loaded

Mode B (standalone markdown): when the user provides a path to a markdown file outside a cogni-research project:
- Create .verify-report/{slug}/ as a sibling to the markdown file
- Inside it, create 03-report-claims/data/, .metadata/, and cogni-claims/
- Set standalone_mode = true in .metadata/project-config.json

Read: references/standalone-mode.md for citation detection patterns and workspace layout.
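The workspace setup above can be sketched as follows. The slug derivation (lowercased filename stem) is an assumption; references/standalone-mode.md defines the real layout:

```python
import json
import re
from pathlib import Path

def init_standalone_workspace(markdown_path: str) -> Path:
    """Create .verify-report/{slug}/ as a sibling of the markdown file.

    Directory names come from the steps above; the slug rule is an
    assumption for illustration.
    """
    md = Path(markdown_path)
    slug = re.sub(r"[^a-z0-9]+", "-", md.stem.lower()).strip("-")
    root = md.parent / ".verify-report" / slug
    for sub in ("03-report-claims/data", ".metadata", "cogni-claims"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    config = root / ".metadata" / "project-config.json"
    config.write_text(json.dumps({"standalone_mode": True}, indent=2))
    return root
```

Using mkdir(parents=True, exist_ok=True) makes the setup idempotent, so re-running verification on the same file reuses the existing workspace.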
If previous verification artifacts exist, present the user with options rather than silently re-running or silently resuming:
No prior verification (no 03-report-claims/data/ content, no cogni-claims/claims.json):
→ Proceed to Phase 1.
Claims extracted but not submitted (03-report-claims/data/ has entities, cogni-claims/claims.json absent):
→ Resume from Phase 2 (submission).
Verification incomplete (cogni-claims/claims.json exists with status: pending claims):
→ Resume from Phase 2 mid-point (verification).
Previous verification complete (cogni-claims/claims.json exists with completed results):
Previous verification found
- Claims verified: N | Confirmed: N | Deviations: N
- Verified at: {timestamp}
Options:
- re-verify — clear previous results, re-extract claims from current draft, full verification
- inspect — show previous results without re-running (opens cogni-claims dashboard)
- continue — keep existing results, proceed to review loop
Handle the user's choice:
- re-verify → clear 03-report-claims/data/ contents and cogni-claims/claims.json, then proceed to Phase 1
- inspect → Skill(cogni-claims:claims, mode=dashboard, working_dir=<project_path>), then stop
- continue → keep existing results, proceed to the review loop

Spawn the claim-extractor agent to identify verifiable factual claims in the draft:
Task(claim-extractor,
PROJECT_PATH=<project_path>,
DRAFT_PATH=<resolved draft path>,
DRAFT_VERSION=<N>)
The claim-extractor creates report-claim entities in 03-report-claims/data/, each linking a factual statement to its cited source URL. It prioritizes statistical claims, attribution claims, causal claims, and definitional claims — in that order.
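That priority order can be sketched as a sort key. The claim dicts and their claim_type field are hypothetical illustrations, not the real entity schema:

```python
# Priority order used by the extractor: statistical first, then
# attribution, causal, definitional. Unknown types sort last.
CLAIM_PRIORITY = ["statistical", "attribution", "causal", "definitional"]

def prioritize_claims(claims):
    """Order extracted claims by type priority (hypothetical dict shape)."""
    rank = {t: i for i, t in enumerate(CLAIM_PRIORITY)}
    return sorted(claims, key=lambda c: rank.get(c.get("claim_type"), len(CLAIM_PRIORITY)))
```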
After extraction, report to the user:
Extracted N verifiable claims from the draft (N skipped — unsourced or general knowledge).
Read: references/claims-integration.md for the cogni-claims submission protocol.
Collect report-claim entities from 03-report-claims/data/ and submit as a batch:
Skill(cogni-claims:claims, mode=submit,
working_dir=<project_path>,
claims=[...extracted claims from report-claim entities...],
submitted_by="cogni-research/verify-report")
Skill(cogni-claims:claims, mode=verify,
working_dir=<project_path>)
cogni-claims dispatches claim-verifier agents (one per unique source URL) that fetch each source, compare claims against actual content, and detect 5 deviation types: misquotation, unsupported_conclusion, selective_omission, data_staleness, source_contradiction.
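The five deviation types form a closed set. A minimal sketch of representing them; the names come from the list above, while the Enum itself is illustrative:

```python
from enum import Enum

class DeviationType(Enum):
    """The five deviation types a claim-verifier agent can report."""
    MISQUOTATION = "misquotation"
    UNSUPPORTED_CONCLUSION = "unsupported_conclusion"
    SELECTIVE_OMISSION = "selective_omission"
    DATA_STALENESS = "data_staleness"
    SOURCE_CONTRADICTION = "source_contradiction"
```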
After verification, read cogni-claims/claims.json and update report-claim entities:
- set verification_status to verified / deviated / source_unavailable
- set deviation_type and deviation_severity if deviated
- set claims_submission_id to the cogni-claims claim ID

Present verification results to the user before proceeding to automated review. This ensures the user has visibility into what was verified and can steer corrections.
Read {PROJECT_PATH}/cogni-claims/claims.json for verification results, then present AskUserQuestion:

Claims Verification Results
Verified: N | Confirmed: N | Deviations: N | Sources unavailable: N
Deviations found:
- [claim statement] — [deviation_type] ([severity]): [explanation]
- ...
Options:
- proceed — pass deviations to reviewer + revisor for automated correction
- fix: 1, 3 — flag specific claims for mandatory correction
- drop: 2 — remove specific claims from the report entirely
- accept — mark report as verified, finalize without revision
- inspect: 2 — open cogni-claims inspect mode for claim 2 (detailed source comparison in browser)
How would you like to proceed?
Handle the user's choice:
- proceed → continue to Phase 4 with all deviations as reviewer input
- fix: N, M → add the flagged claims to a mandatory-fix list passed to the reviewer
- drop: N → add to a drop list; the revisor will remove these claims from the report
- accept → skip Phase 4, proceed directly to Phase 5 (finalization)
- inspect: N → Skill(cogni-claims:claims, mode=inspect, claim_id=<id>), then re-present the options

Record the decision in .metadata/user-claims-review.json:
{
"reviewed_at": "<ISO timestamp>",
"total_claims": 25,
"confirmed": 20,
"deviated": 4,
"source_unavailable": 1,
"user_action": "proceed|fix|drop|accept",
"fix_claims": ["claim-id-1", "claim-id-3"],
"drop_claims": ["claim-id-2"]
}
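Applying the recorded decisions can be sketched as partitioning claims by the fix and drop lists. This is a hypothetical helper, not part of cogni-claims, and the claim dicts with an "id" key are an assumed shape:

```python
def partition_claims(claims, review):
    """Split claim IDs according to the recorded user decisions.

    `review` is the parsed user-claims-review.json object shown above.
    Returns (must_fix, to_drop, unchanged) lists of claim IDs.
    """
    fix = set(review.get("fix_claims", []))
    drop = set(review.get("drop_claims", []))
    must_fix, to_drop, unchanged = [], [], []
    for claim in claims:
        cid = claim["id"]
        if cid in drop:
            to_drop.append(cid)   # drop wins over fix if both list a claim
        elif cid in fix:
            must_fix.append(cid)
        else:
            unchanged.append(cid)
    return must_fix, to_drop, unchanged
```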
This phase runs only if the user chose proceed or fix in Phase 3. The reviewer receives the draft + claims data + user decisions — no research data, keeping context focused.
Read: references/review-criteria.md for the scoring rubric (shared with research-report).
Maximum 3 iterations. Each iteration:
Task(reviewer,
PROJECT_PATH=<project_path>,
DRAFT_PATH=<current draft path>,
CLAIMS_DASHBOARD=<project_path>/cogni-claims/claims.json,
USER_CLAIMS_REVIEW=<project_path>/.metadata/user-claims-review.json,
REVIEW_ITERATION=N,
OUTPUT_LANGUAGE=<output_language>)
The reviewer scores the draft on 5 structural dimensions (completeness, coherence, source diversity, depth, clarity) and multiplies by the claims verification rate. It flags high/critical deviations as mandatory fixes and applies user override decisions.
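The scoring combination can be sketched as follows, assuming equal weighting across the five dimensions and a plain multiplicative penalty; the authoritative rubric is in references/review-criteria.md:

```python
def composite_score(dimensions: dict, confirmed: int, total: int) -> float:
    """Mean of the five structural scores, scaled by the verification rate.

    dimensions: completeness, coherence, source_diversity, depth, clarity,
    each in [0, 1]. Equal weighting is an assumption for illustration.
    """
    structural = sum(dimensions.values()) / len(dimensions)
    rate = confirmed / total if total else 0.0
    return round(structural * rate, 2)
```

A draft that reads well but verifies poorly is penalized hard: five perfect structural scores with 20 of 25 claims confirmed still caps the score at 0.8.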
Task(revisor,
PROJECT_PATH=<project_path>,
DRAFT_PATH=<current draft path>,
VERDICT_PATH=".metadata/review-verdicts/v{N}.json",
NEW_DRAFT_VERSION=<N+1>,
OUTPUT_LANGUAGE=<output_language>,
MARKET=<market>)
After revision:
- update the report-claim entities in 03-report-claims/data/ (revised claims may differ)

When verdict = "accept" or the iteration count reaches 3: proceed to Phase 5.
Write the verified report to output/report.md (overwrite if exists) and update .metadata/execution-log.json:
- phase_5_review.claims_verification: actual verification stats (not "deferred" or "skipped")
- phase_5_review.verification_rate: N.NN
- phase_5_review.review_iterations: N
- phase_5_review.final_score: N.NN
- phase_5_review.verified_by: "verify-report"

Report to the user:

Verification Complete
- Claims: N extracted, N confirmed, N deviated (N fixed), N sources unavailable
- Verification rate: X.XX
- Review iterations: N, final score: X.XX
- Verified report: output/report.md
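A minimal sketch of the execution-log update, assuming the log is a plain JSON object keyed by phase (the real log format may carry more structure):

```python
import json
from pathlib import Path

def record_verification(project_path: str, stats: dict) -> None:
    """Merge verification stats into .metadata/execution-log.json.

    `stats` holds the phase_5_review fields listed above. Existing log
    content outside phase_5_review is preserved.
    """
    log_path = Path(project_path, ".metadata", "execution-log.json")
    log = json.loads(log_path.read_text()) if log_path.exists() else {}
    log.setdefault("phase_5_review", {}).update(stats)
    log_path.parent.mkdir(parents=True, exist_ok=True)
    log_path.write_text(json.dumps(log, indent=2))
```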
| Scenario | Recovery |
|---|---|
| cogni-claims not installed | Cannot proceed — this skill requires cogni-claims. Tell the user to install the cogni-claims plugin |
| All source URLs unreachable | Report results with the source_unavailable count. Suggest the user run /claims cobrowse for interactive recovery, or check URLs manually |
| Claim extraction produces 0 claims | The draft may lack inline citations. Suggest re-running research-report writer with citation requirements |
| Review loop reaches max (3) | Accept current draft with quality warning |
| Project directory not found | Ask user for explicit path |
| Reference | Read When |
|---|---|
| references/claims-integration.md | Phase 2 — cogni-claims submission + verification protocol |
| references/standalone-mode.md | Phase 0 Mode B — standalone markdown verification |
| references/review-criteria.md | Phase 4 — understanding review scoring (shared with research-report) |
Note: references/review-criteria.md is a symlink to ../research-report/references/review-criteria.md — both skills share the same scoring rubric.