Verifies factual accuracy of documents against codebase and git history: extracts claims, checks sources, corrects inaccuracies in place, adds summary. Targets reports/plans; auto-detects recent HTML or takes path.
From vision-powers: `npx claudepluginhub leejuoh/claude-code-zero --plugin vision-powers`
Verify the factual accuracy of a document against the actual codebase and git history. Extracts verifiable claims, checks each against source, corrects inaccuracies in place, and adds a verification summary.
This is not a re-review. It does not second-guess analysis, opinions, or design judgments. It does not change the document's structure or organization. It is a fact-checker — it verifies that the data presented matches reality, corrects what doesn't, and leaves everything else alone.
Determine what to verify from $1:

- If $1 is a path to a document (`.html`, `.md`, or any text document) → verify that file.
- If $1 is empty → auto-detect the most recent report:

ls -t ${CLAUDE_PLUGIN_DATA}/reports/*.html | head -1

If no reports are found, inform the user and stop.

Document type detection — auto-detect from page content to adjust the verification strategy:
| Document Type | Detection | Verification Focus |
|---|---|---|
| diff-visual report | Contains "Diff Visual" in title/heading | Verify against the git ref the review was based on |
| plan-visual report | Contains "Plan Visual" in title/heading | Verify file references, names, architecture claims |
| project-recap report | Contains "Project Recap" in title/heading | Re-run git commands, verify activity narrative |
| agent-extension-visual report | Contains plugin analysis markers | Verify plugin structure, file paths, feature descriptions |
| Markdown document | .md extension | Verify file references, function/type names, behavior descriptions |
| Other | Fallback | Extract and verify whatever factual claims about code it contains |
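The detection rules above can be sketched as a case-insensitive grep over the page's title and headings. This is a self-contained demo; the sample heading text is illustrative:

```shell
# Demo: classify a report by the marker strings in its title/heading
file=$(mktemp)
printf '<h1>Project Recap: 2024-06</h1>\n' > "$file"
if grep -qi "Diff Visual" "$file"; then type="diff-visual"
elif grep -qi "Plan Visual" "$file"; then type="plan-visual"
elif grep -qi "Project Recap" "$file"; then type="project-recap"
else type="other"; fi
echo "type: $type"
rm -f "$file"
```

Falling through to `other` keeps the skill usable on arbitrary documents rather than failing on unknown report types.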
Determine the output language for the verification summary:
- If the user passed `--lang <code>` (e.g., `--lang ko`, `--lang fr`, `--lang zh`) → use that language. Any language code is valid.

After determining the target file, check for a companion feedback.json:

- If the user passed `--feedback path/to/feedback.json` → use that file.
- Otherwise, check `~/Downloads/feedback.json` (macOS default download location). Verify that `report_path` matches the target file. If multiple `feedback*.json` files exist (e.g., `feedback (1).json`), use the most recent one.

When feedback.json is present, adjust the verification strategy:
In the Phase 5 report, include a feedback-driven summary:

Feedback-guided verification:
- {N} sections flagged by user
- {N} issues confirmed and corrected
- {N} issues not reproduced (user concern was unfounded)
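The companion-file discovery described above can be sketched as follows, assuming feedback.json carries a `report_path` field as described. This is a self-contained demo: a temp directory stands in for `~/Downloads`, and the file contents are illustrative:

```shell
# Demo: pick the most recent feedback*.json whose report_path
# matches the target report.
downloads=$(mktemp -d)
target="reports/recap.html"
printf '{"report_path": "reports/other.html"}' > "$downloads/feedback.json"
sleep 1   # distinct mtimes so ls -t ordering is stable
printf '{"report_path": "reports/recap.html"}' > "$downloads/feedback (1).json"
# Newest first; only trust feedback generated for this exact report
chosen=$(ls -t "$downloads"/feedback*.json | while IFS= read -r f; do
  if grep -q "\"report_path\": \"$target\"" "$f"; then
    basename "$f"
    break
  fi
done)
echo "using: ${chosen:-none}"
```

Matching on `report_path` first, and recency second, is what prevents a stale download from a different report being applied to the wrong file.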
Why: Systematic extraction prevents cherry-picking. Every verifiable claim must be identified before verification begins.
Read the target file. Extract every verifiable factual claim into 5 categories:
Skip subjective analysis: opinions, design judgments, readability assessments, severity ratings, recommendations. These aren't verifiable facts.
Why: Each claim category requires a different verification method. Using the wrong method (e.g., Grep for quantitative claims) produces false confirmations.
For each extracted claim, go to the actual source:
- Naming claims — Glob + Read
- Quantitative claims — Bash git commands: run `git diff --stat`, `git log`, and `git diff --name-status`, and compare the output against the document's numbers; use `wc -l` for line counts
- Behavioral claims — Read source files: read the ref version (`git show <ref>:file`) and the working-tree version to verify before/after claims
- Structural claims — Grep + Read
- Temporal claims — Git commands: re-run `git log` to verify the activity narrative

Classify each claim:
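A quantitative claim such as "1 file changed" can be checked directly against git's own output. A self-contained sketch (the throwaway repo and the claim are illustrative):

```shell
# Demo: verify a "files changed" count in a throwaway repo
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "line1" > a.txt
git add a.txt
git commit -qm "base"
echo "line2" >> a.txt
# The document claims "1 file changed"; compare against reality:
changed=$(git diff --name-only | wc -l | tr -d ' ')
echo "files changed: $changed"
```

Counting `--name-only` lines rather than parsing `--stat` prose avoids false mismatches from locale or formatting differences.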
Why: Surgical corrections preserve the document's structure and style. Over-editing risks breaking HTML layout or changing the author's voice.
Use the Edit tool for surgical corrections:
Do correct:
Do NOT change:
If a section contains a factual error, fix only the factual part. If a section is fundamentally wrong (not just a detail error), rewrite that section's content while preserving the surrounding HTML/markdown structure.
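A naming-claim check can be sketched like this: if the document names a function, grep the source for it before editing, and find the real name when it is absent. The function name and source file below are illustrative:

```shell
# Demo: the document says validateAuth(), the source says otherwise
src=$(mktemp -d)
cat > "$src/auth.go" <<'EOF'
package auth

func verifyAuth() bool { return true }
EOF
if grep -rq "validateAuth" "$src"; then
  verdict="confirmed"
else
  verdict="correction needed"
fi
echo "$verdict"
grep -rh "func " "$src"   # shows the actual name: verifyAuth
```

The grep for the real declaration gives the exact replacement string, so the Edit can swap only the identifier and leave the surrounding sentence untouched.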
Why: Transparency — readers can see what was checked, what changed, and what couldn't be verified.
Insert a verification summary into the document.
For HTML files — insert a verification section matching the page's existing design:
<section id="verification-summary" class="ve-card" style="--i: {next-index}">
<h2>Verification Summary</h2>
<div class="kpi-grid">
<div class="kpi-card kpi-card--info">
<span class="kpi-value">{total}</span>
<span class="kpi-label">Claims Checked</span>
</div>
<div class="kpi-card kpi-card--success">
<span class="kpi-value">{confirmed}</span>
<span class="kpi-label">Confirmed</span>
</div>
<div class="kpi-card kpi-card--danger">
<span class="kpi-value">{corrected}</span>
<span class="kpi-label">Corrected</span>
</div>
<div class="kpi-card kpi-card--warning">
<span class="kpi-value">{unverifiable}</span>
<span class="kpi-label">Unverifiable</span>
</div>
</div>
<details>
<summary>Corrections Made</summary>
<ul>
<li>{description of each correction with file:line reference}</li>
</ul>
</details>
<details>
<summary>Unverifiable Claims</summary>
<ul>
<li>{claim that could not be verified and why}</li>
</ul>
</details>
</section>
Place the verification section as the last content section, before </main> or the closing layout wrapper.
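Finding that insertion point can be sketched with awk, which emits the section on the line just before the closing tag. The minimal page and the `...` placeholder body are illustrative:

```shell
# Demo: emit the summary section immediately before </main>
html=$(mktemp)
printf '<main>\n<section>existing content</section>\n</main>\n' > "$html"
result=$(awk '/<\/main>/ { print "<section id=\"verification-summary\">...</section>" } { print }' "$html")
printf '%s\n' "$result"
```

Printing before the matched line keeps `</main>` as the final wrapper, so the page's layout styles still apply to the new section.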
For Markdown files — append at the end:
## Verification Summary
| Metric | Count |
|--------|-------|
| Claims Checked | {total} |
| Confirmed | {confirmed} |
| Corrected | {corrected} |
| Unverifiable | {unverifiable} |
### Corrections Made
- {description of each correction}
### Unverifiable Claims
- {claim and reason}
`<section>` tags.

`~/Downloads/feedback.json` may be from a completely different report. Always verify the `report_path` field matches the target file before using feedback data.

If the document says `validateAuth()` but the actual function is `verifyAuth()`, that's a correction. But don't change diagram layout or styling.

Output a summary to the user:
Fact-check complete: {file path}
{total} claims checked
{confirmed} confirmed
{corrected} corrected
{unverifiable} unverifiable
{If corrections were made, list the top 3-5 most significant corrections}
{If nothing needed correction, note that verification confirms accuracy}