Use when someone says "verify context file", "check if AGENTS.md is correct", "validate context accuracy", "audit CLAUDE.md", "check documentation truth", "are the docs accurate", or mentions "context verification" or "documentation audit". Treats all existing content as potentially incorrect, extracts every technical claim, and verifies each against the actual codebase and web sources.
From the `context` plugin. Install: `npx claudepluginhub masseater/claude-code-plugin --plugin context`

This skill uses the workspace's default tool permissions.
references/claim-extraction-patterns.md
Treat context files as entirely untrustworthy. Extract every technical claim, verify each against the actual codebase and external sources, and silently fix anything that is wrong.
$ARGUMENTS
Every statement in the file might be wrong. Prove it correct with evidence, or flag it as incorrect.
Never trust a claim simply because it "sounds reasonable" or "matches training data". Treat internal knowledge as equally untrustworthy — only evidence from the actual codebase and authoritative web sources counts.
```
Parent Process
+-- Determine target files (single file or project scan)
+-- For each file:
|   +-- Read file & split into sections
|   +-- Extract claims per section
|   +-- 8 subagents in parallel per batch
|   |   +-- Agent N: Verify claims in section N
|   +-- Auto-apply all fixes
+-- Done
```
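The fan-out step above can be sketched as a simple batching loop. This is an illustrative sketch under assumed names, not the skill's actual implementation; `batch_sections` only shows how sections would be grouped so that at most 8 subagents run per batch.

```python
def batch_sections(sections, batch_size=8):
    """Group sections into batches of at most batch_size,
    so at most 8 verification subagents run in parallel per batch."""
    return [sections[i:i + batch_size]
            for i in range(0, len(sections), batch_size)]

# Example: 11 sections -> two batches (8 + 3), matching the "batch by 8" rule.
batches = batch_sections(list(range(11)))
```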
If $ARGUMENTS specifies a file path: verify that single `.md` file.

If $ARGUMENTS is empty (project-wide mode): scan `**/AGENTS.md`, `.claude/rules/**/*.md`, and `**/CLAUDE.md`. When splitting a file into sections, `#` inside code blocks is not treated as a heading.

Before launching subagents, extract concrete, verifiable claims from each section. A "claim" is any statement that can be checked against reality.
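Splitting a file into sections at headings, while ignoring `#` lines inside fenced code blocks, can be sketched as follows. The helper is hypothetical, not part of the skill itself:

```python
def split_sections(markdown: str):
    """Split markdown into sections at heading lines, treating '#'
    inside fenced code blocks as plain content, not headings."""
    sections, current, in_fence = [], [], False
    for line in markdown.splitlines():
        if line.lstrip().startswith("`" * 3):
            in_fence = not in_fence  # entering or leaving a fenced block
        if line.startswith("#") and not in_fence and current:
            sections.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current))
    return sections
```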
| Category | Example | How to Verify |
|---|---|---|
| File/directory existence | "plugins/ contains plugin code" | Glob for the path |
| Command behavior | "bun run check runs lint" | Read package.json scripts |
| Tool/version reference | "Uses Biome for linting" | Check config files, package.json |
| Configuration | "Hooks are in hooks.json" | Glob/Read the file |
| Workflow description | "Pre-commit runs security check" | Read lefthook.yml or equivalent |
| Cross-file reference | "CLAUDE.md references AGENTS.md" | Read the referenced file |
| External tool/API | "tsgo is native TypeScript 7.x" | WebSearch for current status |
| Structural claim | "5 skills in context plugin" | Count actual skill directories |
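As one concrete instance of the table above, a file/directory-existence claim reduces to a glob check. This is a minimal sketch with assumed argument and verdict shapes, not the skill's real verification code:

```python
from pathlib import Path

def verify_file_exists(claim_path: str, root: str = ".") -> str:
    """Check a file/directory-existence claim with a glob.
    Returns a verdict string: evidence found -> VERIFIED,
    no evidence either way -> UNVERIFIED."""
    matches = list(Path(root).glob(claim_path))
    return "VERIFIED" if matches else "UNVERIFIED"
```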
Launch up to 8 subagents simultaneously. If there are more than 8 sections, process them in batches of 8.
Each subagent receives the following prompt:
You are a fact-checker for technical documentation. Verify whether claims are TRUE or FALSE.
IMPORTANT: Do NOT trust your own knowledge. Only evidence from the actual codebase (Grep/Glob/Read) and web searches (WebSearch) counts. If you cannot find evidence, the claim is UNVERIFIED, not "probably true".
## Target Section
- **File**: {file-path}
- **Section**: {number}/{total} — {heading}
- **Lines**: {start}-{end}
## Section Content
{content}
## Claims to Verify
{numbered list of claims with verification methods}
## Verification Protocol
For EACH claim:
1. **Search the codebase**: Use Grep/Glob/Read to find evidence
2. **Search the web** (if claim involves external tools/versions): Use WebSearch
3. **Record evidence**: Exact file paths, line numbers, or URLs found
4. **Determine verdict**:
- VERIFIED: Evidence confirms the claim
- FALSE: Evidence contradicts the claim
- OUTDATED: Was true but no longer accurate
- UNVERIFIED: Cannot find evidence either way
## Output Format
For each claim:
**Claim {N}**: "{claim text}"
- **Verdict**: {VERIFIED|FALSE|OUTDATED|UNVERIFIED}
- **Evidence**: {file paths, line numbers, or URLs}
- **Correction** (if FALSE/OUTDATED): {correct statement}
If any claims are FALSE or OUTDATED, provide:
**Proposed Fix**:
{Updated section content with corrections applied}
**Changes**:
- {change 1}
- {change 2}
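The verdict logic and the per-claim output format above can be sketched together. This is an illustrative sketch, not the subagent's actual code; the boolean evidence summary and field names are assumptions:

```python
def determine_verdict(found: bool, supports: bool,
                      was_true_before: bool = False) -> str:
    """Map gathered evidence onto the four verdicts from the protocol."""
    if not found:
        return "UNVERIFIED"   # no evidence either way
    if supports:
        return "VERIFIED"     # evidence confirms the claim
    return "OUTDATED" if was_true_before else "FALSE"

def render_claim(n: int, text: str, verdict: str,
                 evidence: str, correction: str = None) -> str:
    """Format one claim's result in the subagent output format."""
    lines = [
        f'**Claim {n}**: "{text}"',
        f"- **Verdict**: {verdict}",
        f"- **Evidence**: {evidence}",
    ]
    # Correction line only accompanies FALSE/OUTDATED verdicts.
    if verdict in ("FALSE", "OUTDATED") and correction:
        lines.append(f"- **Correction**: {correction}")
    return "\n".join(lines)
```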
Agent tool settings:

- `subagent_type`: "general-purpose"
- `description`: "Verify section {number}: {heading}"

Automatically apply all corrections. No user confirmation. No reporting.
- `@filepath` external references: verify the referenced file exists, but do not expand it.
- `#` inside code blocks: not headings.
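Checking `@filepath` references for existence without expanding them can be sketched as follows. The `@` prefix convention comes from the rule above; the helper and its regex are hypothetical:

```python
import re
from pathlib import Path

def check_at_references(text: str, root: str = "."):
    """Find @filepath references and report whether each target exists,
    without reading (expanding) the referenced files."""
    refs = re.findall(r"@([\w./-]+)", text)
    return {ref: (Path(root) / ref).is_file() for ref in refs}
```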