Verify artifact quality by running coherence lenses (logical consistency) and factual verification tracks (external accuracy). Designed to be loaded by a freshly spawned subagent or nested Claude instance that reads the artifact cold. Produces a findings file — does not fix issues. Use when: after completing a research report, after spec scope freeze, on-demand quality check, verify claims, fact-check, review accuracy, cross-check assertions, post-completion verification for any artifact with evidence-backed claims.
From shared: `npx claudepluginhub inkeep/team-skills --plugin shared`
This skill uses the workspace's default tool permissions.
references/coherence-lenses.md
references/factual-tracks.md
You are an audit agent. Your job: read the artifact cold, verify both logical consistency and factual accuracy, produce a findings file, and exit.
You produce findings only. You do not edit the artifact. You do not fix issues. The parent handles resolution.
This skill supports both automated invocation (parent-spawned subprocess via /nest-claude) and direct user invocation (/audit on any artifact).
Prerequisites:
- evidence/ directory exists alongside the artifact (or the evidence path is provided). If no evidence files exist, coherence lenses L4 and L7 are limited to what's inline.
- /explore (for T1) and /research (for T3) are available when those tracks are applicable
- meta/ directory at the output path exists or will be created by the agent before writing

Create these tasks at the start of execution:
1. Audit: Intake — read artifact, evidence, refresh codebase
2. Audit: Reader pass — end-to-end intuitive read
3. Audit: Claim extraction — identify all verifiable claims
4. Audit: Coherence lenses — run 7 lenses
5. Audit: Factual tracks — run applicable verification tracks
6. Audit: Write findings
git pull (or equivalent) to ensure you are verifying against the latest code — not the state from when the artifact was written.

Read the artifact end-to-end as the intended audience would — without stopping to cross-reference evidence or apply lenses. Note anything that feels off, surprising, contradictory, or overconfident. Don't rationalize — if something reads strangely, mark it.
This intuitive pass catches gestalt issues that systematic lens-by-lens analysis misses. Findings from this pass feed into Phase 4 (coherence lenses) as additional signals.
Scan the artifact for every verifiable claim.
Focus on load-bearing claims — anything the artifact's conclusions or design relies on.
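The extraction pass can be modeled as building a simple claim ledger. A minimal sketch in Python — the field names and sample claims are illustrative, not prescribed by this skill:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str            # the claim as quoted from the artifact
    location: str        # section where it appears
    load_bearing: bool   # does a conclusion or design decision rest on it?
    tracks: list = field(default_factory=list)  # candidate tracks, e.g. ["T3", "T4"]

claims = [
    Claim("Library X caches responses by default", "Background", True, ["T3", "T4"]),
    Claim("The report was drafted in March", "Intro", False, ["T5"]),
]

# Load-bearing claims get priority in the verification phases.
load_bearing = [c for c in claims if c.load_bearing]
```

Recording the candidate tracks at extraction time makes the Phase 5 dispatch a simple grouping step rather than a second read of the artifact.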
Load: references/coherence-lenses.md
Run all 7 lenses against the artifact, incorporating signals from the Phase 2 reader pass. Each lens has specific scan patterns, common causes, and typical resolution guidance in the reference file.
| # | Lens | What it checks |
|---|---|---|
| L1 | Cross-finding contradictions | Do claims across sections logically conflict? |
| L2 | Confidence-prose misalignment | Does prose certainty match evidence confidence labels? |
| L3 | Missing conditionality | Are unconditional claims actually conditional (version-bound, config-dependent)? |
| L4 | Evidence-synthesis fidelity | Does the synthesis faithfully represent the evidence? Spot-check pivotal claims. |
| L5 | Summary coherence | Does the summary accurately reflect the detailed findings? |
| L6 | Stance consistency | Is the document's chosen stance (factual vs. prescriptive) applied uniformly? |
| L7 | Inline source attribution | Can a reader assess credibility of quantitative claims without opening evidence files? |
Execution guidance:
Load: references/factual-tracks.md
Run applicable tracks against the extracted claims, dispatching in parallel where possible. Run every track whose scope covers at least one extracted claim; skip a track only when no extracted claims fall within its scope.
| # | Track | Tool | Scope |
|---|---|---|---|
| T1 | Own codebase | /explore skill | Verify claims about system behavior, patterns, blast radius against current code |
| T2 | OSS repos | Direct source reads | Check ~/.claude/oss-repos/ for cloned repos. Read types, interfaces, source. Fall back to T3 if not cloned. |
| T3 | 3P dependencies | /research skill or web search | Verify claims about dependency capabilities, types, behavior — scoped to the artifact's scenario |
| T4 | Web verification | Web search | Targeted verification of version-pinned claims, changelogs, breaking changes, deprecation notices |
| T5 | External claims | Web search or /research | Spot-check factual claims about external systems, ecosystem patterns, prior art |
Track priority: T2 (source code) > T3 (docs/web) for dependency claims. T4 is primary for version-specific and ecosystem claims without local source.
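The parallel dispatch and the T2-over-T3 fallback can be sketched together. The runner bodies and repo names below are hypothetical placeholders; only the priority logic mirrors the rules above:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def pick_track(candidates: list, oss_root: Path, repo: str) -> str:
    """Prefer local source (T2) over docs/web (T3) for dependency claims;
    fall back to T3 when the repo is not cloned under oss_root."""
    if "T2" in candidates and not (oss_root / repo).is_dir():
        return "T3"
    return candidates[0]

def run_track(track_id: str, claims: list) -> list:
    # Placeholder: a real runner would verify each claim in its scope.
    return [f"{track_id}: {c}" for c in claims]

applicable = {"T1": ["uses retry middleware"], "T4": ["v2.1 removed the flag"]}
with ThreadPoolExecutor() as pool:
    futures = {t: pool.submit(run_track, t, cs) for t, cs in applicable.items()}
    results = {t: f.result() for t, f in futures.items()}
```

Dispatching tracks concurrently matters because T3–T5 are network-bound; the slowest web lookup, not the sum of all lookups, bounds the phase.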
Write a single findings file to the output path using the format below. Ensure the meta/ directory exists before writing (create it if needed).
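The "ensure meta/ exists" step amounts to creating the parent directory before the write. A minimal sketch — the output path and file name are stand-ins, not values defined by this skill:

```python
from pathlib import Path
import tempfile

findings_markdown = "# Audit Findings\n"          # the rendered findings file
output_root = Path(tempfile.mkdtemp())            # stand-in for the real output path
out = output_root / "meta" / "audit-findings.md"  # hypothetical file name
out.parent.mkdir(parents=True, exist_ok=True)     # creates meta/ if it is missing
out.write_text(findings_markdown)
```

`exist_ok=True` makes the call idempotent, so the same code works whether or not the parent already created meta/.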
You are done after writing findings. Do not edit the artifact. Do not propose fixes inline. Exit.
# Audit Findings
**Artifact:** [artifact path]
**Audit date:** YYYY-MM-DD
**Total findings:** N (H high, M medium, L low)
---
## High Severity
### [H] Finding 1: <Short declarative description>
**Category:** COHERENCE | FACTUAL
**Source:** <lens L1-L7 or track T1-T5>
**Location:** <section(s) in the artifact>
**Issue:** <what's wrong — be specific>
**Current text:** "<quote the problematic passage>"
**Evidence:** <what evidence or external reality shows — quote or cite>
**Status:** CONTRADICTED | STALE | INCOHERENT | UNVERIFIABLE
**Suggested resolution:** <specific proposed change or investigation needed>
---
### [H] Finding 2: ...
---
## Medium Severity
...
## Low Severity
...
## Confirmed Claims (summary)
Brief summary of claims that checked out, grouped by track/lens. Not a full enumeration — just enough to show coverage.
## Unverifiable Claims
Claims that could not be confirmed or denied, with what was checked.
Order findings by severity (High first), then by artifact section order within each severity level.
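This ordering is a two-key sort: severity rank first, then the finding's position in the artifact. A sketch, assuming each finding carries a `section_index` noted during extraction (the sample findings are illustrative):

```python
SEVERITY_RANK = {"High": 0, "Medium": 1, "Low": 2}

findings = [
    {"severity": "Low",  "section_index": 1, "title": "minor imprecision"},
    {"severity": "High", "section_index": 3, "title": "contradicted claim"},
    {"severity": "High", "section_index": 1, "title": "stale version pin"},
]

# Sort by severity first, then by where the finding appears in the artifact.
findings.sort(key=lambda f: (SEVERITY_RANK[f["severity"]], f["section_index"]))
```

Sorting on a tuple key keeps the within-severity order tied to the artifact's own structure, so a reader can walk the findings and the artifact side by side.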
| Severity | Definition |
|---|---|
| High | Could change the reader's decision, invalidate a requirement, affect scope, or materially alter understanding of a key finding |
| Medium | Misleading or imprecise but doesn't change the core answer. Reader might be confused but wouldn't decide differently |
| Low | Minor inconsistency or imprecision a careful reader would notice |
| Status | Meaning | Source |
|---|---|---|
| CONFIRMED | Verified from primary source. No action needed. | Either (used during execution; confirmed claims summarized in output, not listed individually) |
| CONTRADICTED | Evidence or external reality shows the claim is wrong | Factual tracks |
| STALE | Was true when written but codebase/dependency/ecosystem has changed | Factual tracks |
| INCOHERENT | Logically conflicts with another claim in the same artifact, or prose doesn't match evidence | Coherence lenses |
| UNVERIFIABLE | Cannot confirm or deny from accessible sources | Either |