# formal-verify
Synthesize outputs from multiple AI models into a comprehensive, verified assessment. Use when: (1) User pastes feedback/analysis from multiple LLMs (Claude, GPT, Gemini, etc.) about code or a project, (2) User wants to consolidate model outputs into a single reliable document, (3) User needs conflicting model claims resolved against actual source code. This skill verifies model claims against the codebase, resolves contradictions with evidence, and produces a more reliable assessment than any single model.
`npx claudepluginhub petekp/agent-skills --plugin literate-guide`

This skill uses the workspace's default tool permissions.
Combine outputs from multiple AI models into a verified, comprehensive assessment by cross-referencing claims against the actual codebase.
Models hallucinate and contradict each other. The source code is the source of truth. Every significant claim must be verified before inclusion in the final assessment.
1. Parse each model's output into discrete claims, tagging each claim with its source model.
2. Group semantically equivalent claims under a single canonical phrasing, tracking which models mentioned each one (a bookkeeping sketch follows this list).
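A minimal Python sketch of the bookkeeping this implies; the `Claim` structure and the text-based grouping key are assumptions of this sketch, since real grouping is a semantic judgment rather than string matching:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                                       # canonical phrasing
    sources: set[str] = field(default_factory=set)  # models that made it

def merge_claims(raw: list[tuple[str, str]]) -> list[Claim]:
    """Group equivalent claims and track which models mentioned each.

    `raw` holds (model_name, claim_text) pairs. Keying on normalized
    text is a crude stand-in for semantic grouping.
    """
    grouped: dict[str, Claim] = {}
    for model, text in raw:
        key = " ".join(text.lower().split())  # normalize case/whitespace
        grouped.setdefault(key, Claim(text=text)).sources.add(model)
    return list(grouped.values())

claims = merge_claims([
    ("Claude", "The auth middleware doesn't check token expiry"),
    ("GPT-4", "The auth middleware doesn't check token expiry"),
    ("Gemini", "Passwords are hashed with MD5"),
])
# -> two Claim records; the first carries sources {"Claude", "GPT-4"}
```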
For each factual claim or identified issue:
- CLAIM: "The auth middleware doesn't check token expiry"
- VERIFY: Read the auth middleware file
- FINDING: [Confirmed | Refuted | Partially true | Cannot verify]
- EVIDENCE: [Quote relevant code or explain why claim is wrong]
Use Grep, Glob, and Read tools to locate and examine relevant code. Do not trust model claims without verification.
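Inside the agent, those tools do the searching directly. As a rough illustration of the same loop outside an agent, here is a sketch that shells out to ripgrep (assuming `rg` is installed); the search path and pattern are hypothetical:

```python
import subprocess

def find_evidence(pattern: str, path: str = ".") -> list[str]:
    """Collect line-numbered matches for a claim by searching the code.

    Assumes ripgrep (`rg`) is on PATH; `-n` makes each match citable
    as file:line evidence. ripgrep exits with 1 when nothing matches,
    which is itself a finding worth recording.
    """
    result = subprocess.run(
        ["rg", "-n", pattern, path],
        capture_output=True,
        text=True,
    )
    return result.stdout.splitlines() if result.returncode == 0 else []

# CLAIM: "The auth middleware doesn't check token expiry"
# "src/middleware" is a hypothetical path for this sketch.
matches = find_evidence(r"expir", "src/middleware")
# Matches still need reading: a hit suggests Refuted or Partially true,
# while no hits support Confirmed.
```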
When models contradict each other:
- CONFLICT: Model A says "uses SHA-256", Model B says "uses MD5"
- INVESTIGATION: Read crypto.js lines 45-60
- RESOLUTION: Model B is correct - line 52 shows MD5 usage
- EVIDENCE: `const hash = crypto.createHash('md5')`
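The same helper can drive a conflict investigation. A sketch reusing `find_evidence()` from above; `crypto.js` and the pattern come from the example conflict and are illustrative only:

```python
# CONFLICT: Model A says SHA-256, Model B says MD5.
# Search for whichever hash is actually constructed.
for match in find_evidence(r"""createHash\(['"](md5|sha256)['"]\)""", "crypto.js"):
    print(match)  # e.g. 52:const hash = crypto.createHash('md5')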
Produce a final document that separates confirmed, refuted, and unverifiable claims and backs each finding with evidence. Use this template (bracketed text is a placeholder):
# Synthesized Assessment: [Topic]
## Summary
[2-3 sentences describing the verified findings]
## Verified Findings
### Confirmed Issues
| Issue | Severity | Evidence | Models |
|-------|----------|----------|--------|
| [Issue] | High/Med/Low | [file:line or quote] | Claude, GPT |
### Refuted Claims
| Claim | Source | Reality |
|-------|--------|---------|
| [What model said] | GPT-4 | [What code actually shows] |
### Unverifiable Claims
| Claim | Source | Why Unverifiable |
|-------|--------|------------------|
| [Claim] | Claude | [Requires runtime testing / external system / etc.] |
## Consensus Recommendations
[Items where 2+ models agree AND verification supports the suggestion]
## Unique Insights Worth Considering
[Valuable suggestions from single models that weren't contradicted]
## Conflicts Resolved
| Topic | Model A | Model B | Verdict | Evidence |
|-------|---------|---------|---------|----------|
| [Topic] | [Position] | [Position] | [Which is correct] | [Code reference] |
## Action Items
### Critical (Verified, High Impact)
- [ ] [Item] — Evidence: [file:line]
### Important (Verified, Medium Impact)
- [ ] [Item] — Evidence: [file:line]
### Suggested (Unverified but Reasonable)
- [ ] [Item] — Source: [Models]
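The tiering of action items is mechanical enough to sketch. A possible renderer, where every field name is an assumption of this sketch rather than part of the skill:

```python
def render_action_items(items: list[dict]) -> str:
    """Render the three Action Items tiers from the template above.

    Assumed fields per item: text, impact ("high" or "medium"),
    verified (bool), and either evidence ("file:line") or sources
    (a list of model names).
    """
    tiers: dict[str, list[str]] = {
        "### Critical (Verified, High Impact)": [],
        "### Important (Verified, Medium Impact)": [],
        "### Suggested (Unverified but Reasonable)": [],
    }
    for item in items:
        if not item["verified"]:
            tiers["### Suggested (Unverified but Reasonable)"].append(
                f"- [ ] {item['text']} — Source: {', '.join(item['sources'])}"
            )
        elif item["impact"] == "high":
            tiers["### Critical (Verified, High Impact)"].append(
                f"- [ ] {item['text']} — Evidence: {item['evidence']}"
            )
        else:
            tiers["### Important (Verified, Medium Impact)"].append(
                f"- [ ] {item['text']} — Evidence: {item['evidence']}"
            )
    return "\n\n".join(
        f"{header}\n" + "\n".join(rows) for header, rows in tiers.items() if rows
    )
```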
Three rules govern how claims are handled:

- Always verify: factual claims about the code, such as its behavior, structure, and security issues.
- Trust but note source: subjective recommendations and preferences that reading the code can neither confirm nor refute.
- Mark as unverifiable: claims that would require runtime testing, external systems, or information outside the repository.