Conduct exhaustive, citation-rich research on any topic using all available tools: web search, browser automation, documentation APIs, and codebase exploration. Use when asked to "research X", "find out about Y", "investigate Z", "deep dive into...", "what's the current state of...", "compare options for...", "fact-check this...", or any request requiring comprehensive, accurate information from multiple sources. Prioritizes accuracy over speed, cross-references claims across sources, identifies conflicts, and provides full citations. Outputs structured findings with confidence levels and source quality assessments.
`npx claudepluginhub petekp/agent-skills --plugin literate-guide`

This skill uses the workspace's default tool permissions.
Systematic methodology for conducting exhaustive, accurate research using all available tools. Prioritizes correctness over speed.
Use these tools in combination based on the research topic:
| Tool | Best For | Limitations |
|---|---|---|
| WebSearch | Current events, recent information, broad topic discovery | Results may be outdated, SEO-influenced |
| WebFetch | Reading specific URLs, extracting detailed content | Requires known URL |
| Playwright browser | Interactive sites, paywalled content (if logged in), complex navigation | Slower, requires more tokens |
| Context7/MCP docs | Library/framework documentation | Only indexed libraries |
| OpenAI docs MCP | OpenAI API specifics | OpenAI only |
| Grep/Glob/Read | Codebase research, finding implementations | Local files only |
Before researching, clarify the scope and goals of the request. Ask clarifying questions if the scope is ambiguous, and use AskUserQuestion for structured choices when multiple research directions are possible.
Cast a wide net to find relevant sources:
1. WebSearch with multiple query variations
- Try 3-5 different phrasings of the core question
- Include technical terms AND plain language
- Search for "[topic] official documentation"
- Search for "[topic] research paper" or "[topic] study"
2. Identify authoritative sources from results
- Official documentation sites
- Academic papers / research institutions
- Industry standards bodies
- Recognized experts in the field
3. Check specialized tools
- Context7 for library/framework docs
- OpenAI docs MCP for OpenAI-specific topics
- GitHub/codebase for implementation details
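The query-fanout step above can be sketched as a small helper. This is a minimal illustration, not part of the skill's implementation; the exact modifiers are examples drawn from the list above.

```python
def query_variations(topic: str) -> list[str]:
    """Generate several phrasings of one research question.

    Mirrors the discovery steps above: plain phrasing plus
    targeted modifiers that surface authoritative sources.
    """
    return [
        topic,
        f"{topic} official documentation",
        f"{topic} research paper",
        f"{topic} study",
        f"{topic} comparison",
    ]
```

Each variation is issued as a separate WebSearch call; overlapping result sets are a useful signal that a source is widely referenced.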
During discovery, prefer primary and official sources over aggregators. For each promising source, choose a reading strategy suited to its type:
| Source Type | Strategy |
|---|---|
| Documentation | Read relevant sections fully; note version/date |
| Research paper | Abstract, conclusion, methodology in that order |
| News article | Check publication date, author credentials, cited sources |
| Blog post | Verify claims independently; note author's expertise |
| Forum/Q&A | Check answer date, votes, accepted status; verify independently |
For each major claim, verify it against at least two independent sources before treating it as established. When sources conflict, report every position and analyze why they differ rather than silently choosing one. Structure findings clearly:
## Research Summary: [Topic]
### Key Findings
1. **[Finding 1]**
- [Specific fact with citation]
- [Supporting evidence]
- Confidence: High/Medium/Low/Uncertain
- Sources: [1], [2]
2. **[Finding 2]**
...
### Conflicts & Uncertainties
- [Area of disagreement]: Source A claims X [1], while Source B claims Y [2]. [Analysis of why they differ]
### Source Quality Assessment
| # | Source | Type | Authority | Recency | Notes |
|---|--------|------|-----------|---------|-------|
| 1 | [URL] | Official docs | High | 2024-01 | Primary source |
| 2 | [URL] | Research paper | High | 2023-06 | Peer-reviewed |
| 3 | [URL] | Blog | Medium | 2024-03 | Author is [expert] |
### Gaps & Limitations
- [What couldn't be verified]
- [Areas needing more research]
### Citations
[1] [Full citation with URL]
[2] [Full citation with URL]
...
Assign confidence to each finding:
| Level | Criteria |
|---|---|
| High | 3+ independent authoritative sources agree; no conflicts |
| Medium | 2 sources agree, or 1 highly authoritative source; minor conflicts |
| Low | Single source, or significant conflicts between sources |
| Uncertain | Sources conflict significantly; unable to determine truth |
Always state confidence explicitly. "I'm not sure" is a valid research finding.
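The rubric above can be expressed as a simple rule. This is an illustrative sketch only; in particular, the split between Low and Uncertain under significant conflict (more than one side still agreeing vs. no determinable answer) is an assumption, not something the table pins down.

```python
def assign_confidence(num_sources: int, authoritative: bool, conflict: str) -> str:
    """Map source counts and conflict severity onto a confidence label.

    num_sources: count of independent sources that agree
    authoritative: whether at least one source is highly authoritative
    conflict: "none", "minor", or "significant"
    """
    if conflict == "significant":
        # Sources disagree badly: Low if agreement still exists on one side,
        # Uncertain if the truth cannot be determined (assumed split).
        return "Low" if num_sources >= 2 else "Uncertain"
    if num_sources >= 3 and authoritative and conflict == "none":
        return "High"
    if num_sources >= 2 or authoritative:
        return "Medium"
    return "Low"
```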
Use inline citations with numbered references:
The API rate limit is 60 requests per minute [1], though this can be increased
for enterprise accounts [2].
---
[1] OpenAI API Documentation, "Rate Limits", https://platform.openai.com/docs/guides/rate-limits, accessed 2024-01-15
[2] OpenAI Enterprise FAQ, https://openai.com/enterprise, accessed 2024-01-15
Each citation must include the source name, the page or section title where applicable, the full URL, and the access date.
| Anti-Pattern | Why It's Bad | Instead |
|---|---|---|
| Single source | No verification | Always find 2+ sources |
| Uncited claims | Unverifiable | Every fact needs a source |
| Assuming first result is best | SEO != accuracy | Evaluate source quality |
| Ignoring conflicts | Hides uncertainty | Report all positions |
| Outdated sources | Information decay | Check publication dates |
| Trusting AI summaries | May hallucinate | Go to primary sources |
| Stopping early | Incomplete picture | Research until diminishing returns |
Research is complete when every key claim is verified against multiple sources, conflicts are documented, and further searching yields diminishing returns.
For detailed guidance on specific scenarios: