Detects and fetches AI-optimized documentation formats like llms.txt and claude.txt from documentation websites.
```
npx claudepluginhub squirrelsoft-dev/doc-fetcher
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
I specialize in detecting and fetching AI-optimized documentation formats like llms.txt and claude.txt from documentation websites.
I help the doc-fetcher plugin find the most AI-friendly documentation format available:
- `/llms.txt` or `/llms-full.txt` at the site root
- `/claude.txt`

A standard format designed specifically for AI consumption:
```markdown
# Library Name
> Version: 1.0.0
> Last Updated: 2025-01-17

## Overview
[Concise description optimized for AI understanding]

## Installation
[Installation instructions]

## Quick Start
[Getting started guide]

## API Reference
[Complete API documentation]

## Examples
[Code examples with context]
```
Benefits: one file instead of hundreds of pages, content pre-optimized for AI consumption, and much faster downloads than crawling.
Claude-specific documentation format:
```markdown
# Library Name for Claude

This documentation is optimized for Claude AI assistant.

[Documentation structured specifically for Claude's capabilities]
```
When given a documentation URL, I check in this order:
1. `https://example.com/llms-full.txt` (comprehensive version)
2. `https://example.com/llms.txt` (standard version)
3. `https://example.com/claude.txt` (Claude-specific)
4. `https://example.com/.well-known/llms.txt` (alternative location)
5. `https://example.com/docs/llms.txt` (docs subdirectory)

I'm automatically invoked when you run:
```
/fetch-docs <library>
```
I run first and check for AI-optimized formats before falling back to web crawling.
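The check order above can be sketched as a simple probe loop. This is an illustrative sketch, not the plugin's actual implementation: the function names are hypothetical, and the `exists` callback stands in for whatever HTTP check (e.g. a HEAD request expecting 200) the plugin performs.

```python
from typing import Callable, Optional

# Candidate paths, in the preference order described above.
CANDIDATE_PATHS = [
    "/llms-full.txt",
    "/llms.txt",
    "/claude.txt",
    "/.well-known/llms.txt",
    "/docs/llms.txt",
]

def candidate_urls(base_url: str) -> list[str]:
    """Expand a site root into candidate URLs, in preference order."""
    root = base_url.rstrip("/")
    return [root + path for path in CANDIDATE_PATHS]

def find_llms_txt(base_url: str, exists: Callable[[str], bool]) -> Optional[str]:
    """Return the first candidate URL for which `exists` is true, else None."""
    for url in candidate_urls(base_url):
        if exists(url):
            return url
    return None
```

Injecting the `exists` check keeps the preference logic testable without touching the network.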
You can also invoke me directly through the skill:
"Check if Next.js has an llms.txt file"
"Find AI-optimized docs for Supabase"
"Does https://react.dev have claude.txt?"
If I find an AI-optimized documentation file, I return:
```json
{
  "found": true,
  "type": "llms.txt",
  "url": "https://nextjs.org/llms.txt",
  "size_bytes": 524288,
  "version": "15.0.3",
  "last_updated": "2025-01-15",
  "content_preview": "# Next.js 15.0.3...",
  "should_use": true,
  "reason": "AI-optimized format available, more efficient than crawling"
}
```
If not found:
```json
{
  "found": false,
  "checked_urls": [
    "https://example.com/llms.txt",
    "https://example.com/claude.txt",
    "https://example.com/.well-known/llms.txt"
  ],
  "fallback": "sitemap.xml",
  "reason": "No AI-optimized format found, falling back to web crawling"
}
```
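A consumer of these results might branch on the fields shown above. The field names (`found`, `should_use`, `url`, `fallback`) come from the JSON examples; the function itself and its strategy strings are a hypothetical sketch.

```python
def choose_strategy(result: dict) -> str:
    """Pick a fetch strategy from a finder result like the JSON above."""
    if result.get("found") and result.get("should_use"):
        # An AI-optimized file exists and passed validation: download it.
        return "download:" + result["url"]
    # No usable AI-optimized file: fall back as the result suggests.
    return "crawl:" + result.get("fallback", "sitemap.xml")
```

For example, `choose_strategy({"found": False, "fallback": "sitemap.xml"})` yields `"crawl:sitemap.xml"`.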
When I find an AI-optimized file:
```
User: /fetch-docs nextjs

Doc Fetcher: Invoking llms-txt-finder skill...

llms-txt-finder:
  ✓ Checking https://nextjs.org/llms.txt
  ✓ Found! (524 KB)
  ✓ Version: 15.0.3
  ✓ Last updated: 2025-01-15

  Recommendation: Use llms.txt instead of crawling
  Benefits:
  - 1 file vs ~234 pages to crawl
  - Pre-optimized for AI
  - Faster download (5 seconds vs 4 minutes)

Doc Fetcher: Using llms.txt...
  ✓ Downloaded and cached
  ✓ Generated skill: nextjs-15-expert
```
I validate AI-optimized documentation files for format validity, version metadata, content completeness, and reasonable file size.
If validation fails, I recommend falling back to crawling:
```
⚠ Found llms.txt but validation failed
  Issue: File is only 2 KB (likely incomplete)
  Recommendation: Fall back to sitemap.xml crawling
```
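The validation described above could be sketched like this. The specific thresholds and checks are illustrative assumptions; only the kinds of checks (markdown shape, version metadata, minimum size) come from this document.

```python
import re

def validate_llms_txt(content: str, min_bytes: int = 4096) -> tuple[bool, str]:
    """Basic validation: size, markdown shape, version metadata.

    The 4 KB minimum is an assumed threshold (the 2 KB file in the
    example above would fail it).
    """
    size = len(content.encode("utf-8"))
    if size < min_bytes:
        return False, f"File is only {size // 1024} KB (likely incomplete)"
    if not content.lstrip().startswith("#"):
        return False, "Does not look like markdown (no top-level heading)"
    if not re.search(r"^> Version:\s*\S+", content, re.MULTILINE):
        return False, "No version metadata found"
    return True, "ok"
```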
I maintain awareness of popular libraries that provide AI-optimized documentation:
Most of these publish `/llms.txt`; some also provide `/claude.txt`. This list helps me make intelligent guesses about where to look.
If you maintain a library, I can help you create an llms.txt file:
"Help me create an llms.txt file for my library"
I'll generate a template following best practices:
```markdown
# Your Library Name
> Version: 1.0.0
> Last Updated: 2025-01-17
> Repository: https://github.com/you/your-lib
> Documentation: https://docs.yourlib.com

## Overview
[2-3 sentence description of what your library does]

## Installation
[Installation commands for different package managers]

## Quick Start
[Minimal example to get started]

## Core Concepts
[Key concepts users need to understand]

## API Reference
[Complete API documentation]

## Examples
[Common use cases with code examples]

## Advanced Usage
[Advanced patterns and techniques]

## Troubleshooting
[Common issues and solutions]

## Changelog
[Recent changes and migration notes]
```
Using AI-optimized documentation dramatically improves performance:
| Metric | Web Crawling | llms.txt |
|---|---|---|
| Files fetched | 100-500 | 1 |
| Time to fetch | 2-10 min | 5-30 sec |
| Network requests | 100-500 | 1 |
| Size on disk | 5-50 MB | 0.5-5 MB |
| Token efficiency | Variable | Optimized |
I handle error scenarios such as missing files (404s), incomplete or truncated files, and files exceeding the configured size limit.
Configure my behavior in doc-fetcher-config.json:
```json
{
  "llms_txt": {
    "enabled": true,
    "check_locations": [
      "/llms-full.txt",
      "/llms.txt",
      "/claude.txt",
      "/.well-known/llms.txt",
      "/docs/llms.txt"
    ],
    "max_size_bytes": 52428800,
    "validation_strict": true,
    "prefer_over_crawling": true
  }
}
```
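Consuming this config block might look like the following sketch. The key names and defaults mirror the JSON above; the loader function itself is hypothetical.

```python
import json

def load_llms_config(raw: str) -> dict:
    """Parse doc-fetcher-config.json and fill in defaults for missing keys."""
    defaults = {
        "enabled": True,
        "check_locations": ["/llms.txt"],
        "max_size_bytes": 52428800,  # 50 MB, as in the example config
        "validation_strict": True,
        "prefer_over_crawling": True,
    }
    cfg = json.loads(raw).get("llms_txt", {})
    # Explicit settings override the defaults.
    return {**defaults, **cfg}
```

Merging over a defaults dict means a partial config file (or none at all) still yields a complete, usable configuration.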
I use the tools available to me to accomplish my tasks. A successful check looks like this:
```
Checking for AI-optimized docs at https://nextjs.org...

✓ https://nextjs.org/llms.txt
  Status: 200 OK
  Size: 524 KB
  Content-Type: text/plain
  Last-Modified: 2025-01-15

✓ Validation passed
  - Valid markdown format
  - Contains version: 15.0.3
  - Comprehensive content
  - Reasonable size

Recommendation: Use llms.txt
Benefits: 40x faster than crawling 234 pages
```
When nothing is found:

```
Checking for AI-optimized docs at https://example.com...

✗ https://example.com/llms-full.txt (404)
✗ https://example.com/llms.txt (404)
✗ https://example.com/claude.txt (404)
✗ https://example.com/.well-known/llms.txt (404)

No AI-optimized documentation found.
Falling back to sitemap.xml or web crawling.
```
- doc-indexer skill - main documentation crawling logic
- doc-crawler agent - advanced web crawling for non-standard sites
- /fetch-docs command - primary entry point that uses this skill