Generates standards-compliant llms.txt files for websites by crawling live sites or reading codebases. Aids AI discoverability, GEO, and LLM readability.
`npx claudepluginhub varnan-tech/opendirectory --plugin opendirectory-gtm-skills`

This skill uses the workspace's default tool permissions.
You are an expert in Generative Engine Optimization (GEO) and the llms.txt standard. Your job is to crawl a website and produce a perfectly structured `llms.txt` file that makes the site fully readable and citable by AI agents.
CRITICAL RULE: DO NOT INVENT CONTENT. Every link, title, and description must come from what you actually found on the site during the crawl. Never fabricate URLs or describe content you did not visit.
MANDATORY SETUP CHECK: Before starting, confirm you have:
- Chrome running with remote debugging enabled (launched with `chrome --remote-debugging-port=9222`)

If Chrome is not available, fall back to standard web fetch tools to retrieve page content. If neither is available, STOP and ask the user to provide Chrome access or the raw page content.
Before anything else, check whether you are inside a website codebase:
Look for `package.json`, `astro.config.*`, `next.config.*`, `nuxt.config.*`, `gatsby-config.*`, `vite.config.*`, or `_config.yml` in the current working directory or its parent.

If any of these exist, you have access to the source. Extract everything from the code — this gives better coverage than crawling because you get content before it's rendered.
2A-1. Detect the framework and site config:
- `package.json` → identify the framework (next, astro, nuxt, gatsby, @sveltejs/kit, etc.) and the name/description fields
- Framework config (`next.config.*`, `astro.config.*`, etc.) → `basePath`, `site`, or `siteUrl`
- `public/`, `static/`, or `dist/` → an existing llms.txt; if found, read it

2A-2. Discover all pages/routes:
| Framework | Where to look |
|---|---|
| Next.js (pages router) | pages/**/*.tsx, pages/**/*.jsx — skip _app, _document, api/ |
| Next.js (app router) | app/**/page.tsx, app/**/page.jsx — directory name = route |
| Astro | src/pages/**/*.astro, src/pages/**/*.md |
| Nuxt | pages/**/*.vue |
| Gatsby | src/pages/**/*.tsx, src/pages/**/*.jsx |
| SvelteKit | src/routes/**/+page.svelte |
| Hugo / Jekyll | content/**/*.md, _posts/**/*.md |
Read each page file and extract: page title (`<title>`, `export const metadata`, frontmatter `title:`), meta description, and main headings (H1, H2).
2A-3. Find blog/content posts:
- Check `content/`, `posts/`, `src/content/`, `_posts/`, `blog/` for markdown/MDX files
- Extract frontmatter (`title`, `description`, `date`, `slug`) from each file

2A-4. Read the site's existing SEO/meta config:
- `src/config.ts`, `src/site.config.ts`, `seo.config.*`, or any file exporting `siteTitle`, `siteDescription`, `siteUrl`
- `constants.ts`, `config/index.ts` — look for site-level metadata

2A-5. Construct the base URL:
- Use `siteUrl` or `site` from config files

Then skip to Step 4 to generate the file using codebase data.
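Once a base URL is known, it combines with the routes discovered in 2A-2 to form the link entries. A minimal sketch (the function name is hypothetical, and real configs may also carry a `basePath` that would need to be prepended):

```python
from urllib.parse import urljoin

def absolute_url(site_url: str, route: str) -> str:
    """Join a configured siteUrl with a discovered route, tolerating
    a missing or present trailing slash on either side."""
    base = site_url if site_url.endswith("/") else site_url + "/"
    return urljoin(base, route.lstrip("/"))
```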
If the user hasn't provided a URL, ask: "What website should I generate llms.txt for?"
Before crawling, check if the site already has one:
- Fetch `[URL]/llms.txt`

Use the Chrome DevTools MCP server to connect to the live browser. Follow the same connection pattern as the chrome-cdp-skill:
- Connect to `http://localhost:9222` via Chrome DevTools MCP
- Crawl the homepage, then common sections: `/docs`, `/blog`, `/api`, `/about`, `/pricing`, `/examples`, `/changelog`

If Chrome DevTools MCP is unavailable, fall back to fetching pages with standard web tools (curl, fetch). If the site returns 403, try adding a browser User-Agent header.
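The 403 fallback described above might look like this sketch; the User-Agent string is an arbitrary example, not a required value:

```python
from urllib.error import HTTPError
from urllib.request import Request, urlopen

# Example browser User-Agent; any realistic desktop UA string works.
BROWSER_UA = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
              "(KHTML, like Gecko) Chrome/120.0 Safari/537.36")

def fetch_page(url: str, spoof_ua: bool = False) -> str:
    """Plain fetch first; retry once with a browser User-Agent if the site 403s."""
    headers = {"User-Agent": BROWSER_UA} if spoof_ua else {}
    try:
        with urlopen(Request(url, headers=headers), timeout=15) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except HTTPError as err:
        if err.code == 403 and not spoof_ua:
            return fetch_page(url, spoof_ua=True)
        raise
```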
Before writing output, read both reference files:
- `references/llms-txt-spec.md` — the format rules and validation checklist
- `references/output-template.md` — the exact template to follow

Note which mode you used: Codebase Mode (data came from source files) or Live Site Mode (data came from browser crawl). Both produce the same output format — the only difference is your data source.
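For orientation only, here is a minimal file in the shape the llms.txt spec describes — an H1 site name, a blockquote summary, and H2 sections of link lists. All names and URLs below are invented placeholders; `references/output-template.md` remains the authoritative template:

```markdown
# Example Co

> Example Co makes developer tooling for data pipelines. This file lists the pages most useful to AI agents.

## Docs

- [Quickstart](https://example.com/docs/quickstart): Install the CLI and run a first pipeline
- [API Reference](https://example.com/docs/api): Endpoints, authentication, and SDKs

## Blog

- [Launch post](https://example.com/blog/launch): Why the project exists

## Optional

- [Changelog](https://example.com/changelog): Release history
```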
Using only content from your crawl, produce the llms.txt file:
- Validate it against `references/llms-txt-spec.md` before finalizing

Ask the user: "Do you also want me to generate llms-full.txt with the full prose content of key pages included? This is larger but gives AI agents everything in one file."
If yes: revisit each key page and paste the full cleaned text content under each link entry, separated by ---.
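That assembly step can be sketched as a pure function. The exact layout of llms-full.txt should follow `references/output-template.md`, so treat the section headings here as an assumption about the shape:

```python
def build_llms_full(base: str, pages: list[tuple[str, str]]) -> str:
    """Append each page's cleaned prose under the base llms.txt content,
    with entries separated by --- horizontal rules."""
    sections = [base.rstrip("\n")]
    for url, text in pages:
        sections.append(f"## {url}\n\n{text.strip()}")
    return "\n\n---\n\n".join(sections) + "\n"
```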
- Save `llms.txt` to the current working directory (or the user's project root if known)
- If `llms-full.txt` was requested, save that too
- Show the `llms.txt` in the conversation so the user can review it

If Codebase Mode: You know the framework — place the file immediately:
| Framework | Action |
|---|---|
| Next.js / Vercel | Write directly to public/llms.txt in the repo |
| Astro | Write directly to public/llms.txt |
| Nuxt | Write directly to public/llms.txt |
| Gatsby | Write directly to static/llms.txt |
| SvelteKit | Write directly to static/llms.txt |
| Hugo | Write directly to static/llms.txt |
| Jekyll | Write directly to root of repo as llms.txt |
Ask the user: "I can write llms.txt directly to public/llms.txt in your repo. Should I do that now, or do you want to review it first?"
If approved, write the file. Then tell the user: "Deploy your site and the file will be live at https://yourdomain.com/llms.txt."
If Live Site Mode: Tell the user where to add it:
Place llms.txt at your web root so it's accessible at: https://yourdomain.com/llms.txt
- Next.js / Vercel: put in /public/llms.txt
- Astro / Nuxt / Gatsby / SvelteKit: put in /public/llms.txt
- GitHub Pages: put in root of repo
- Hugo / Jekyll: put in /static/llms.txt
- WordPress: upload to web root via FTP or use a rewrite rule
- Custom server: serve as a static file at /llms.txt
A great llms.txt file:
- Follows the structure and validation checklist in `references/llms-txt-spec.md`

A bad llms.txt file: