Research a library/topic and generate comprehensive llms_txt documentation
Research any library or tool and generate comprehensive llms.txt documentation with 30+ practical code examples. Use this when you need to create reference-quality documentation for AI assistants to work with a specific technology.
/plugin marketplace add cruzanstx/daplug
/plugin install daplug@cruzanstx

Arguments: <library-name>

You are tasked with gathering requirements for llms_txt documentation and creating an executable prompt that can be run via /run-prompt.
The user wants to create llms_txt documentation for: $ARGUMENTS
Before starting, determine the llms_txt repository location:
1. Check for existing setting in ~/.claude/CLAUDE.md:
PLUGIN_ROOT=$(jq -r '.plugins."daplug@cruzanstx"[0].installPath' ~/.claude/plugins/installed_plugins.json)
CONFIG_READER="$PLUGIN_ROOT/skills/config-reader/scripts/config.py"
LLMS_TXT_DIR=$(python3 "$CONFIG_READER" get llms_txt_dir)
if [ -z "$LLMS_TXT_DIR" ] || [ ! -d "$LLMS_TXT_DIR" ]; then
# Proceed to AskUserQuestion (Step 2)
:
fi
2. If not found OR the directory does not exist:
Use the AskUserQuestion tool to offer three options: "Clone repository", "Specify existing path", or "Create new directory".
3. If user chooses "Clone repository":
# Determine sensible default location
if [[ "$PWD" == /storage/projects/* ]]; then
DEFAULT_CLONE_DIR="/storage/projects/docker/llms_txt"
else
DEFAULT_CLONE_DIR="$HOME/projects/llms_txt"
fi
# Ask user to confirm or customize location, then set:
# LLMS_TXT_DIR="<user-chosen-path>"
# Clone the repo
git clone https://gitlab.local/local/llms_txt.git "$LLMS_TXT_DIR"
# Verify success
if [ -f "$LLMS_TXT_DIR/AGENTS.md" ] || [ -f "$LLMS_TXT_DIR/INDEX.md" ]; then
echo "✓ Repository cloned successfully"
else
echo "⚠ Clone may have failed - missing expected files"
fi
4. If user chooses "Specify existing path":
LLMS_TXT_DIR="<user-provided-path>"
if [ ! -d "$LLMS_TXT_DIR" ]; then
echo "⚠ Path does not exist. Ask again or offer Create new directory."
fi
5. If user chooses "Create new directory":
LLMS_TXT_DIR="<user-provided-path>"
mkdir -p "$LLMS_TXT_DIR/prompts/completed"
6. Save setting to ~/.claude/CLAUDE.md:
python3 "$CONFIG_READER" set llms_txt_dir "$LLMS_TXT_DIR" --scope user
7. All prompts will be created in: $LLMS_TXT_DIR/prompts/
8. Read the repository structure:
$LLMS_TXT_DIR/AGENTS.md to understand the directory organization
9. Determine the language/category:
- python/ - Python libraries and frameworks
- go/ - Go libraries and tools
- javascript/ - JavaScript libraries
- typescript/ - TypeScript frameworks
- frameworks/ - Cross-platform frameworks
- tools/ - Development tools (CLI, editors, etc.)
10. Check for subdirectories:
- Topic subdirectories (e.g., python/ai/, go/cli/, go/testing/)
- Library/ecosystem subdirectories (e.g., python/pydantic_ai/, go/charmbracelet/, frameworks/pocketbase/)
11. Construct the full path:
$LLMS_TXT_DIR/<language|tools|frameworks>/<category-or-library>/<library-name>.llms-full.txt

Before writing the prompt, propose an advanced deep-dive file set when useful:
Scan official docs quickly to identify high-value advanced topics (e.g., configuration, CLI reference, plugins/extensions, MCP/tooling, hooks, troubleshooting, auth, output formats, integrations).
Propose a focused list of advanced files with rationale and filenames.
Ask for user approval and confirmation on:
- Which advanced files to create
- Naming (e.g., tool-name-advanced.llms-full.txt or tool-name-<topic>-advanced.llms-full.txt)
Only include advanced files in the prompt after the user confirms.
Deep-Dive Naming Guidance
Use the -advanced suffix for topic-specific deep dives, e.g.:
- gemini-cli-configuration-advanced.llms-full.txt
- gemini-cli-tools-advanced.llms-full.txt
- gemini-cli-mcp-advanced.llms-full.txt
Deep-Dive Proposal Checklist
After gathering all information, create a prompt file in $LLMS_TXT_DIR/prompts/:
# Find highest existing number in llms_txt prompts directory
NEXT_NUM=$(ls "$LLMS_TXT_DIR/prompts/"*.md "$LLMS_TXT_DIR/prompts/completed/"*.md 2>/dev/null | sed 's|.*/||' | grep -oE '^[0-9]{3}' | sort -n | tail -1)
if [ -z "$NEXT_NUM" ]; then
NEXT_NUM="001"
else
NEXT_NUM=$(printf "%03d" $((10#$NEXT_NUM + 1)))
fi
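To make the assembly concrete, here is a minimal sketch of how the prompt filename and target path fit together (CATEGORY, SUBDIR, and LIBRARY_NAME are illustrative placeholders for the values gathered in steps 9-11, not part of the command itself):

```bash
# Hypothetical inputs gathered from the user in earlier steps
LIBRARY_NAME="bubbletea"
CATEGORY="go"
SUBDIR="charmbracelet"   # may be empty when no subdirectory applies

# Prompt file to create (NEXT_NUM computed above)
PROMPT_FILE="$LLMS_TXT_DIR/prompts/${NEXT_NUM}-create-llms-txt-${LIBRARY_NAME}.md"

# Target documentation path; omit the subdirectory segment when unset
if [ -n "$SUBDIR" ]; then
  FULL_PATH="$LLMS_TXT_DIR/$CATEGORY/$SUBDIR/${LIBRARY_NAME}.llms-full.txt"
else
  FULL_PATH="$LLMS_TXT_DIR/$CATEGORY/${LIBRARY_NAME}.llms-full.txt"
fi
echo "Prompt: $PROMPT_FILE"
echo "Target: $FULL_PATH"
```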
Save to $LLMS_TXT_DIR/prompts/{NEXT_NUM}-create-llms-txt-{library-name}.md:
<objective>
Research {LIBRARY_NAME} thoroughly and generate a comprehensive, well-structured llms_txt file that provides all essential information an AI assistant would need to work with this technology.
</objective>
<target_file>
{FULL_PATH} (e.g., {LLMS_TXT_DIR}/{category}/{subdirectory}/{library-name}.llms-full.txt)
</target_file>
<research_phase>
1. **Check for existing llms.txt files**:
- Search for official llms.txt or llms-full.txt files from the library's website
- Check GitHub repositories for existing documentation
- Look for community-maintained llms.txt files
2. **Gather primary sources**:
- Official documentation (GitHub README, docs site)
- Package/API documentation (pkg.go.dev, npm, PyPI, etc.)
- Comprehensive guides from reputable sources (Better Stack, official tutorials)
3. **Use WebSearch and WebFetch**:
WebSearch: "{LIBRARY_NAME} llms.txt"
WebSearch: "{LIBRARY_NAME} official documentation"
WebSearch: "{LIBRARY_NAME} complete guide tutorial"
4. **Fetch comprehensive content**:
- Official GitHub repository README
- Official documentation site
- 1-2 high-quality tutorial/guide sites
- API reference documentation
**Research priorities:**
- Official sources (GitHub, official docs)
- Comprehensive guides (Better Stack, LogRocket, etc.)
- API/package documentation sites
- Well-maintained community resources
**Extract:**
- All features and capabilities
- Installation and setup instructions
- Complete API reference with types/methods
- Usage examples and patterns
- Best practices and performance considerations
- Common patterns and anti-patterns
- Configuration options
- Integration examples
</research_phase>
<content_structure>
Structure the llms_txt file with these sections:
```markdown
# {LIBRARY_NAME} - [One-line Description]
> Official Repository: [URL]
> Documentation: [URL]
> Version: [Latest stable version]
> License: [License type]
## Overview
[What it is, what problems it solves, key philosophy]
## Installation
[Package manager commands, setup steps]
## Core Features
[Bullet list of main capabilities]
## Basic Usage
### [Feature 1]
[Code examples with explanations]
### [Feature 2]
[Code examples with explanations]
## Advanced Features
### [Advanced Feature 1]
[Detailed examples]
### [Advanced Feature 2]
[Detailed examples]
## Configuration
[Environment variables, config files, options]
## Best Practices
[Performance tips, recommended patterns, what to avoid]
## Critical Implementation Notes
[Important gotchas, common mistakes, must-know information]
## Common Patterns
[Real-world usage examples]
## Comparison/Context
[How it compares to alternatives, when to use it]
## Resources
- Official links
- Community resources
- Related tools
---
**Generated**: [Date]
**Source**: [List of primary sources]
**Maintainer**: [Original author/org]
```
</content_structure>
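For illustration, a filled-in header block for a hypothetical axios run might look like the following; the version is left as a placeholder rather than a looked-up value:

```markdown
# axios - Promise-based HTTP client for the browser and Node.js

> Official Repository: https://github.com/axios/axios
> Documentation: https://axios-http.com/docs/intro
> Version: [latest stable at research time]
> License: MIT
```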
<quality_requirements>
{IF_ADVANCED_FILES}
<advanced_files>
Also create these advanced deep-dive files:
{LIST_OF_ADVANCED_FILES_WITH_TOPICS}
Each advanced file should follow the same structure and quality requirements as the base file.
</advanced_files>
<special_case_instructions> {SPECIAL_CASE_CONTENT - see templates below} </special_case_instructions>
<verification>
Before declaring complete, verify:
- [ ] File saved to {FULL_PATH}
- [ ] Directory structure matches existing organization
- [ ] Includes overview, installation, core features
- [ ] Contains 30+ code examples covering major use cases
- [ ] Documents all important types/methods/functions
- [ ] Includes configuration options
- [ ] Contains best practices and gotchas
- [ ] Lists all source URLs
- [ ] Well-formatted markdown with clear hierarchy
- [ ] Covers both basic and advanced usage
- [ ] Includes comparison/context section if applicable

Output: <verification>VERIFICATION_COMPLETE</verification>
</verification>
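As a rough spot-check for the 30+ example requirement, something like the following can count fenced code blocks in the generated file (this assumes every example uses triple-backtick fences, so each block contributes two fence lines):

```bash
# Each fenced example contributes an opening and a closing fence line
FENCES=$(grep -c '^```' "$FULL_PATH")
BLOCKS=$((FENCES / 2))
if [ "$BLOCKS" -ge 30 ]; then
  echo "✓ $BLOCKS code examples found"
else
  echo "⚠ Only $BLOCKS code examples - target is 30+"
fi
```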
### Special Case Templates
Include the appropriate special case section based on library type:
**For Go Libraries:**
```xml
<special_case_instructions>
- Include pkg.go.dev documentation
- Cover interfaces, types, methods
- Show testing patterns
- Include module/import paths
</special_case_instructions>
```
**For JavaScript/TypeScript Libraries:**
```xml
<special_case_instructions>
- Include npm package info
- Cover TypeScript types if applicable
- Show both CommonJS and ESM usage
- Include bundler considerations
</special_case_instructions>
```
**For Python Libraries:**
```xml
<special_case_instructions>
- Include PyPI information
- Cover class hierarchies
- Show async patterns if applicable
- Include virtual environment setup
</special_case_instructions>
```
**For CLI Tools:**
```xml
<special_case_instructions>
- Include installation methods (brew, apt, binary)
- Cover all major commands
- Show configuration file formats
- Include shell completion info
</special_case_instructions>
```
**For Frameworks:**
```xml
<special_case_instructions>
- Cover architecture/philosophy
- Include project structure patterns
- Show lifecycle/hooks
- Include plugin/extension systems
</special_case_instructions>
```
After creating the prompt file, present the decision tree:
<detection_logic>
Before presenting options:
1. Check ai_usage_awareness setting (feature flag):
PLUGIN_ROOT=$(jq -r '.plugins."daplug@cruzanstx"[0].installPath' ~/.claude/plugins/installed_plugins.json)
CONFIG_READER="$PLUGIN_ROOT/skills/config-reader/scripts/config.py"
AI_USAGE_AWARENESS=$(python3 "$CONFIG_READER" get ai_usage_awareness)
If setting not found in either location: Ask the user: "Would you like to enable AI usage awareness? This shows quota percentages for each model and suggests alternatives when models are near their limits.
1. Yes - enable usage awareness
2. No - disable it
Choose (1-2): _"
Based on response, set in ~/.claude/CLAUDE.md under <daplug_config>:
ai_usage_awareness: enabled
# or
ai_usage_awareness: disabled

python3 "$CONFIG_READER" set ai_usage_awareness "enabled" --scope user
# or
python3 "$CONFIG_READER" set ai_usage_awareness "disabled" --scope user
If setting is "disabled": Skip step 2, don't show usage info, proceed directly to step 3.
2. Check AI CLI usage (only if ai_usage_awareness is enabled or unset-but-user-said-yes):
npx cclimits --json 2>/dev/null
Parse the JSON to extract usage percentages:
- claude: Check claude.five_hour.used and claude.seven_day.used
- codex: Check codex.primary_window.used and codex.secondary_window.used
- gemini: Check gemini.models.* for each model's usage
- zai: Check zai.token_quota.percentage

Usage thresholds:
- < 70% → Available (show normally)
- 70-90% → Warning (show with ⚠️)
- > 90% → Near limit (show with 🔴)
- 100% or error → Unavailable (show with ❌, skip in recommendations)

3. Read preferred_agent from <daplug_config> in CLAUDE.md:
PREFERRED_AGENT=$(python3 "$CONFIG_READER" get preferred_agent)
PREFERRED_AGENT=${PREFERRED_AGENT:-claude}
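A minimal sketch of applying the usage thresholds to one value from the cclimits JSON (the jq path mirrors the field names described above; treat it as an assumption and adjust if your cclimits output differs):

```bash
USAGE_JSON=$(npx cclimits --json 2>/dev/null)

# Pull one usage percentage, e.g. Claude's 5-hour window
CLAUDE_5H=$(echo "$USAGE_JSON" | jq -r '.claude.five_hour.used // empty')

# Classify per the thresholds above; ${VAR%.*} drops any decimal part
if [ -z "$CLAUDE_5H" ] || [ "${CLAUDE_5H%.*}" -ge 100 ]; then
  STATUS="❌ Unavailable"   # error or at limit: skip in recommendations
elif [ "${CLAUDE_5H%.*}" -gt 90 ]; then
  STATUS="🔴 Near limit"
elif [ "${CLAUDE_5H%.*}" -ge 70 ]; then
  STATUS="⚠️ Warning"
else
  STATUS="Available"
fi
echo "Claude (5h): ${CLAUDE_5H}% - $STATUS"
```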
</detection_logic>
<available_models>
All available models for /daplug:run-prompt --model:

Claude Family: (check: claude.five_hour.used, claude.seven_day.used)
- claude - Claude sub-agent in current context (best for complex reasoning, multi-step tasks)

OpenAI Codex Family: (check: codex.primary_window.used, codex.secondary_window.used)
- codex - GPT-5.1-code (fast, good for straightforward coding)
- codex-high - GPT-5.1-code with higher token limits
- codex-xhigh - GPT-5.1-code with maximum token limits (complex projects)

Google Gemini Family: (check: gemini.models.<model>.used for each)
- gemini - Gemini 3 Flash Preview (default, best coding performance)
- gemini-high - Gemini 2.5 Pro (higher capability)
- gemini-xhigh - Gemini 3 Pro Preview (maximum capability)
- gemini25pro - Gemini 2.5 Pro (stable, capable)
- gemini25flash - Gemini 2.5 Flash (fast, cost-effective)
- gemini25lite - Gemini 2.5 Flash Lite (fastest)
- gemini3flash - Gemini 3 Flash Preview (best coding)
- gemini3pro - Gemini 3 Pro Preview (most capable)

Other Models: (check: zai.token_quota.percentage)
- zai - Z.AI GLM-4.7 (good for Chinese language tasks)
- local - Local model via LMStudio (no quota limits)
- qwen - Qwen via LMStudio (no quota limits)
- devstral - Devstral via LMStudio (no quota limits)
</available_models>

<recommendation_logic>
For llms.txt research tasks, recommend models in this order (based on availability):
| Priority | Model | Reason |
|---|---|---|
| 1 | codex-xhigh | Best for large doc research + writing |
| 2 | gemini25pro | Great at comprehensive research |
| 3 | gemini3pro | Most capable Gemini |
| 4 | claude | Excellent reasoning but uses your quota |
| 5 | zai | Good fallback for documentation |
Recommended flags for llms.txt:
- --worktree - Isolate the work (can continue working on other things)
- --loop - Auto-retry if verification fails (ensures quality)

If preferred_agent is set AND available, show it as first option.
</recommendation_logic>
<presentation>
Target output: {FULL_PATH}
{IF_ADVANCED: Also creating: {LIST_OF_ADVANCED_FILES}}

What's next?
Choose (1-4): _
</presentation>
<action>
If user chooses #1: First, run cclimits to get current quota status:

```bash
npx cclimits --json 2>/dev/null
```

Then present executor options with usage status:
"📊 AI Quota Status: Claude: {X}% (5h) {status} | Codex: {X}% (5h) {status} | Z.AI: {X}% {status}
Gemini models: 3-flash: {X}% {status} | 2.5-pro: {X}% {status} | 3-pro: {X}% {status} 2.5-flash: {X}% {status} | 2.5-lite: {X}% {status}
Execute via:
Claude: {usage status}
1. claude - Claude sub-agent in current context
2. claude --worktree - Claude in an isolated worktree

Codex (OpenAI): {usage status}
3. codex - GPT-5.1-code standard
4. codex-high - higher token limit
5. codex-xhigh - maximum tokens (Recommended for llms.txt)

Gemini (Google): {show each model's usage}
6. gemini (3-flash) - {X}% used
7. gemini25flash - {X}% used
8. gemini25pro - {X}% used - great for research
9. gemini3pro - {X}% used - most capable

Other:
10. zai - {X}% used
11. local/qwen/devstral - Local models (no quota)
[Show recommendation: "Recommended for llms.txt research: codex-xhigh --worktree --loop"] [If preferred_agent is set and available: "Your preferred agent: {preferred_agent} ✅"]
Additional flags (can combine):
- --worktree - Isolated git worktree (recommended: can work on other things)
- --loop - Auto-retry until verification passes (recommended: ensures quality)
- --loop --max-iterations N - Limit loop retries (default: 3)

Choose (1-11), or type model with flags (e.g., 'codex-xhigh --worktree --loop'): _"
Execute based on selection:
Important (llms_txt prompts live outside the current repo):
- Always pass --prompt-file "$LLMS_TXT_DIR/prompts/{NUMBER}-create-llms-txt-{library-name}.md" so the executor reads the correct file from any project.
- If --worktree is requested, run from within $LLMS_TXT_DIR so the worktree is created off the llms_txt repo. If you're not already there, ask the user to confirm running from that repo.

If user selects Claude (option 1):
Invoke via Skill tool: /daplug:run-prompt {NUMBER} --prompt-file "$LLMS_TXT_DIR/prompts/{NUMBER}-create-llms-txt-{library-name}.md"
If user selects Claude worktree (option 2):
Invoke via Skill tool: /daplug:run-prompt {NUMBER} --prompt-file "$LLMS_TXT_DIR/prompts/{NUMBER}-create-llms-txt-{library-name}.md" --worktree
If user selects any other model (options 3-11):
Invoke via Skill tool: /daplug:run-prompt {NUMBER} --prompt-file "$LLMS_TXT_DIR/prompts/{NUMBER}-create-llms-txt-{library-name}.md" --model {selected_model}
(Add --worktree and/or --loop if user requests)
User can also type custom model names with flags:
/daplug:run-prompt {NUMBER} --prompt-file "$LLMS_TXT_DIR/prompts/{NUMBER}-create-llms-txt-{library-name}.md" --model codex-xhigh --worktree --loop
/daplug:run-prompt {NUMBER} --prompt-file "$LLMS_TXT_DIR/prompts/{NUMBER}-create-llms-txt-{library-name}.md" --model gemini25pro --loop
</action>
User: "/daplug:create-llms-txt axios"
Assistant:
1. Discovers $LLMS_TXT_DIR
2. Reads $LLMS_TXT_DIR/AGENTS.md
3. Asks: "Which category does axios belong to?" → User selects "javascript"
4. Asks: "Does this need a subdirectory?" → User selects "No subdirectory"
5. Asks: "Any advanced deep-dive files needed?" → User says "No, just the base file"
6. Creates $LLMS_TXT_DIR/prompts/061-create-llms-txt-axios.md
7. Presents execution options
8. User selects "codex-xhigh --worktree --loop"
9. Invokes: /daplug:run-prompt 061 --prompt-file "$LLMS_TXT_DIR/prompts/061-create-llms-txt-axios.md" --model codex-xhigh --worktree --loop
User: "/daplug:create-llms-txt gemini-cli"
Assistant:
1. Discovers $LLMS_TXT_DIR
2. Asks category → "tools"
3. Asks subdirectory → "No subdirectory"
4. Proposes deep-dive files:
- gemini-cli-configuration-advanced.llms-full.txt
- gemini-cli-tools-advanced.llms-full.txt
- gemini-cli-troubleshooting-advanced.llms-full.txt
5. User approves files 1 and 3
6. Creates $LLMS_TXT_DIR/prompts/062-create-llms-txt-gemini-cli.md (includes advanced file instructions)
7. Presents execution options
8. User selects "gemini3pro --worktree"
9. Invokes: /daplug:run-prompt 062 --prompt-file "$LLMS_TXT_DIR/prompts/062-create-llms-txt-gemini-cli.md" --model gemini3pro --worktree
User: "/daplug:create-llms-txt bubbletea"
Assistant:
1. Discovers $LLMS_TXT_DIR
2. Asks category → "go"
3. Asks subdirectory → "Library/ecosystem subdirectory"
4. Asks subdirectory name → "charmbracelet"
5. Target: $LLMS_TXT_DIR/go/charmbracelet/bubbletea.llms-full.txt
6. Creates $LLMS_TXT_DIR/prompts/063-create-llms-txt-bubbletea.md
7. User runs with codex-xhigh
Tips:
- The generated prompt includes a <verification> section so --loop can work properly
- Use --worktree for isolation (user can continue working on other things)
- Use --loop to ensure quality (auto-retries if verification fails)
- --parallel is generally unnecessary for a single documentation file