Dispatch as a haiku agent to run local LLM tasks. Handles the full lifecycle: availability check, model loading, script execution, result collection, and model unloading. Other skills should dispatch this as an agent rather than running LLM bash commands directly.
`npx claudepluginhub alexiolan/craft-skills --plugin craft-skills`

This skill uses the workspace's default tool permissions.
Full lifecycle wrapper for local LLM operations. **Other skills run LLM bash commands directly in the main conversation** — no dedicated agents.
Calling skills run the bash commands directly using the Bash tool. The typical pattern:
1. Check availability and locate the scripts: `CRAFT_SCRIPTS=$(find ...) && curl -s --max-time 2 http://127.0.0.1:1234 ... && echo "LLM_AVAILABLE:$CRAFT_SCRIPTS" || echo "LLM_UNAVAILABLE"`
2. Run the task: `bash "$CRAFT_SCRIPTS/llm-agent.sh" "<task>" <working-dir>` (with `run_in_background: true`)
3. Unload: `bash "$CRAFT_SCRIPTS/llm-unload.sh"`

Each calling skill has the exact commands inline. This skill serves as the detailed reference.
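The three-step pattern above can be sketched as one guarded sequence. This is a minimal sketch: the task string and working directory are illustrative placeholders, and `run_in_background: true` is a caller-side Bash-tool option, not a shell flag.

```shell
# 1. Locate the scripts and probe the server in one availability check.
CRAFT_SCRIPTS=$(find ~/.claude/plugins -name "llm-agent.sh" -path "*/craft-skills/*" \
  -exec dirname {} \; 2>/dev/null | head -1)
if [ -n "$CRAFT_SCRIPTS" ] && curl -s --max-time 2 "${LLM_URL:-http://127.0.0.1:1234}" > /dev/null 2>&1; then
  STATUS="LLM_AVAILABLE:$CRAFT_SCRIPTS"
  # 2. Dispatch the task (illustrative task and working directory).
  bash "$CRAFT_SCRIPTS/llm-agent.sh" "summarize recent changes" .
  # 3. Unload the model once the result is collected.
  bash "$CRAFT_SCRIPTS/llm-unload.sh"
else
  STATUS="LLM_UNAVAILABLE"
fi
echo "$STATUS"
```

If either the scripts or the server are missing, the whole sequence degrades to a single `LLM_UNAVAILABLE` line, which is what the calling skill reports.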
Task types:
- `explore "<task>" <working-directory>` — Autonomous investigation (the LLM reads files itself; saves the most tokens)
- `review <file-path> "<focus>"` — Single-file review with thinking mode
- `analyze "<task>" <file1> <file2> ...` — Multi-file analysis

The user input is: $ARGUMENTS
Example: `/llm-review src/domain/auth/feature/LoginPage.tsx "security"`

The scripts directory is provided at session start (bootstrap context) as `craft-skills scripts directory: <path>`. Use that path if available. Otherwise fall back to:
CRAFT_SCRIPTS=$(find ~/.claude/plugins -name "llm-agent.sh" -path "*/craft-skills/*" -exec dirname {} \; 2>/dev/null | head -1)
If neither is available, return: "LLM scripts not found — craft-skills plugin may not be installed."
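Taken together, the lookup order can be sketched as follows. `CRAFT_SCRIPTS_BOOTSTRAP` is a hypothetical stand-in for the bootstrap-provided path, which in practice arrives as session context rather than an environment variable.

```shell
# Prefer the bootstrap-provided path; fall back to find; else report not installed.
CRAFT_SCRIPTS="${CRAFT_SCRIPTS_BOOTSTRAP:-$(find ~/.claude/plugins -name "llm-agent.sh" \
  -path "*/craft-skills/*" -exec dirname {} \; 2>/dev/null | head -1)}"
if [ -z "$CRAFT_SCRIPTS" ]; then
  echo "LLM scripts not found — craft-skills plugin may not be installed."
fi
```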
curl -s --max-time 2 ${LLM_URL:-http://127.0.0.1:1234} > /dev/null 2>&1 && echo "LLM_AVAILABLE" || echo "LLM_UNAVAILABLE"
If LLM_UNAVAILABLE, return: "LLM_UNAVAILABLE: LM Studio server not running."
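A self-contained sketch of this probe, assuming only that `curl` is installed; any HTTP response within 2 seconds counts as available:

```shell
# Probe the LM Studio server; a refused or timed-out connection means it is not running.
LLM_URL="${LLM_URL:-http://127.0.0.1:1234}"
if curl -s --max-time 2 "$LLM_URL" > /dev/null 2>&1; then
  STATUS="LLM_AVAILABLE"
else
  STATUS="LLM_UNAVAILABLE: LM Studio server not running."
fi
echo "$STATUS"
```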
Note: Scripts auto-detect and fix context length (reload with 64K if loaded with less). No manual check needed.
Choose the script based on task type:
review — file content passed to LLM with thinking mode:
bash "$CRAFT_SCRIPTS/llm-review.sh" <file-path> "<focus>"
analyze — multiple files analyzed together:
bash "$CRAFT_SCRIPTS/llm-analyze.sh" "<task>" <file1> <file2> ...
explore — autonomous agent with file access tools (saves the most tokens):
bash "$CRAFT_SCRIPTS/llm-agent.sh" "<task description>" <working-directory>
If the script returns LLM_ERROR or exits non-zero, report the error — do not retry.
If the script returns LLM_THINKING_OVERFLOW, report: "Model used all tokens on reasoning. File may be too large for review mode — try explore mode instead."
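Sentinel handling might look like the following sketch, where `OUTPUT` is a stand-in for whatever the script actually printed:

```shell
# Map the script's sentinel strings (from this document) to a caller-facing message.
OUTPUT="LLM_THINKING_OVERFLOW"   # stand-in value for illustration
case "$OUTPUT" in
  LLM_ERROR*)
    MSG="report the error and do not retry" ;;
  LLM_THINKING_OVERFLOW*)
    MSG="Model used all tokens on reasoning. File may be too large for review mode — try explore mode instead." ;;
  *)
    MSG="ok" ;;
esac
echo "$MSG"
```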
Before returning findings, filter out known false positives. Then unload the model:
bash "$CRAFT_SCRIPTS/llm-unload.sh"
Skip unloading if the caller passes `Keep loaded: true` — this means another LLM step is expected soon.
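One way to sketch this, assuming the caller's flag is surfaced as a hypothetical `KEEP_LOADED` environment variable:

```shell
# Unload unless the caller asked to keep the model resident.
KEEP_LOADED="${KEEP_LOADED:-false}"
if [ "$KEEP_LOADED" = "true" ]; then
  ACTION="keep"   # another LLM step is expected soon
else
  ACTION="unload"
  # Only attempt the unload when the scripts directory was actually found.
  if [ -n "${CRAFT_SCRIPTS:-}" ]; then
    bash "$CRAFT_SCRIPTS/llm-unload.sh"
  fi
fi
echo "$ACTION"
```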
Return the triaged findings to the caller. Be concise — the caller will triage further against their conversation context.
| Variable | Default | Description |
|---|---|---|
| `LLM_URL` | `http://127.0.0.1:1234` | LM Studio server URL |
| `LLM_MODEL` | `qwen/qwen3.5-35b-a3b` | Model identifier |
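Both knobs are plain environment variables, so overriding them is just an `export` before invoking the scripts. The port and model id below are hypothetical; use values matching your own LM Studio setup.

```shell
# Illustrative overrides (hypothetical port and model id).
export LLM_URL="http://127.0.0.1:8080"
export LLM_MODEL="qwen/qwen3-8b"
echo "$LLM_URL $LLM_MODEL"
```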