From llm-externalizer
Use when offloading file analysis to external LLMs. Trigger with "analyze files", "scan folder", "check imports", "compare files", "batch check".
Install: `npx claudepluginhub emasoft/emasoft-plugins --plugin llm-externalizer`

Arguments: `[task-description] [<file-or-folder-paths>...]`

This skill uses the workspace's default tool permissions.
Offload bounded analysis tasks to cheaper external LLMs via MCP tools (`mcp__llm-externalizer__*`). Supports local backends (LM Studio, Ollama) and remote (OpenRouter with ensemble mode).
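A minimal call might look like the following sketch. The `chat` tool name and the `input_files_paths`/`instructions` parameters are taken from examples elsewhere in this document; the exact parameter set accepted by each tool may differ.

```json
{"tool": "chat",
 "input_files_paths": ["/path/to/utils.py"],
 "instructions": "Summarize what this module exports and flag any dead code."}
```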
(see the llm-externalizer-config skill)

Copy this checklist and track your progress:
Pass `input_files_paths` or `folder_path` — never paste file content into `instructions`.

Use when you need to analyze files without consuming orchestrator context, scan a codebase, compare files, or check imports. Do NOT use for surgical edits or tasks needing real-time tool access.
`.md` files are EXCLUDED by default; pass instructions for a semantic search to include them.

Structural validation → CPV / `claude plugin validate .`, not the LLM. Checking compliance against a spec → `check_against_specs` with an explicit spec. "Already implemented?" → `search_existing_implementations`.

READ THIS — common misconception: `answer_mode` controls how reports are written to disk, NOT how many files the LLM sees per request. The LLM never sees the whole set at once. Files are batched into requests of typically 1–5 files each (FFD bin packing into ~400 KB batches, or one group per request when `---GROUP:id---` markers are supplied). In ensemble mode each file gets 3 responses from 3 LLMs; in free and local mode each file gets 1 response.
For cross-file analysis across a whole codebase, use `search_existing_implementations` — each file is compared against a REFERENCE.
Reports are `.md` files in `reports_dev/llm_externalizer/`.
`answer_mode: 0` — ONE REPORT PER FILE. One `.md` per input file; the MCP splits each batch response by `## File:` markers. Best for per-file fan-out.
`answer_mode: 1` — ONE REPORT PER GROUP. One `.md` per group. Without `---GROUP:id---` markers the MCP auto-groups by subfolder → extension → namespace → basename → shared imports (max 1 MB per group). Best for per-module review.
`answer_mode: 2` — SINGLE REPORT. Everything merged into one `.md`. Best for a top-level audit summary.
Defaults: `scan_folder=0`; `chat`/`code_task`/`check_*`=2; `search_existing_implementations=2`.
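Putting the modes together, a per-module review over a folder might override the `scan_folder` default of 0 like this. This is a sketch: the field names follow the examples in this document, but passing `answer_mode` as a top-level parameter is an assumption.

```json
{"tool": "scan_folder",
 "folder_path": "/path/to/src",
 "answer_mode": 1,
 "instructions": "Review each module for error-handling gaps."}
```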
| Error | Cause | Resolution |
|---|---|---|
| Timeout | Long reasoning on large file | Automatic — reasoning models get extended time |
| Auth error | API key not set | Run discover; set env var |
| Empty response | File exceeds model limit | Split files or change model |
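For the auth-error row, resolution typically means exporting the backend's API key before retrying. A sketch assuming OpenRouter is the remote backend — the exact variable name depends on your configured backend, so check the discover output.

```shell
# Hypothetical variable name — confirm it against your backend's discover output
export OPENROUTER_API_KEY="your-key-here"
```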
```json
{"tool": "code_task", "folder_path": "/path/to/src", "extensions": [".ts"],
 "instructions": "Find bugs. Node.js Express API."}
```

```json
{"tool": "compare_files", "input_files_paths": ["/path/old.ts", "/path/new.ts"],
 "instructions": "Focus on API breaking changes"}
```

```json
{"tool": "search_existing_implementations",
 "feature_description": "rate-limited HTTP client with retry backoff",
 "folder_path": "/path/to/codebase",
 "source_files": ["/path/to/pr/http_client.py"]}
```