Recursive Language Model factory — distill repository files into semantic summaries using Ollama for instant context retrieval
npx claudepluginhub richfrem/agent-plugins-skills --plugin rlm-factory
Audit RLM cache coverage — compare ledger against filesystem (offline)
Remove stale and orphaned entries from the RLM Summary Ledger (offline)
High-speed RLM distillation of project documentation using agentic intelligence.
Distill repository files into the RLM Summary Ledger using agentic intelligence (fast) or Ollama (offline batch)
Search the RLM cache for file summaries by keyword (offline — no Ollama needed)
../skills/rlm-init/SKILL.md
../skills/rlm-search/SKILL.md
../skills/rlm-cleanup-agent/SKILL.md
Removes stale and orphaned entries from the RLM Summary Ledger. Use after files are deleted, renamed, or moved to keep the ledger in sync with the filesystem. <example> user: "Clean up the RLM cache after I renamed some files" assistant: "I'll use rlm-cleanup to remove stale entries from the ledger." </example> <example> user: "The RLM ledger has entries for files that no longer exist" assistant: "I'll run rlm-cleanup to prune orphaned entries." </example>
../skills/rlm-distill-agent/SKILL.md
Distills uncached files into the RLM Summary Ledger. You (the agent) ARE the distillation engine. Read each file deeply, write a high-quality 1-sentence summary, inject it via inject_summary.py. Use when files are missing from the ledger and need to be summarized. <example> user: "Summarize these new plugin files into the RLM ledger" assistant: "I'll use rlm-distill to read and summarize each file into the cache." </example> <example> user: "The RLM ledger is missing 40 files -- fill the gaps" assistant: "I'll use rlm-distill to process the missing files." </example>
Start and verify the local Ollama LLM server. Use when Ollama is needed for RLM distillation, seal snapshots, embeddings, or any local LLM inference — and it's not already running. Checks if Ollama is running, starts it if not, and verifies the health endpoint.
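The check-then-start-then-verify flow described above could be sketched as follows. This is a minimal sketch, not the skill's actual implementation: it assumes Ollama's default port 11434 and the standard `ollama serve` CLI command.

```python
import subprocess
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"  # assumed default Ollama port


def ollama_running(url: str = OLLAMA_URL, timeout: float = 2.0) -> bool:
    """Return True if the Ollama server answers on its root endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def ensure_ollama() -> bool:
    """Start `ollama serve` in the background if the server is not up,
    then re-check the health endpoint."""
    if ollama_running():
        return True
    subprocess.Popen(
        ["ollama", "serve"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return ollama_running()
```

A real verification step would typically retry with a short backoff, since the server needs a moment to bind its port after launch.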
Removes stale and orphaned entries from the RLM Summary Ledger. Use after files are deleted, renamed, or moved to keep the ledger in sync with the filesystem. <example> user: "Clean up the RLM cache after I renamed some files" assistant: "I'll use rlm-cleanup-agent to remove stale entries from the ledger." </example> <example> user: "The RLM ledger has entries for files that no longer exist" assistant: "I'll run rlm-cleanup-agent to prune orphaned entries." </example>
Knowledge Curator agent skill for the RLM Factory. Auto-invoked when tasks involve distilling code summaries, querying the semantic ledger, auditing cache coverage, or maintaining RLM hygiene. Supports both Ollama-based batch distillation and agent-powered direct summarization. V2 enforces Concurrency Safety constraints.
Distills uncached files into the Recursive Language Model (RLM) Summary Ledger. You (the agent) ARE the distillation engine. Read each file deeply, write a high-quality 1-sentence summary, inject it via inject_summary.py. The point: by reading the full file once and producing a great summary once, you avoid re-reading the file every time you need to know what the script does or what the file contains — in most cases the RLM summary is sufficient. Use when files are missing from the ledger and need to be summarized. <example> user: "Summarize these new plugin files into the RLM ledger" assistant: "I'll use rlm-distill-agent to read and summarize each file into the cache." </example> <example> user: "The RLM ledger is missing 40 files -- fill the gaps" assistant: "I'll use rlm-distill-agent to process the missing files." </example>
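The read-summarize-inject loop above could look roughly like this. It is a hedged sketch: the `summarize` and `inject` callables stand in for the agent's own deep read and for a call to inject_summary.py, whose CLI flags (`--file`, `--summary`) are assumptions, not documented behavior.

```python
import subprocess
from pathlib import Path


def distill(paths, summarize, inject):
    """For each uncached file: read it fully, produce a one-sentence
    summary, then hand (path, summary) to `inject`. Returns the
    (path, summary) pairs that were written to the ledger."""
    results = []
    for path in paths:
        text = Path(path).read_text(encoding="utf-8")
        summary = summarize(text)  # the agent's one-sentence distillation
        inject(path, summary)
        results.append((str(path), summary))
    return results


def inject_via_script(path, summary, script="scripts/inject_summary.py"):
    """Shell out to the ledger's injection script (path and flags assumed)."""
    subprocess.run(
        ["python", script, "--file", str(path), "--summary", summary],
        check=True,
    )
```

Usage: `distill(missing_files, my_summarizer, inject_via_script)` fills the ledger gaps in one pass.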
Distill repository files into the RLM Summary Ledger using agentic intelligence (fast) or Ollama (offline batch)
Interactive RLM cache initialization. Use when: setting up a new project's semantic cache for the first time, or adding a new cache profile. Walks the user through folder selection, extension config, manifest creation, and first distillation pass.
3-Phase Knowledge Search strategy for the RLM Factory ecosystem. Auto-invoked when tasks involve finding code, documentation, or architecture context in the repository. Enforces the optimal search order: RLM Summary Scan (O(1)) -> Vector DB Semantic Search -> Grep/Exact Match. Never skip phases.
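The enforced phase ordering above reduces to a simple first-hit fallback chain. This sketch uses three hypothetical callables in place of the real backends (ledger scan, vector DB, grep):

```python
def three_phase_search(query, rlm_scan, vector_search, grep_search):
    """Run the phases in the mandated order and stop at the first
    one that yields hits: O(1) RLM Summary Scan, then Vector DB
    semantic search, then grep/exact match. Never skip a phase."""
    for phase in (rlm_scan, vector_search, grep_search):
        hits = phase(query)
        if hits:
            return hits
    return []
```

The design choice is cost ordering: each later phase is slower but more exhaustive, so the cheap ledger scan short-circuits most queries.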
Comprehensive UI/UX design plugin for mobile (iOS, Android, React Native) and web applications with design systems, accessibility, and modern patterns
Cloud architecture design for AWS/Azure/GCP, Kubernetes cluster configuration, Terraform infrastructure-as-code, hybrid cloud networking, and multi-cloud cost optimization
Application profiling, performance optimization, and observability for frontend and backend systems
Persistent memory system for Claude Code - seamlessly preserve context across sessions