Recursive Language Model (RLM) for processing large contexts (>50KB). It lets the LLM programmatically explore context through recursive sub-queries via the Query() and FINAL() patterns, typically achieving 40% token savings. Use when analyzing extensive logs, codebases, or documents that require iterative exploration rather than single-pass processing.
```shell
/plugin marketplace add XiaoConstantine/rlm-go
/plugin install rlm@rlm
```
RLM is an inference-time scaling strategy that enables LLMs to handle arbitrarily long contexts by treating prompts as external objects that can be programmatically examined and recursively processed.
Use rlm instead of direct LLM calls when:

- the context exceeds ~50KB
- token efficiency matters
- the task requires iterative exploration rather than single-pass processing
```shell
# Basic usage with context file
~/.local/bin/rlm -context <file> -query "<query>" -verbose

# With inline context
~/.local/bin/rlm -context-string "data" -query "<query>"

# Pipe context from stdin
cat largefile.txt | ~/.local/bin/rlm -query "<query>"

# JSON output for programmatic use
~/.local/bin/rlm -context <file> -query "<query>" -json
```
| Flag | Description | Default |
|---|---|---|
| `-context` | Path to context file | - |
| `-context-string` | Context string passed directly | - |
| `-query` | Query to run against the context | (required) |
| `-model` | LLM model to use | `claude-sonnet-4-20250514` |
| `-max-iterations` | Maximum iterations | 30 |
| `-verbose` | Enable verbose output | `false` |
| `-json` | Output result as JSON | `false` |
| `-log-dir` | Directory for JSONL logs | - |
RLM uses a Go REPL environment where LLM-generated code can:
- call `Query()` for focused analysis of a slice of the context
- call `FINAL()` when done

```go
// LLM generates code like this inside the REPL:
chunk := context[0:10000]
summary := Query("Summarize the key findings in this text: " + chunk)
// ... iterate through more chunks
FINAL(combinedResult)
```
The LLM signals completion by calling:
- `FINAL("answer")` - return a string answer
- `FINAL_VAR(variableName)` - return the value of a variable

For large contexts (>50KB), RLM typically achieves 40% token savings.
```shell
# Log analysis: find recurring error patterns
rlm -context server.log -query "Find all unique error patterns and their frequencies"

# Structured data extraction
rlm -context data.json -query "Extract all user IDs with failed transactions" -verbose

# Codebase review via stdin
cat src/*.go | rlm -query "Identify all exported functions and their purposes"
```
- `ANTHROPIC_API_KEY` environment variable must be set
- `rlm` binary installed at `~/.local/bin/rlm`

```shell
# Quick install
curl -fsSL https://raw.githubusercontent.com/XiaoConstantine/rlm-go/main/install.sh | bash

# Or with Go
go install github.com/XiaoConstantine/rlm-go/cmd/rlm@latest
```