# rlm
Processes large contexts (>50KB) with recursive LLM sub-queries via Query() and FINAL() in a Go REPL, achieving roughly 40% token savings on complex analysis tasks.
Install via the plugin hub:

```bash
npx claudepluginhub xiaoconstantine/rlm-go --plugin rlm
```
**RLM** is an inference-time scaling strategy that enables LLMs to handle arbitrarily long contexts by treating prompts as external objects that can be programmatically examined and recursively processed.
Use rlm instead of direct LLM calls when:

- The context is large (>50KB) and would crowd out the model's working context
- The task needs chunk-by-chunk analysis, with partial results combined at the end
- Token cost matters and the ~40% savings on large contexts is worth the extra iterations
```bash
# Basic usage with a context file
~/.local/bin/rlm -context <file> -query "<query>" -verbose

# With inline context
~/.local/bin/rlm -context-string "data" -query "<query>"

# Pipe context from stdin
cat largefile.txt | ~/.local/bin/rlm -query "<query>"

# JSON output for programmatic use
~/.local/bin/rlm -context <file> -query "<query>" -json
```
| Flag | Description | Default |
|---|---|---|
| `-context` | Path to context file | - |
| `-context-string` | Context string passed directly | - |
| `-query` | Query to run against the context | (required) |
| `-model` | LLM model to use | `claude-sonnet-4-20250514` |
| `-max-iterations` | Maximum iterations | `30` |
| `-verbose` | Enable verbose output | `false` |
| `-json` | Output result as JSON | `false` |
| `-log-dir` | Directory for JSONL logs | - |
RLM uses a Go REPL environment where LLM-generated code can:
- `Query()` for focused analysis
- `FINAL()` when done

```go
// LLM generates code like this inside the REPL:
chunk := context[0:10000]
summary := Query("Summarize the key findings in this text: " + chunk)
// ... iterate through more chunks
FINAL(combinedResult)
```
The LLM signals completion by calling:
- `FINAL("answer")` - return a string answer
- `FINAL_VAR(variableName)` - return the value of a variable

For large contexts (>50KB), RLM typically achieves 40% token savings by:

- treating the prompt as an external object held in the REPL rather than in the model's context window
- sending each `Query()` sub-call only the chunk it needs
```bash
# Log analysis
rlm -context server.log -query "Find all unique error patterns and their frequencies"

# Structured data extraction
rlm -context data.json -query "Extract all user IDs with failed transactions" -verbose

# Source code analysis over stdin
cat src/*.go | rlm -query "Identify all exported functions and their purposes"
```
Requirements: the `ANTHROPIC_API_KEY` environment variable must be set, and the binary is installed to `~/.local/bin/rlm`.

```bash
# Quick install
curl -fsSL https://raw.githubusercontent.com/XiaoConstantine/rlm-go/main/install.sh | bash

# Or with Go
go install github.com/XiaoConstantine/rlm-go/cmd/rlm@latest
```