# rlm-go

A Go implementation of Recursive Language Models (RLM), an inference-time scaling strategy that lets LLMs handle arbitrarily long contexts by treating the prompt as an external object that can be examined programmatically and processed recursively.
## Overview
RLM-go provides a Go REPL environment where LLM-generated code can:

- Access the context stored as a variable
- Make recursive sub-LLM calls via `Query()` and `QueryBatched()`
- Use standard Go operations for text processing
- Signal completion with `FINAL()` or `FINAL_VAR()`
## Key Design Decisions
Unlike the Python RLM, which uses socket IPC, rlm-go injects functions directly via Yaegi, a Go interpreter. This eliminates:

- Socket server overhead
- Serialization/deserialization
- Process boundaries
The result is roughly 100x lower latency per sub-LLM call than socket IPC.
## Requirements
- Go 1.23 or later (for building from source)
- An LLM API key:
  - `ANTHROPIC_API_KEY` for Claude models (default)
  - `GEMINI_API_KEY` for Gemini models
  - `OPENAI_API_KEY` for OpenAI models
- (Optional) Podman or Docker for isolated sandbox execution
## Supported Models
| Provider | Models | Env Variable |
|---|---|---|
| Anthropic | `claude-sonnet-4-20250514`, `claude-opus-4-20250514`, etc. | `ANTHROPIC_API_KEY` |
| Google | `gemini-3-flash-preview`, `gemini-3-pro-preview` | `GEMINI_API_KEY` |
| OpenAI | `gpt-5`, `gpt-5-mini` | `OPENAI_API_KEY` |
The provider is auto-detected from the model name; Anthropic is the default.
## Installation
### Quick Install (Recommended)

```bash
# Download and install the latest release
curl -fsSL https://raw.githubusercontent.com/XiaoConstantine/rlm-go/main/install.sh | bash
```

This installs the `rlm` binary to `~/.local/bin/rlm`.
### Go Install

```bash
go install github.com/XiaoConstantine/rlm-go/cmd/rlm@latest
```
### From Source

```bash
git clone https://github.com/XiaoConstantine/rlm-go.git
cd rlm-go
go build -o rlm ./cmd/rlm
```
### As a Library

```bash
go get github.com/XiaoConstantine/rlm-go
```
## Claude Code Integration
RLM includes a skill for Claude Code that provides documentation and usage guidance for large context processing.
### Install the Skill

```bash
rlm install-claude-code
```
This creates a skill at `~/.claude/skills/rlm/SKILL.md` that teaches Claude Code:

- When to use RLM (contexts >50KB, token efficiency needed)
- Command usage and options
- The `Query()` and `FINAL()` patterns
- Token efficiency benefits (40% savings on large contexts)
After installation, restart Claude Code to activate the skill.
## CLI Usage

```bash
# Basic usage with Anthropic (default)
rlm -context file.txt -query "Summarize the key points"

# Use Gemini
rlm -model gemini-3-flash-preview -context file.txt -query "Analyze this data"

# Use OpenAI
rlm -model gpt-5-mini -context file.txt -query "Summarize this"

# Verbose output with iteration details
rlm -context logs.json -query "Find all errors" -verbose

# JSON output for programmatic use
rlm -context data.csv -query "Extract anomalies" -json

# Pipe context from stdin
cat largefile.txt | rlm -query "What patterns do you see?"
```
### CLI Options

| Flag | Description | Default |
|---|---|---|
| `-context` | Path to context file | - |
| `-context-string` | Context string directly | - |
| `-query` | Query to run against context | Required |
| `-model` | LLM model to use | `claude-sonnet-4-20250514` |
| `-max-iterations` | Maximum iterations | 30 |
| `-verbose` | Enable verbose output | false |
| `-json` | Output result as JSON | false |
| `-log-dir` | Directory for JSONL logs | - |
## Quick Start (Library)

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/XiaoConstantine/rlm-go/pkg/rlm"
)

func main() {
	// Create your LLM client (implements rlm.LLMClient and repl.LLMClient)
	client := NewAnthropicClient(os.Getenv("ANTHROPIC_API_KEY"), "claude-sonnet-4-20250514")

	// Create RLM instance
	r := rlm.New(client, client,
		rlm.WithMaxIterations(10),
		rlm.WithVerbose(true),
	)

	// Run completion with a long context
	result, err := r.Complete(context.Background(), longDocument, "What are the key findings?")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}
```