Marketplace for the RLM CLI plugin
npx claudepluginhub rawwerks/rlm-cli

Recursive Language Models (RLM) CLI - enables LLMs to recursively process large contexts by decomposing inputs and calling themselves over parts.
Share bugs, ideas, or general feedback.
CLI wrapper for rlm with directory-as-context, JSON-first output, and self-documenting commands.
Upstream RLM: https://github.com/alexzhang13/rlm
curl -sSL https://raw.githubusercontent.com/rawwerks/rlm-cli/master/install.sh | bash
This clones the repo to ~/.local/share/rlm-cli and symlinks rlm to ~/.local/bin/.
To uninstall:
curl -sSL https://raw.githubusercontent.com/rawwerks/rlm-cli/master/uninstall.sh | bash
Run directly without installing:
uvx --from git+https://github.com/rawwerks/rlm-cli.git rlm --help
pipx install git+https://github.com/rawwerks/rlm-cli.git
git clone --recurse-submodules https://github.com/rawwerks/rlm-cli.git
cd rlm-cli
uv venv
uv pip install -e .
This repo includes a Claude Code plugin with an rlm skill. The skill teaches Claude how to use the rlm CLI for code analysis, diff reviews, and codebase exploration.
Claude Code (Interactive)
/plugin marketplace add rawwerks/rlm-cli
/plugin install rlm@rlm-cli
Claude CLI
claude plugin marketplace add rawwerks/rlm-cli
claude plugin install rlm@rlm-cli
The /rlm skill gives Claude knowledge of the rlm commands (ask, complete, search, index, doctor).

Once installed, Claude can use rlm to analyze code, review diffs, and explore codebases when you ask it to.
Authentication depends on the backend you choose:
openrouter: OPENROUTER_API_KEY
openai: OPENAI_API_KEY
anthropic: ANTHROPIC_API_KEY

Export the appropriate key in your shell environment, for example:
export OPENROUTER_API_KEY=sk-or-...
rlm ask . -q "Summarize this repo" --json
rlm ask https://www.anthropic.com/constitution -q "Summarize this page" --json
The same, run via uvx with an OpenRouter model:
uvx --from git+https://github.com/rawwerks/rlm-cli.git rlm ask https://www.anthropic.com/constitution -q "Summarize Claude's constitution" --backend openrouter --model google/gemini-3-flash-preview --json
rlm ask src/rlm_cli/cli.py -q "Explain the CLI flow" --json
git diff | rlm ask - -q "Review this diff" --json
rlm complete "Write a commit message" --json
rlm complete "Say hello" --backend openrouter --model z-ai/glm-4.7:turbo --json
--json outputs JSON only on stdout.
--output-format text|json sets the output format.
--backend, --model, --environment control the RLM backend.
--max-iterations N sets max REPL iterations (default: 30).
--max-depth N enables recursive RLM calls (default: 1, no recursion).
--max-budget N.NN limits spending in USD (requires a cost-tracking backend like OpenRouter).
--backend-arg/--env-arg/--rlm-arg KEY=VALUE pass extra kwargs.
--backend-json/--env-json/--rlm-json @file.json merge JSON kwargs.
--literal treats inputs as literal text; --path forces filesystem paths.
--markitdown/--no-markitdown toggles URL and non-text conversion to Markdown.
--verbose or --debug enables verbose backend logging.
--inject-file FILE executes Python code between iterations (update variables mid-run).

Pressing Ctrl+C during execution returns the best partial answer as success (exit code 0) instead of raising an error. This is useful when you want to stop waiting but keep what the LLM has produced so far.
rlm ask . -q "Analyze in detail" --max-iterations 20
# Press Ctrl+C after a few iterations
# Output: partial answer with exit_code=0, early_exit=true
In JSON mode, the result includes early_exit and early_exit_reason fields:
{"ok": true, "result": {"response": "...", "early_exit": true, "early_exit_reason": "user_cancelled"}}
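A script can branch on these fields after a run. A minimal sketch in POSIX shell, using python3 for JSON parsing (to avoid extra dependencies) and the sample output above as a stand-in for a real rlm invocation:

```shell
# Stand-in for: result=$(rlm ask . -q "Analyze in detail" --json)
result='{"ok": true, "result": {"response": "...", "early_exit": true, "early_exit_reason": "user_cancelled"}}'

# Extract early_exit_reason with the Python standard library
# (prints an empty string when the run was not an early exit)
reason=$(printf '%s' "$result" | python3 -c 'import json, sys
r = json.load(sys.stdin)["result"]
print(r.get("early_exit_reason", "") if r.get("early_exit") else "")')

echo "early_exit_reason: $reason"   # -> early_exit_reason: user_cancelled
```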
Send SIGUSR1 to request graceful early exit without using Ctrl+C:
# In another terminal
kill -SIGUSR1 <rlm_pid>
This is useful for programmatic control over long-running RLM tasks.
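The signal flow can be sketched without rlm itself: below, a background job that traps SIGUSR1 stands in for the rlm process (rlm's own handler is what returns the partial answer), showing how to capture the PID, signal it, and observe a clean exit:

```shell
# Stand-in for a long rlm run: a background job that traps SIGUSR1
# and exits gracefully, as rlm does when asked for an early exit.
( trap 'echo "graceful exit"; exit 0' USR1; sleep 30 & wait $! ) &
job_pid=$!

sleep 1                     # give the job time to install its trap
kill -SIGUSR1 "$job_pid"    # same signal you would send to <rlm_pid>
wait "$job_pid"
echo "exit status: $?"      # -> exit status: 0, matching rlm's early-exit behavior
```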
The --inject-file option executes Python code between iterations, allowing you to update REPL variables while the RLM is running.
# Create inject file
echo 'focus = "authentication"' > inject.py
# Start RLM with inject file
rlm ask . -q "Analyze based on the 'focus' variable" --inject-file inject.py
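Since inject-file code runs between iterations, the file can be rewritten while rlm is running to steer the analysis mid-run; the variable name below is just the one from the example above:

```shell
# Start a run that reads 'focus' from the inject file
echo 'focus = "authentication"' > inject.py
rlm ask . -q "Analyze based on the 'focus' variable" --inject-file inject.py &

# Later (e.g. from another terminal): redirect the analysis mid-run;
# the new value takes effect at the next iteration
echo 'focus = "error handling"' > inject.py
```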