Aleph
Loads large local files and data into external memory for searching, recursive reasoning loops, Python execution, and session persistence. Useful for analyzing big docs, repos, or logs via /aleph.
npx claudepluginhub hmbown/aleph --plugin aleph

This skill uses the workspace's default tool permissions.
Based on the Recursive Language Models (RLM) research by Zhang, Kraska, and Khattab (2025), this skill provides strategies for handling tasks that exceed comfortable context limits through programmatic decomposition and recursive self-invocation. Triggers on phrases like "analyze all files", "process this large document", "aggregate information from", "search across the codebase", or tasks involving 10+ files or 50k+ tokens.
TL;DR: Load large data into external memory, search it, reason in loops, and persist across sessions.
This plugin bundles the Aleph MCP launcher and the Aleph skill. It assumes the
aleph executable is already installed and available on PATH.
# Test if Aleph is available
list_contexts()
If that works, the MCP server is running.
Instant pattern:
load_context(content="<paste huge content here>", context_id="doc")
search_context(pattern="keyword", context_id="doc")
finalize(answer="Found X at line Y", context_id="doc")
Note: tool names may appear as mcp__aleph__load_context in your MCP client.
The most common pattern: point at a file, let Aleph load it, and immediately apply RLM reasoning.
load_file(path="path/to/large_file.md", context_id="doc")
search_context(pattern="relevant", context_id="doc")
exec_python(code="""
chunks = chunk(50000)
summaries = sub_query_batch("Summarize:", chunks)
print(summaries)
""", context_id="doc")
finalize(answer="...", context_id="doc")
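The chunk() helper used inside exec_python splits the loaded context into roughly fixed-size pieces before fanning out. A minimal standalone sketch of that behavior (this pure-Python version is illustrative, not Aleph's actual implementation; it assumes chunking by character count with a preference for newline boundaries):

```python
def chunk(text: str, size: int) -> list[str]:
    """Split text into pieces of at most `size` characters,
    preferring to break at newlines so chunks stay readable."""
    pieces = []
    start = 0
    while start < len(text):
        end = min(start + size, len(text))
        # Back up to the last newline inside the window, if any.
        cut = text.rfind("\n", start, end)
        if cut <= start or end == len(text):
            cut = end
        pieces.append(text[start:cut])
        start = cut
    return pieces

doc = "\n".join(f"line {i}" for i in range(100))
chunks = chunk(doc, 80)
print(len(chunks))
```

Joining the pieces reproduces the original text, so no content is lost between chunks.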
When a user says /aleph myfile.py or $aleph myfile.py, load it and
immediately begin this pattern.
Users can request a specific recursion depth: /aleph N file where N
controls strategy.
| Invocation | Depth | Strategy |
|---|---|---|
| /aleph file.py | 1 | Direct analysis: search, peek, exec_python |
| /aleph 2 file.py | 2 | Parallel fan-out with sub_query_batch or sub_query_map |
| /aleph 3 file.py | 3 | Recursive sub_aleph usage |
| /aleph 4 file.py | 4 | Deep recursion with longer timeouts |
For depth 3+, bump timeouts:
configure(sub_query_timeout=300, sandbox_timeout=300)
Dynamic escalation rule:
- Use output="json" for structured results and output="markdown" for human-readable output.
- Combine chunk_context() with peek_context() to navigate quickly.
- Use rg_search() for repo search and semantic_search() for meaning-based lookup.
- load_file() handles PDFs, Word docs, HTML, and compressed logs.
- Persist sessions with save_session() and load_session(). .aleph/ is a safe default.

load_context(content=data_text, context_id="doc")
search_context(pattern="important|keyword|pattern", context_id="doc")
peek_context(start=100, end=150, unit="lines", context_id="doc")
finalize(answer="Analysis complete: ...", confidence="high", context_id="doc")
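search_context takes a regular-expression pattern and reports where it matches. Conceptually it behaves like a line-oriented regex scan; a standalone stand-in using Python's re module (not the real tool, just a sketch of the idea):

```python
import re

doc = "intro\nimportant: budget keyword here\npattern at the end\n"

def search_lines(pattern: str, text: str):
    """Return (line_number, line) pairs for lines matching the regex."""
    rx = re.compile(pattern)
    return [(n, line) for n, line in enumerate(text.splitlines(), 1)
            if rx.search(line)]

hits = search_lines("important|keyword|pattern", doc)
print(hits)
```

Alternation in the pattern means a line counts as one hit even if several keywords match it, which keeps results compact for a follow-up peek.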
load_context(content=doc1, context_id="v1")
load_context(content=doc2, context_id="v2")
diff_contexts(a="v1", b="v2")
finalize(answer="Key differences: ...", context_id="v1")
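diff_contexts compares two loaded contexts and surfaces their differences. The output is conceptually similar to a unified diff; a standalone sketch with Python's difflib (illustrative only, not Aleph's implementation):

```python
import difflib

doc1 = "alpha\nbeta\ngamma\n"
doc2 = "alpha\nBETA\ngamma\ndelta\n"

# Unified diff: '-' lines exist only in v1, '+' lines only in v2.
diff = "".join(difflib.unified_diff(
    doc1.splitlines(keepends=True),
    doc2.splitlines(keepends=True),
    fromfile="v1",
    tofile="v2",
))
print(diff)
```

The point of diffing server-side is that only the changed lines come back through the prompt, not both full documents.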
load_context(content=problem, context_id="analysis")
think(question="What is the core issue?", context_id="analysis")
search_context(pattern="relevant", context_id="analysis")
evaluate_progress(
current_understanding="I found X...",
remaining_questions=["What about Y?"],
confidence_score=0.7,
context_id="analysis"
)
finalize(answer="Conclusion: ...", confidence="high", context_id="analysis")
rg_search(pattern="TODO|FIXME", paths=["."], load_context_id="rg_hits", confirm=true)
search_context(pattern="TODO", context_id="rg_hits")
semantic_search(query="login failure", context_id="doc", top_k=3)
peek_context(start=1200, end=1600, unit="chars", context_id="doc")
exec_python(code="""
chunks = chunk(200000)
summaries = sub_query_batch("Summarize this section:", chunks)
final = sub_query("Synthesize into five bullets:\\n" + "\\n".join(summaries))
print(final)
""", context_id="doc")
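The fan-out above is a map-reduce: summarize each chunk independently, then synthesize the summaries. A standalone sketch of that shape with a stubbed sub_query (in Aleph, sub_query dispatches to a sub-model; the stub here just echoes so the structure is testable):

```python
def sub_query(prompt: str) -> str:
    # Stub standing in for a real sub-model call.
    return f"summary({len(prompt)} chars)"

def map_reduce(chunks, map_prompt, reduce_prompt):
    # Map: one sub-query per chunk, independently.
    summaries = [sub_query(map_prompt + "\n" + c) for c in chunks]
    # Reduce: one final sub-query over all the partial summaries.
    return sub_query(reduce_prompt + "\n" + "\n".join(summaries))

chunks = ["section one " * 50, "section two " * 50]
final = map_reduce(chunks, "Summarize this section:",
                   "Synthesize into five bullets:")
print(final)
```

Because the map step only ever sees one chunk at a time, each sub-query stays well under the context limit regardless of total document size.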
When sub-agents need access to the parent's loaded contexts:
configure(sub_query_share_session=true)
exec_python(code="""
result = sub_query("Search for TODOs in the parent context and summarize them")
print(result)
""", context_id="doc")
This is opt-in. Default is false.
Use recipes when you want reusable, inspectable workflows.
run_recipe(
recipe={
"version": "aleph.recipe.v1",
"context_id": "doc",
"budget": {"max_steps": 8, "max_sub_queries": 5},
"steps": [
{"op": "search", "pattern": "ERROR|WARN", "max_results": 10},
{"op": "map_sub_query", "prompt": "Root cause?", "context_field": "context"},
{"op": "aggregate", "prompt": "Synthesize causes"},
{"op": "finalize"}
]
}
)
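A recipe is plain data, which means it can be inspected and validated before it runs. A sketch of a shape check over the recipe dict above (field names are taken from the example; the specific validation rules here are illustrative assumptions, not Aleph's documented schema):

```python
def validate_recipe(recipe: dict) -> list[str]:
    """Return a list of problems; an empty list means it looks well-formed."""
    problems = []
    if recipe.get("version") != "aleph.recipe.v1":
        problems.append("unknown or missing version")
    steps = recipe.get("steps", [])
    if not steps:
        problems.append("no steps")
    elif steps[-1].get("op") != "finalize":
        problems.append("last step should be finalize")
    budget = recipe.get("budget", {})
    if len(steps) > budget.get("max_steps", float("inf")):
        problems.append("more steps than budget allows")
    return problems

recipe = {
    "version": "aleph.recipe.v1",
    "context_id": "doc",
    "budget": {"max_steps": 8, "max_sub_queries": 5},
    "steps": [
        {"op": "search", "pattern": "ERROR|WARN", "max_results": 10},
        {"op": "aggregate", "prompt": "Synthesize causes"},
        {"op": "finalize"},
    ],
}
print(validate_recipe(recipe))
```

Catching a malformed recipe before execution is cheaper than burning sub-query budget on a run that was never going to finalize.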
run_recipe_code(
context_id="doc",
code="""
recipe = (
Recipe(context_id='doc', max_sub_queries=5)
.search('ERROR|WARN', max_results=10)
.map_sub_query('Root cause?', context_field='context')
.aggregate('Synthesize causes')
.finalize()
)
"""
)
The safest default pattern is to prefer server-side computation over pulling raw context back through the prompt. Do not treat get_variable("ctx") as the default path.
Repository: https://github.com/Hmbown/aleph
See also: README.md, MCP_SETUP.md, docs/CONFIGURATION.md