Token-efficient code analysis - understand codebases without burning tokens
Analyzes codebases efficiently with 95% token savings using multi-layer AST, call graph, and data flow analysis.
```
/plugin marketplace add sethdford/claude-toolkit
/plugin install tldr@claude-toolkit
```

Usage: `tldr <subcommand> [options]`. Analyze code efficiently with 95% token savings.
| Command | Description |
|---|---|
| `tldr tree [path]` | File tree structure |
| `tldr structure [path] --lang <lang>` | Code structure (functions, classes) |
| `tldr search <pattern> [path]` | Search code |
| `tldr context <entry> --project <path>` | LLM-ready context for a function |
| `tldr cfg <file> <function>` | Control flow graph |
| `tldr dfg <file> <function>` | Data flow graph |
| `tldr slice <file> <function> <line>` | Program slice (what affects a line) |
| `tldr impact <function> [path]` | Reverse call graph (who calls this) |
| `tldr dead [path]` | Find unreachable code |
| `tldr arch [path]` | Detect architectural layers |
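`search` is the one subcommand not shown in the examples further down, so here is a minimal sketch; the pattern `process_data` and the `src/` path are placeholders, not anything the tool requires:

```bash
# Look up a symbol across the codebase before pulling in heavier analyses
tldr search process_data src/
```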
```
Layer 1: AST        → ~500 tokens  (signatures, imports, classes)
Layer 2: Call Graph → +440 tokens  (cross-file relationships)
Layer 3: CFG        → +110 tokens  (control flow, complexity)
Layer 4: DFG        → +130 tokens  (data flow, variables)
Layer 5: PDG        → +150 tokens  (program slicing)
────────────────────────────────────────────────────────
Total: ~1,200 tokens vs 23,000 raw = 95% savings
```
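Each layer roughly corresponds to one of the subcommands above; a hedged sketch of that mapping, using `src/pipeline.py`, `process_data`, and line `42` as placeholder names:

```bash
tldr structure src/ --lang python            # Layer 1: AST (signatures, classes)
tldr impact process_data src/                # Layer 2: call graph (who calls this)
tldr cfg src/pipeline.py process_data        # Layer 3: control flow graph
tldr dfg src/pipeline.py process_data        # Layer 4: data flow graph
tldr slice src/pipeline.py process_data 42   # Layer 5: program slice for line 42
```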
```bash
# First, check if tldr is installed
which tldr || echo "Install with: pip install tldr-code"

# Explore a codebase
tldr tree src/
tldr structure src/ --lang python

# Get context for a function
tldr context main --project src/ --depth 2

# Before refactoring - check impact
tldr impact process_data src/
```
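To round out the workflow, the cleanup-oriented subcommands from the table can be run the same way; `src/` is again a placeholder path:

```bash
# Find unreachable code before deleting anything
tldr dead src/

# Detect architectural layers (and spot layer violations)
tldr arch src/
```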
The tldr CLI must be installed separately:

```bash
pip install tldr-code

# Or via uv:
uv pip install tldr-code
```
- `context` for LLM-ready summaries
- `slice` to find what affects a buggy line (see the sketch below)
- `impact` before changing functions
- `arch` to detect layer violations
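Putting those tips together, one possible debugging pass, with hypothetical names (`src/api/handlers.py`, `handle_request`, line `120`):

```bash
# What feeds into the buggy line?
tldr slice src/api/handlers.py handle_request 120

# Who would be affected if handle_request changes?
tldr impact handle_request src/

# Sanity-check the layering before and after the fix
tldr arch src/
```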