From devboy
Analyzes Claude Code or agent logs to auto-configure layered-pipeline compression profiles for tools, models, and workflows. Use it to reduce context size and tool costs after an LLM switch, or when heavy tools dominate the context.
```shell
npx claudepluginhub meteora-pro/devboy-tools --plugin devboy
```

This skill uses the workspace's default tool permissions.
Adapt the layered-pipeline (Paper 2 / `crates/plugins/format-pipeline`) to **this** user. The pipeline has four profile axes — tokenizer, LLM, agent/session, data/endpoint — and a horizontal hint policy. Defaults are conservative; this skill mines the user's existing agent logs to pick a tuned profile that matches their actual tool and model mix.
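For orientation, the four axes map onto top-level tables in `pipeline_config.toml`. The skeleton below is a hypothetical sketch: the table and key names follow the markers quoted later in this skill, but the exact defaults may differ from what `tune` writes.

```toml
# Hypothetical skeleton of the four profile axes plus the hint policy;
# key names follow this skill's markers, exact defaults may differ.
[profiles.tokenizer]   # BPE family variants: anthropic_class / openai_o200k / ollama_bpe

[profiles.llm]
active = "auto"        # pinned to the dominant model after tuning

[profiles.agent]
active = "default"     # or file_search_heavy / marathon_refactor

[profiles.data]        # per-endpoint variants, each with a preferred_format

[hints]                # horizontal policy: schema_explainer, inline_format_hint
```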
Use it when a single heavy MCP endpoint (e.g. `mcp__gitlab__get_issues`) is dominating the context.

After running this skill the user has a `~/.config/devboy/pipeline_config.toml` (or `~/.devboy/pipeline_config.toml` for the MCP-server hot path) with:
- `profiles.llm.active` pinned to their dominant model (if ≥80% share);
- `profiles.agent.active` pinned to one of `default` / `file_search_heavy` / `marathon_refactor` based on session length, read-share, and compaction count;
- `profiles.data.variants` extended with placeholder entries for every observed `mcp__*` endpoint, ready for them to set `preferred_format`;
- hints policy left at safe defaults: `schema_explainer` is off (confirmed 0 lift in the 2026-04-25 evaluation), `inline_format_hint` is on only for local Ollama models.

Once telemetry is on (`[telemetry] enabled = true`), live `FormatMetadata` from the MCP path reports the split savings:
- `dedup_savings_pct` — fraction of tokens reclaimed by L0 cross-turn hints;
- `encoder_savings_pct` — fraction reclaimed by L1/L2 encoders, computed only over the L0-miss share of responses;
- `combined_savings_pct` — multiplicative composition (dedup + (1 − dedup) × encoder);
- `baseline` — the fixed baseline against which the percentages are taken (`json_pretty` for typed-domain transforms, `json_compact` for the offline `tune analyze` path);
- `tokenizer` — the BPE family driving the count (`o200k_base` / `cl100k_base` / heuristic).

Quote all three savings figures, plus the baseline and tokenizer — not a single "saved X%" — when reporting back to the user. Per Paper 2 §Savings Accounting, savings without a named baseline and tokenizer are not comparable across systems.
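The multiplicative composition can be sanity-checked in a few lines. This is an illustrative sketch (`combined_savings` is not part of the devboy API); it only encodes the formula stated above, where the encoder figure applies to the share of responses that missed the L0 cache.

```python
def combined_savings(dedup_pct: float, encoder_pct: float) -> float:
    """Compose L0 dedup savings with L1/L2 encoder savings.

    encoder_pct is measured only over the L0-miss share of responses,
    so the two compose multiplicatively:
        combined = dedup + (1 - dedup) * encoder
    """
    return dedup_pct + (1.0 - dedup_pct) * encoder_pct

# e.g. 30% dedup plus 20% encoder savings on the remaining 70%
# gives 0.30 + 0.70 * 0.20 = 0.44, i.e. 44% combined.
```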
```shell
command -v devboy >/dev/null || { echo "install devboy first"; exit 1; }
ls ~/.claude/projects/ >/dev/null 2>&1 || \
  echo "no Claude logs at ~/.claude/projects — pass --input-dir <PATH> instead"
```
```shell
devboy tune from-claude-logs --dry-run
```
The command:
- reads `~/.claude/projects/<project>/*.jsonl` and parses every line;
- counts `mcp__*` endpoint hits, sessions, and `/compact` events;
- proposes `profiles.llm.active`, `profiles.agent.active`, and the new data-profile variants — without touching disk.

Read the summary aloud to the user before applying:
- `# events` — total parsed (more than ~5 000 means a confident fit).
- `# model distribution` — verify the dominant model is what they intend to keep using.
- `# top mcp endpoints` — these are candidates for per-domain templates.

If the user agrees, drop `--dry-run`:
```shell
devboy tune from-claude-logs
```
Output ends with `# wrote → ~/.config/devboy/pipeline_config.toml`. The file is human-readable TOML — encourage the user to commit a project-local copy if they want it under VCS.
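The aggregation behind the summary can be approximated offline. The sketch below is illustrative, not the actual `tune.rs` implementation: it assumes a simplified event shape (top-level `model` and `tool` keys; real Claude Code logs are richer) and applies the ≥80% pinning threshold described above. `summarize_logs` is a hypothetical name.

```python
import json
from collections import Counter
from pathlib import Path

def summarize_logs(log_dir: str) -> dict:
    """Tally events, model distribution, and mcp__* endpoint hits
    from *.jsonl logs under log_dir (simplified event shape assumed)."""
    models: Counter = Counter()
    endpoints: Counter = Counter()
    events = 0
    for path in Path(log_dir).glob("**/*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            event = json.loads(line)
            events += 1
            if event.get("model"):
                models[event["model"]] += 1
            if str(event.get("tool", "")).startswith("mcp__"):
                endpoints[event["tool"]] += 1
    # Pin the dominant model only when it clears the 80% share threshold.
    llm_active = "auto"
    if models:
        dominant, hits = models.most_common(1)[0]
        if hits / sum(models.values()) >= 0.8:
            llm_active = dominant
    return {
        "events": events,
        "models": dict(models),
        "llm_active": llm_active,
        "top_endpoints": [e for e, _ in endpoints.most_common(5)],
    }
```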
```shell
devboy tune show | head -80
```
Look for these markers:
- `[profiles.tokenizer]` has all three variants (`anthropic_class`, `openai_o200k`, `ollama_bpe`).
- `[profiles.llm]` has `active = "<their_model>"`.
- `[profiles.agent]` has `active = "<inferred_variant>"`.
- `[hints.types.schema_explainer]` has `enabled = false`.

If the active LLM is not in the variants list, the `active` value will fall back to `"auto"` — explain to the user that they can hand-add their model:
```toml
[profiles.llm.variants."their-model-name"]
tokenizer = "anthropic_class"   # or openai_o200k / ollama_bpe
prefer_explicit_keys = true
context_window = 100000
max_inline_nested = 128
```
For every `mcp__*` endpoint that landed in `profiles.data.variants` without a `preferred_format`, ask the user what shape that tool returns:
| User says | Set `preferred_format` to |
|---|---|
| "list of issues / PRs / records" | `csv_from_md` |
| "log lines or pipeline output" | `pipeline_deep_mckp` |
| "code diff" | `mr_diff_fence` |
| "single configuration object" | `kv` |
| "free text / prose" | leave unset |
Edit the TOML directly and rerun `devboy tune show` to confirm.
After the first session with the new config, compare token usage in devboy doctor (or the user's billing dashboard). If the LLM accuracy drops on a specific endpoint, the most likely cause is an over-aggressive preferred_format — revert that endpoint to no preference and retry.
Do not:

- pin `profiles.llm.active = "claude-sonnet-4.6"` if the user's actual dominant model is something else. The tokenizer profile drives encoder choice; mismatching it will produce token estimates that are wrong by ~2× on Anthropic-class tokenizers.
- enable `schema_explainer`. It was confirmed to add 0 percentage points of accuracy lift in the 2026-04-25 evaluation. If a user asks for it, point them at §"Encoder Bug Postmortem" in `paper-2-mckp-format-adaptive.md`.
- run `from-claude-logs` against a directory containing other tools' logs without `--project`. The aggregator does not anonymise across project boundaries; mixing projects produces a noisy fit.

| Symptom | Cause | Fix |
|---|---|---|
| claude logs directory not found | `~/.claude/projects` doesn't exist on this machine | Pass `--input-dir <PATH>` to wherever the user keeps their agent logs. |
| no jsonl events parsed — check the path | Path exists but contains no `.jsonl`, or the format is not a Claude Code log | Verify with `ls -R` on the directory. |
| `profiles.llm.active = "auto"` after the run | Dominant model didn't reach 80% share, or it isn't in the built-in variants | Either accept the auto-resolution, or hand-add the model to `profiles.llm.variants` (see step 3). |
| Wrong `profiles.agent.active` | The classifier saw an atypical session window | Override manually: edit `pipeline_config.toml` and set `profiles.agent.active = "default"` (or whatever fits). The next tune run respects the explicit value. |
- `docs/research/paper-2-mckp-format-adaptive.md` §"Configuration Extensibility".
- `crates/plugins/format-pipeline/src/adaptive_config.rs`.
- `crates/plugins/format-pipeline/src/bin/tune.rs`; subcommand `from-claude-logs`.