From devboy
Generates graphic weekly/monthly digests of Claude Code session patterns: biome aquarium (whale/shark/dolphin/fish/shrimp/plankton), 8 archetypes, rhythms, stack palettes, DORA radar (CFR/lead time/pushes), friction (compacts/pivots/subagents). Auto-installs Python backend.
npx claudepluginhub meteora-pro/devboy-tools --plugin devboy

This skill uses the workspace's default tool permissions.
This is a thin baseline skill that delegates the heavy lifting to a
sibling Python backend (bin/analyze-usage and lib/scripts/)
auto-installed on first use into ~/.claude/skills/analyze-usage/. The
backend reads ~/.claude/projects/*.jsonl directly — there is nothing
to push, post or upload.
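There is no server side at all. As a minimal sketch of the kind of read-only local scan the backend performs (session_counts is a hypothetical helper for illustration; the real backend parses jsonl contents, not just file counts):

```shell
# Count session logs per project under a Claude projects root (read-only).
# session_counts is illustrative only, not part of the skill.
session_counts() {
  root=$1
  for d in "$root"/*/; do
    [ -d "$d" ] || continue
    n=$(find "$d" -maxdepth 1 -name '*.jsonl' | wc -l)
    printf '%s %s\n' "$(basename "$d")" "$n"
  done
}

session_counts "$HOME/.claude/projects"
```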
The output is a graphic period digest built on metaphors: a biome aquarium (whale/shark/dolphin/fish/shrimp/plankton), 8 archetypes, rhythms, stack palettes, a DORA radar, and friction metrics.
The Python backend is not embedded in the devboy binary (it would
bloat the release). Check whether it exists; if not, fetch it.
SKILL_DIR="$HOME/.claude/skills/analyze-usage"
if [ ! -x "$SKILL_DIR/bin/analyze-usage" ]; then
echo "Installing analyze-usage backend (~1MB sparse checkout)..."
curl -sSL https://raw.githubusercontent.com/meteora-pro/devboy-tools/main/.claude/skills/analyze-usage/scripts/install.sh | bash
fi
The installer does a git sparse-checkout of .claude/skills/analyze-usage/
only — no full repo clone, no Cargo build. Requires uv for running
Python (https://docs.astral.sh/uv/).
The installer is idempotent: re-running it just refreshes to the latest
main (or pin via REF=v0.22.0 curl ... | bash).
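Since the backend is run through uv, a quick preflight avoids a confusing first failure (the `have` helper is just an illustrative wrapper around command -v):

```shell
# Check that uv is on PATH before invoking the backend.
# `have` is a hypothetical convenience wrapper, not part of the skill.
have() { command -v "$1" >/dev/null 2>&1; }

if ! have uv; then
  echo "uv not found; install it from https://docs.astral.sh/uv/" >&2
fi
```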
result=$(devboy trace begin --skill analyze-usage)
SESSION_DIR=$(echo "$result" | jq -r .session_dir)
SESSION_ID=$(echo "$result" | jq -r .session_id)
This skill is traceable — retro will see when it ran and how
long it took.
Parse the user's intent into ISO dates and a granularity:
| User says | --from | --to | --period |
|---|---|---|---|
| "за прошлую неделю" / "last week" | Monday of previous ISO week | Sunday of previous ISO week | weekly |
| "за этот месяц" / "this month" | 1st of current month | today | monthly |
| "за апрель" / "April" | first day of April | last day of April | monthly |
| "за квартал" / "Q1" / "за 3 месяца" | start month | end month | both |
| explicit dates given | use verbatim | use verbatim | as requested |
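For example, the "last week" row can be computed deterministically from the ISO weekday. This sketch assumes GNU date (Linux); prev_iso_week is a hypothetical helper, and BSD/macOS date would need -v offsets instead:

```shell
# Monday..Sunday of the ISO week preceding a given date (GNU date only).
prev_iso_week() {
  day=$1
  dow=$(date -d "$day" +%u)                # ISO weekday: 1=Mon .. 7=Sun
  date -d "$day -$((dow + 6)) days" +%F    # --from: previous week's Monday
  date -d "$day -$dow days" +%F            # --to:   previous week's Sunday
}

prev_iso_week "$(date +%F)"
```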
If the user did not specify a format, default to text for terminal
output. Prefer html with --open when the user explicitly asks for a
"graphic" / "красивый" / "в браузере" report.
"$SKILL_DIR/bin/analyze-usage" period \
--from "$FROM" --to "$TO" --period "$PERIOD" \
--format "$FORMAT" \
${OUT:+--out "$OUT"} \
${OUT_DIR:+--out-dir "$OUT_DIR"} \
${OPEN:+--open}
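The `${VAR:+...}` expansions make each flag optional: they expand to nothing when the variable is unset or empty, so absent options simply disappear from the command line. A small demonstration (build_args is a hypothetical name):

```shell
# ${VAR:+word} expands to word only when VAR is set and non-empty,
# so optional flags vanish cleanly when their variable is blank.
build_args() {
  OUT=$1
  echo period ${OUT:+--out "$OUT"}
}

build_args ""          # -> "period"
build_args report.md   # -> "period --out report.md"
```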
Available --format values:
| Format | Output destination |
|---|---|
| text | stdout (default) |
| markdown | use with --out FILE.md for Slack/GitHub |
| html | use with --out FILE.html (or auto /tmp/) |
| all | writes .txt + .md + .html into --out-dir |
For long, wallclock-heavy reports (e.g. a full quarter), prefer --out-dir /tmp/<label>
with --format all so the user gets all three artefacts at once.
If the user names a session ID prefix (e.g. "посмотри сессию 2c052d83", i.e. "look at session 2c052d83"):
"$SKILL_DIR/bin/analyze-usage" session 2c052d83
Output: biome / archetype / rhythm / first 5 prompts / bash subcategory mix
Only if the user wants raw data for ad-hoc queries:
"$SKILL_DIR/bin/analyze-usage" pipeline --since 2026-04-01
This runs all 10 extractors sequentially (~4-5 minutes on a full corpus) and writes:
- outputs/raw/*.parquet — original UUIDs/paths/branches (owner-only)
- outputs/anon/*.parquet — anonymized version, shareable
- outputs/llm/*.parquet — Tier 2 LLM-augmented (filled by the agent in step 7)

After regenerating, always audit anon before sharing:
"$SKILL_DIR/bin/analyze-usage" audit
# exits 1 if any raw UUID / abs path / bash command / branch / mcp slug
# leaks into anon parquet
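As a mental model only (not the real audit implementation), the check is essentially pattern-matching for identifying tokens. check_leak below is a hypothetical, reduced version:

```shell
# Hypothetical sketch of an anonymization check: flag anything on stdin
# that looks like a raw session UUID or an absolute home path. The real
# `audit` also checks bash commands, branch names, and mcp slugs.
check_leak() {
  grep -Eq '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}|/home/[A-Za-z0-9_-]+/'
}
```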
Tier 1 (extractors) is pure statistics: deterministic and anonymizable. Tier 2 narratives require the agent itself to read the raw jsonl and write back a summary. The skill never calls an external LLM.
To enqueue Whales/Sharks/Dolphins for narrative:
"$SKILL_DIR/bin/analyze-usage" llm-queue
# writes outputs/llm/_queue.jsonl with one row per session
Then iterate the queue, read each session's jsonl, summarize (≤200
words: name, phases, what was created, pivots, finale), and append rows
to outputs/llm/session_names.parquet. The full schema is documented
in extract_llm_session_names.py.
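Assuming each queue row carries a session_id and a path to its jsonl (these field names are a guess; extract_llm_session_names.py documents the real schema), the iteration loop looks roughly like:

```shell
# Iterate the narrative queue. Field names session_id / jsonl_path are
# assumptions; consult extract_llm_session_names.py for the real schema.
process_queue() {
  queue=$1
  while IFS= read -r row; do
    sid=$(echo "$row" | jq -r .session_id)
    path=$(echo "$row" | jq -r .jsonl_path)
    echo "summarize $sid <- $path"
  done < "$queue"
}
```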
devboy trace end \
--session-dir "$SESSION_DIR" --session-id "$SESSION_ID" \
--skill analyze-usage \
--outcome "$OUTCOME" \
--summary "<period>: <N> sessions, <M> +LOC, CFR <X>"
Sanity checks and guardrails:

- +LOC is the sum across sessions; True CFR = prod_fix / feat; biome counts add up to the total session count.
- Run audit after every pipeline regeneration and before any sharing; it must exit 0.
- Tier 2 narratives land in outputs/llm/session_names.parquet.
- Never share outputs/raw/ — it contains real UUIDs, file paths, branch names, and prompt tokens. Only outputs/anon/ is shareable, and only after audit passes.
- Don't call an LLM from extract_*.py; that's a Tier 2 feature — put it in extract_llm_*.py instead.
- Never write to ~/.claude/projects/ — that's the source data, read-only.
- Everything is uv run-based. If uv is missing, the installer warns but still copies the files; the user must install uv separately (https://docs.astral.sh/uv/).
- This skill only reports; use notify (category 5) if you need delivery. Run timing is already visible to retro.

A full glossary (biome thresholds, archetype rules, rhythm classifier,
stack heuristics, bash subcategory regexes, DORA proxy formulas,
scaling-law findings) lives next to the backend at
~/.claude/skills/analyze-usage/GLOSSARY.md after install.
The architecture (extractor list, library API, anonymization contract,
extension guide for new metrics) is in
~/.claude/skills/analyze-usage/SKILL.md.
# Quick weekly digest:
analyze-usage period --from 2026-04-23 --to 2026-04-30 --period weekly
# Quarter, all formats, open HTML in browser:
analyze-usage period --from 2026-02-01 --to 2026-04-30 --period both \
--format all --out-dir /tmp/q1_2026 --open
# Drill into one Whale:
analyze-usage session 2c052d83
# Generate parquet bundles + audit:
analyze-usage pipeline --since 2026-04-01
analyze-usage audit