Analyzes Claude Code conversation logs for token usage, costs, cache hit rates, workflow patterns (skills, agents, hooks), and cost optimizations. Generates an interactive HTML dashboard.
npx claudepluginhub gupsammy/claudest --plugin claude-memory
Parse JSONL conversation files from `~/.claude/projects/*/` into per-turn analytics tables, then analyze both cost-optimization opportunities and Claude Code workflow patterns (skills, agents, hooks).
Weave the resulting insights into conversation at natural moments: after results land, when context is relevant, or on first use. One or two per run, not all at once.
python3 ${CLAUDE_PLUGIN_ROOT}/skills/get-token-insights/scripts/ingest_token_data.py
First run processes all files (~100s for ~2500 files) — warn the user about the wait before running. Incremental runs complete in under 5s. The script populates analytics tables, deploys an interactive dashboard to ~/.claude-memory/dashboard.html (built from templates/dashboard.html), and prints a slim JSON blob to stdout (full data goes to dashboard only).
If the script exits non-zero, report the error and stop.
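For concreteness, here is a minimal sketch of driving this step programmatically, assuming CLAUDE_PLUGIN_ROOT is set in the environment and the script prints a single JSON object to stdout; the error handling mirrors the rule above.

```python
import json
import os
import subprocess

# Resolve the ingest script relative to the plugin root (env var assumed set).
plugin_root = os.environ["CLAUDE_PLUGIN_ROOT"]
script = os.path.join(
    plugin_root, "skills", "get-token-insights", "scripts", "ingest_token_data.py"
)

# First run can take ~100s for ~2500 files; incremental runs finish in under 5s.
result = subprocess.run(["python3", script], capture_output=True, text=True)

# Non-zero exit: report the error and stop, as Step 1 requires.
if result.returncode != 0:
    raise SystemExit(f"ingest_token_data.py failed: {result.stderr.strip()}")

# The script prints a slim JSON blob to stdout; full data goes to the dashboard.
summary = json.loads(result.stdout)
```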
After parsing the JSON stdout from Step 1, construct a personalized prompt for a claude-code-guide agent using the actual data — not generic descriptions. For each of the top 3 insights (by waste_usd), include verbatim: the finding text, root_cause text, waste_usd value, solution.action, and solution.detail. Also include the specific project names, counts, and numbers mentioned in the insight (e.g. "meta-ads-cli: 75 cliffs across 53 sessions") so the agent's response is grounded in the user's real usage patterns.
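A hedged sketch of that prompt construction, reusing the summary parsed in the Step 1 sketch. The keys root_cause, waste_usd, solution.action, and solution.detail come from this step; the exact spelling of the finding key and the prompt framing are assumptions.

```python
# Hypothetical sketch: build the agent prompt from the top 3 insights by waste.
top_insights = sorted(
    summary["insights"], key=lambda i: i["waste_usd"], reverse=True
)[:3]

sections = []
for ins in top_insights:
    sections.append(
        f"Finding: {ins['finding']}\n"
        f"Root cause: {ins['root_cause']}\n"
        f"Estimated waste: ${ins['waste_usd']:.2f}\n"
        f"Suggested action: {ins['solution']['action']}\n"
        f"Detail: {ins['solution']['detail']}"
    )

agent_prompt = (
    "Advise this user based on their actual Claude Code usage:\n\n"
    + "\n\n".join(sections)
)
```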
Spawn the agent with subagent_type: "claude-code-guide" in the foreground (do not use run_in_background). Wait for it to return, then weave its suggestions into the Step 2 analysis.
Capture the JSON stdout from Step 1 as the analysis input. Structure the analysis in two parts (A and B), plus a conditional Part C when trend data exists; a sketch of the assumed JSON shape follows the items below.
State the total spend, session count, date range, and average cost per session in one paragraph.
For each insight from the insights array (sorted by waste_usd), present the finding, root cause, waste_usd, and the recommended solution action and detail.
Compare cost across models. If one model dominates spend, call it out and estimate savings from switching routine tasks to a cheaper model.
List top 3 projects by dollar spend. For the most expensive project, identify what drives the cost.
Summarize which skills are invoked most, error rates per skill, and any skills that appear underused relative to the user's workflow.
Show which subagent types are spawned, how often, and whether model overrides are being used. Flag if subagent_type is frequently omitted (defaults to general-purpose when Explore would suffice).
Identify the slowest hooks by total runtime and average latency. Flag any hooks with high error rates.
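For orientation, here is a hypothetical shape for the slim JSON blob. Only the field names referenced in the steps above are grounded in this document; the nesting, key spellings, and sample values are illustrative assumptions.

```python
# Hypothetical example of the slim JSON blob (illustrative values only).
example_summary = {
    "total_cost_usd": 142.87,
    "session_count": 312,
    "date_range": ["2024-11-01", "2025-01-15"],
    "insights": [
        {
            "finding": "meta-ads-cli: 75 cliffs across 53 sessions",
            "root_cause": "...",
            "waste_usd": 18.40,
            "solution": {"action": "...", "detail": "..."},
        }
    ],
    "trends": {
        "current_window": {"sessions": 41, "cost_usd": 19.20},
        "prior_window": {"sessions": 38, "cost_usd": 24.75},
        "improved": [{"metric": "cache_hit_rate", "pct_change": 12.5}],
        "regressed": [{"metric": "avg_cost_per_session", "pct_change": 8.1}],
        "hook_trends": [{"hook": "lint-on-save", "latency_change_ms": -230}],
    },
}
```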
If the trends object in the JSON output is non-empty, present a week-on-week comparison (a sketch of this traversal follows the items below):
State the current and prior window session counts and total cost.
For each item in trends.improved, state the metric and its percentage change. Explain why it likely improved if you can infer from context (e.g., hook fix, retired skill, CLAUDE.md rule).
For each item in trends.regressed, flag it and suggest what might have caused it.
List any new or retired skills and hooks. For new items, note whether they appear intentional. For retired items, confirm they are no longer needed.
Highlight the hooks with the biggest latency changes (from trends.hook_trends). For hooks that improved significantly, credit the fix. For hooks that got slower, flag for investigation.
If trends is empty or has no current_window, skip Part C and note that not enough historical data exists for comparison yet.
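A minimal sketch of the Part C guard and traversal, continuing from the summary parsed in the Step 1 sketch. Only improved, regressed, hook_trends, and current_window are named by the steps above; per-item keys like metric, pct_change, hook, and latency_change_ms are assumptions.

```python
# Guard: skip Part C when there is no trend data to compare against.
trends = summary.get("trends") or {}

if not trends.get("current_window"):
    print("Not enough historical data for a week-on-week comparison yet.")
else:
    for item in trends.get("improved", []):
        print(f"Improved: {item['metric']} ({item['pct_change']:+.1f}%)")
    for item in trends.get("regressed", []):
        print(f"Regressed: {item['metric']} ({item['pct_change']:+.1f}%)")
    # Largest absolute latency changes first.
    for hook in sorted(
        trends.get("hook_trends", []),
        key=lambda h: abs(h["latency_change_ms"]),
        reverse=True,
    ):
        print(f"Hook {hook['hook']}: {hook['latency_change_ms']:+d} ms")
```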
Present the full analysis as markdown with the sections above. Ask the user if they want to dive deeper into any specific project, skill, or insight.
open ~/.claude-memory/dashboard.html
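Since `open` is macOS-specific, a cross-platform sketch using Python's standard library:

```python
# Open the deployed dashboard in the default browser on any platform.
import webbrowser
from pathlib import Path

webbrowser.open((Path.home() / ".claude-memory" / "dashboard.html").as_uri())
```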
Note the dashboard is available for deeper exploration — Section 6 (Claude Code Ecosystem) has the new skill, agent, and hook charts.