# tokeneconomics
Analyze Claude Code session token usage to flag waste and optimization opportunities. Use when the user asks to "analyze token usage", "check token efficiency", "audit token spend", "tokeneconomics", "reduce token costs", "optimize token usage", "check my burn rate", or mentions token waste, session costs, usage limits, cache efficiency, or conversation sprawl.
Install:

```shell
npx claudepluginhub florianbuetow/claude-code --plugin tokeneconomics
```

This skill uses the workspace's default tool permissions.
Analyze Claude Code session logs to measure token efficiency across six dimensions: cost, cache efficiency, conversation sprawl, model selection, output efficiency, and session patterns. Produces a scored report with risks, opportunities, and actionable recommendations.
From the user's request, determine the scope: the current project only, or all projects globally (`--all`).

Also determine the time window: how many days back to analyze (`--days <N>`).
Run the analysis script with the determined scope:
Project scope:

```shell
python3 "${CLAUDE_PLUGIN_ROOT}/scripts/tokeneconomics.py" --days <N>
```

Global scope:

```shell
python3 "${CLAUDE_PLUGIN_ROOT}/scripts/tokeneconomics.py" --all --days <N>
```
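The two invocations differ only by the `--all` flag, so the scope decision can be sketched as a small shell snippet. `SCOPE`, `DAYS`, and `CMD` are illustrative names for this sketch, not variables the plugin defines:

```shell
# Sketch: build the analysis command from the determined scope and window.
# SCOPE and DAYS are hypothetical; infer them from the user's request.
SCOPE="project"   # or "global" when the user asks about all projects
DAYS=7            # time window inferred from the request

# Keep ${CLAUDE_PLUGIN_ROOT} unexpanded; Claude Code resolves it at run time.
CMD="python3 \"\${CLAUDE_PLUGIN_ROOT}/scripts/tokeneconomics.py\" --days $DAYS"
if [ "$SCOPE" = "global" ]; then
  CMD="$CMD --all"
fi
echo "$CMD"
```

Flag order does not matter to a typical argparse-style CLI, so appending `--all` last is equivalent to the `--all --days <N>` form shown above.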
Display the full report output inline. Do NOT hide it behind a file path.
After the report, highlight the top risks, the biggest optimization opportunities, and the actionable recommendations from the scored report.
If the user wants to dig into a specific dimension or session, offer to:
References:

- references/waste-taxonomy.md
- references/benchmarks.md

Tip: re-run with `--all` to see all projects.