Token optimization best practices for cost-effective Claude Code usage. Automatically applies efficient file reading, command execution, and output handling strategies. Includes model selection guidance (Opus for learning, Sonnet for development/debugging). Prefers bash commands over reading files.
This skill provides token optimization strategies for cost-effective Claude Code usage across all projects. These guidelines help minimize token consumption while maintaining high-quality assistance.
ALWAYS follow these optimization guidelines by default unless the user explicitly requests verbose output or full file contents.
Default assumption: Users prefer efficient, cost-effective assistance.
Use the right model for the task to optimize cost and performance:
Use Opus when: learning a new codebase or exploring unfamiliar concepts.
Use Sonnet (default) for: development, debugging, and routine implementation.
Typical session pattern: learn and plan with Opus, then switch to Sonnet for development and debugging.
Savings: ~50% token cost vs all-Opus usage.
Myth: Having many skills in .claude/skills/ increases token usage.
Reality: Skills use progressive disclosure: Claude sees only the skill descriptions at session start (~155 tokens for 4 skills). Full skill content is loaded only when a skill is activated.
It's safe to symlink multiple skills to a project. Token waste comes from reading large files unnecessarily, not from having skills available.
Use --quiet, -q, --silent flags by default. Only use verbose when user explicitly asks.
Always filter before reading: tail -100, grep -i "error", specific time ranges.
Check git status --short, package.json, requirements.txt before reading large files.
Search for specific content with Grep tool instead of reading entire files.
Use offset and limit parameters. Check file size with wc -l first.
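The filter-first pattern above can be sketched as follows; the log path and contents are hypothetical, created here only so the commands have something to run against:

```shell
# Build a hypothetical log file for illustration
printf 'INFO: step %s\n' $(seq 1 500) > /tmp/app.log
echo "ERROR: disk full" >> /tmp/app.log

# Check the size first to decide how much is worth reading
wc -l < /tmp/app.log

# Filter instead of reading the whole file
tail -100 /tmp/app.log | grep -i "error"
```

One `grep` line reaches the context instead of 501 log lines.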
CRITICAL OPTIMIZATION. Reading files into context costs tokens; bash commands that operate on files without printing their contents cost almost nothing.
| Operation | Wasteful | Efficient |
|---|---|---|
| Copy file | Read + Write | cp source dest |
| Replace text | Read + Edit | sed -i '' 's/old/new/g' file (macOS; use sed -i on Linux) |
| Append | Read + Write | echo "text" >> file |
| Delete lines | Read + Write | sed -i '' '/pattern/d' file (macOS; use sed -i on Linux) |
| Merge files | Read + Read + Write | cat file1 file2 > combined |
| Count lines | Read file | wc -l file |
| Check content | Read file | grep -q "term" file |
When to break this rule: Complex logic, code-aware changes, validation needed, interactive review. For details, see strategies.md.
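A minimal sketch of the efficient column, using a hypothetical scratch file. `sed -i.bak` is used here because it works on both GNU and BSD/macOS sed:

```shell
# Hypothetical scratch file for illustration
printf 'alpha\nbeta\ngamma\n' > /tmp/demo.txt

# Copy without pulling the contents into context
cp /tmp/demo.txt /tmp/demo_copy.txt

# Append without a read-modify-write cycle
echo "delta" >> /tmp/demo_copy.txt

# Replace text in place (-i.bak keeps a backup and is portable)
sed -i.bak 's/beta/BETA/' /tmp/demo_copy.txt

# Verify with a search and a count, not a full read
grep -q "BETA" /tmp/demo_copy.txt && echo "replaced"
wc -l < /tmp/demo_copy.txt
```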
Limit scope: head -50, find . -maxdepth 2, tree -L 2.
Provide structured summaries of directory contents, code structure, command output.
For large text files: head -100, tail -50, or sample from the middle with head -500 | tail -100.
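The middle-sampling trick can be sketched like this, on a hypothetical 1000-line file:

```shell
# Hypothetical large file for illustration
seq 1 1000 > /tmp/big.txt

# Sample lines 401-500: take the first 500 lines, then the last 100 of those
head -500 /tmp/big.txt | tail -100
```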
For JSON, extract specific fields: jq '.metadata', jq 'keys'. For CSV: head -20 to preview, wc -l to count rows.
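A CSV sketch of the same idea, on a hypothetical file; awk extracts one field here much as jq '.metadata' would extract one key from JSON:

```shell
# Hypothetical CSV for illustration
printf 'id,name,score\n1,ada,90\n2,bob,85\n3,cyd,70\n' > /tmp/data.csv

# Structure first: header row and row count, not the whole file
head -1 /tmp/data.csv
wc -l < /tmp/data.csv

# Then extract only the field you need
awk -F, 'NR>1 {print $2}' /tmp/data.csv
```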
Get an overview first (find, grep for classes/functions), read the structure only, search for specific code, and read only the relevant sections.
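Mapping structure before reading can be sketched as follows; the Python module below is hypothetical:

```shell
# Hypothetical Python module for illustration
cat > /tmp/mod.py <<'EOF'
def load_data(path):
    return open(path).read()

class Parser:
    def parse(self, text):
        return text.split()
EOF

# List top-level definitions with line numbers instead of reading the file
grep -n -E '^(def |class )' /tmp/mod.py
```

Two index lines reach the context; the relevant section can then be read by line range.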
Use Task/Explore subagent for broad codebase exploration. Saves 70-80% tokens vs direct multi-file exploration.
Batch 3-5 related searches in parallel. Save results immediately. Document "not found" items.
For detailed strategies, bash patterns, and extensive examples, see strategies.md.
Ask yourself:
Override efficiency rules when:
In learning mode:
In cases 1-3, explain the token cost to the user and offer a filtered view first.
Model Selection (First Priority):
Before ANY file operation, ask yourself: Can a bash command do this without reading the file? Can I filter or limit the output first? Do I need the whole file, or only a specific section?
| Approach | Tokens/Week | Notes |
|---|---|---|
| Wasteful (Read/Edit/Write everything) | 500K | Reading files unnecessarily |
| Moderate (filtered reads only) | 200K | Grep/head/tail usage |
| Efficient (bash commands + filters) | 30-50K | Using cp/sed/awk instead of Read |
Applying these rules reduces costs by 90-95% on average.
This skill automatically applies these optimizations when:
You can always override by saying:
| File | Content | When to load |
|---|---|---|
| strategies.md | Detailed bash command strategies, file operation patterns, sed/awk examples, Jupyter notebook manipulation, safe glob patterns, macOS/Linux compatibility | When implementing specific file operations or need detailed bash patterns |
| learning-mode.md | Strategic file selection, targeted pattern learning workflows, broad repository exploration strategies, repository type identification | When entering learning mode or exploring a new codebase |
| examples.md | Extensive token savings examples with before/after comparisons, targeted learning examples (Galaxy wrappers, API patterns), cost calculations | When demonstrating token savings or learning from examples |
| project-patterns.md | Analysis file organization, task management with TodoWrite, background process management, repository organization, MANIFEST system, efficient file operations | When organizing projects, managing long-running tasks, or setting up navigation patterns |
Core motto: Right model. Right tool. Filter first. Read selectively. Summarize intelligently.
Model selection (highest impact): Opus for learning, Sonnet for development and debugging.
Tool selection (primary optimization): prefer bash commands (cp, sed, grep, wc) over Read/Edit/Write when no code-aware changes are needed.
Secondary rules: filter before reading, limit scope, and summarize output instead of echoing it.
By following these guidelines, users can get 5-10x more value from their Claude subscription while maintaining high-quality assistance.