This skill should be used when the user asks to "optimize context", "reduce token costs", "improve context efficiency", "implement KV-cache optimization", "partition context", or mentions context limits, observation masking, context budgeting, or extending effective context capacity.
To install:

```
/plugin marketplace add muratcankoylan/Agent-Skills-for-Context-Engineering
/plugin install muratcankoylan-example-skills@muratcankoylan/Agent-Skills-for-Context-Engineering
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled resources:

- references/optimization_techniques.md
- scripts/compaction.py

Context optimization extends the effective capacity of limited context windows through strategic compression, masking, caching, and partitioning. The goal is not to magically increase context windows but to make better use of available capacity. Effective optimization can double or triple effective context capacity without requiring larger models or longer contexts.
Activate this skill when:

- The user asks to "optimize context", "reduce token costs", or "improve context efficiency"
- The user wants to implement KV-cache optimization or partition context across sub-agents
- The conversation involves context limits, observation masking, context budgeting, or extending effective context capacity
Context optimization extends effective capacity through four primary strategies: compaction (summarizing context near limits), observation masking (replacing verbose outputs with references), KV-cache optimization (reusing cached computations), and context partitioning (splitting work across isolated contexts).
The key insight is that context quality matters more than quantity. Optimization preserves signal while reducing noise. The art lies in selecting what to keep versus what to discard, and when to apply each technique.
## What Is Compaction

Compaction is the practice of summarizing a context window's contents as it approaches its limit, then reinitializing a new context window with that summary. Done well, this distills the context in a high-fidelity manner, enabling the agent to continue with minimal performance degradation.
Compaction typically serves as the first lever in context optimization.
## Compaction Implementation

Compaction works by identifying sections that can be compressed, generating summaries that capture their essential points, and replacing the full content with those summaries. Compression priority, from first to last:

1. Tool outputs: replace with summaries
2. Old conversational turns: summarize the early conversation
3. Retrieved documents: summarize if more recent versions exist
4. System prompt: never compress
## Summary Generation

Effective summaries preserve different elements depending on message type (see the sketch below):

- Tool outputs: preserve key findings, metrics, and conclusions. Remove verbose raw output.
- Conversational turns: preserve key decisions, commitments, and context shifts. Remove filler and back-and-forth.
- Retrieved documents: preserve key facts and claims. Remove supporting evidence and elaboration.
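A minimal sketch of how the priority order and per-type summary rules might be wired together. Every name here is illustrative and may differ from the bundled scripts/compaction.py:

```python
def estimate_tokens(messages: list[dict]) -> int:
    # Crude estimate (~4 characters per token); swap in a real tokenizer.
    return sum(len(m["content"]) for m in messages) // 4

def summarize_message(message: dict) -> str:
    """Type-appropriate summary stub; a real version would call an LLM."""
    head = message["content"][:200]
    if message["type"] == "tool_output":
        return f"[tool output summary] {head}"   # keep findings, metrics, conclusions
    if message["type"] == "turn":
        return f"[turn summary] {head}"          # keep decisions and commitments
    if message["type"] == "retrieved_doc":
        return f"[doc summary] {head}"           # keep key facts and claims
    return message["content"]                    # system prompt passes through

# Compression order mirrors the priority list above; the system prompt is never touched.
PRIORITY = ["tool_output", "turn", "retrieved_doc"]

def compact_context(messages: list[dict], token_budget: int) -> list[dict]:
    for msg_type in PRIORITY:
        if estimate_tokens(messages) <= token_budget:
            break  # back under budget; stop compressing
        for msg in messages:
            if msg["type"] == msg_type and not msg.get("compacted"):
                msg["content"] = summarize_message(msg)
                msg["compacted"] = True
    return messages
```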
## The Observation Problem

Tool outputs can comprise 80%+ of token usage in agent trajectories. Much of this is verbose output that has already served its purpose. Once an agent has used a tool output to make a decision, keeping the full output provides diminishing value while consuming significant context.
Observation masking replaces verbose tool outputs with compact references. The information remains accessible if needed but does not consume context continuously.
## Masking Strategy Selection

Not all observations should be masked equally:

- Never mask: observations critical to the current task, observations from the most recent turn, observations used in active reasoning.
- Consider masking: observations from 3+ turns ago, verbose outputs whose key points are extractable, observations whose purpose has been served.
- Always mask: repeated outputs, boilerplate headers/footers, outputs already summarized in the conversation.
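One way to encode these rules, as a sketch. The observation fields (`turn`, `is_critical`, and so on) are assumptions about how observations are tracked, not a fixed schema:

```python
from enum import Enum

class MaskPolicy(Enum):
    KEEP = "keep"          # never mask
    CONSIDER = "consider"  # candidate for masking
    MASK = "mask"          # always mask

def masking_policy(obs: dict, current_turn: int) -> MaskPolicy:
    """Classify an observation per the never/consider/always rules above."""
    age = current_turn - obs["turn"]
    if obs.get("is_critical") or age == 0 or obs.get("in_active_reasoning"):
        return MaskPolicy.KEEP
    if obs.get("is_repeat") or obs.get("is_boilerplate") or obs.get("already_summarized"):
        return MaskPolicy.MASK
    if age >= 3 or obs.get("purpose_served"):
        return MaskPolicy.CONSIDER
    return MaskPolicy.KEEP
```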
## Understanding KV-Cache

The KV-cache stores the key and value tensors computed during inference, growing linearly with sequence length. Reusing the cache across requests that share an identical prefix avoids recomputation.
Prefix caching reuses KV blocks across requests with identical prefixes using hash-based block matching. This dramatically reduces cost and latency for requests with common prefixes like system prompts.
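A toy illustration of hash-based block matching in the style of engines like vLLM. The block size and hashing scheme are simplified assumptions, not any engine's actual implementation:

```python
import hashlib

BLOCK_SIZE = 16  # tokens per KV block (illustrative)

def block_hashes(token_ids: list[int]) -> list[str]:
    """Hash each full block chained with its prefix, so a block's hash
    uniquely identifies the entire token sequence up to that point."""
    hashes, running = [], hashlib.sha256()
    full = len(token_ids) - len(token_ids) % BLOCK_SIZE
    for start in range(0, full, BLOCK_SIZE):
        block = token_ids[start:start + BLOCK_SIZE]
        running.update(str(block).encode("utf-8"))
        hashes.append(running.hexdigest())
    return hashes

def cached_prefix_blocks(token_ids: list[int], cache: dict) -> int:
    """Count leading blocks whose KV tensors can be reused from the cache."""
    reused = 0
    for h in block_hashes(token_ids):
        if h not in cache:
            break
        reused += 1
    return reused  # only these blocks skip recomputation
```

Because each block's hash chains in everything before it, two requests reuse a block only when their contexts are byte-identical up to that block, which is why stable prefixes matter so much.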
## Cache Optimization Patterns

Optimize for caching by ordering context elements to maximize cache hits: place stable elements first (system prompt, tool definitions), then frequently reused elements, then unique elements last.
Design prompts to maximize cache stability: avoid dynamic content like timestamps, use consistent formatting, keep structure stable across sessions.
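For example, a timestamp embedded in the system prompt invalidates the cached prefix on every request; appending dynamic values after the stable prefix preserves the cache hit. A sketch with illustrative names:

```python
import time

TOOL_DEFINITIONS = "...tool schemas here..."

# Cache-hostile: the timestamp changes every request, so the prefix
# hashes differently each time and no cached blocks are reused.
bad_prompt = f"You are a helpful agent. Time: {time.time()}\n{TOOL_DEFINITIONS}"

# Cache-friendly: the stable prefix is byte-identical across requests;
# dynamic values are appended after it, so the prefix stays cacheable.
STABLE_PREFIX = f"You are a helpful agent.\n{TOOL_DEFINITIONS}"
good_prompt = STABLE_PREFIX + f"\n[request time: {time.time()}]"
```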
## Sub-Agent Partitioning

The most aggressive form of context optimization is partitioning work across sub-agents with isolated contexts. Each sub-agent operates in a clean context focused on its subtask, without carrying accumulated context from other subtasks.
This approach achieves separation of concerns: the detailed search context remains isolated within sub-agents while the coordinator focuses on synthesis and analysis.
## Result Aggregation

Aggregate results from partitioned subtasks by validating that all partitions completed, merging compatible results, and summarizing if the merged result is still too large.
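A sketch of the coordinator side, assuming a `run_subagent` primitive that executes a subtask in a fresh, isolated context (both helpers are hypothetical):

```python
def run_subagent(task: str) -> str | None:
    """Hypothetical primitive: run a subtask in a fresh, isolated context."""
    ...

def summarize_text(text: str, target_tokens: int) -> str:
    """Hypothetical LLM-backed summarizer."""
    ...

def partition_and_aggregate(subtasks: list[str], token_budget: int) -> str:
    # Each sub-agent works in its own clean context; its verbose working
    # state never enters the coordinator's window, only the final result does.
    results = [run_subagent(sub) for sub in subtasks]

    # Validate that every partition completed before merging.
    failed = [sub for sub, r in zip(subtasks, results) if r is None]
    if failed:
        raise RuntimeError(f"Incomplete partitions: {failed}")

    merged = "\n\n".join(results)
    # Summarize if the merged result would still blow the coordinator's budget.
    if len(merged) // 4 > token_budget:  # ~4 chars/token estimate
        merged = summarize_text(merged, target_tokens=token_budget)
    return merged
```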
## Context Budget Allocation

Design explicit context budgets. Allocate tokens to categories: system prompt, tool definitions, retrieved docs, message history, and a reserved buffer. Monitor usage against the budget and trigger optimization when approaching limits.
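A sketch of an explicit budget for a 128K window; the category split is an illustrative assumption, not a recommendation:

```python
CONTEXT_LIMIT = 128_000  # tokens; illustrative window size

BUDGET = {
    "system_prompt":    4_000,
    "tool_definitions": 8_000,
    "retrieved_docs":  40_000,
    "message_history": 60_000,
    "reserved_buffer": 16_000,  # headroom for the next model response
}
assert sum(BUDGET.values()) == CONTEXT_LIMIT

def over_budget(usage: dict[str, int]) -> list[str]:
    """Return the categories currently exceeding their allocation."""
    return [cat for cat, used in usage.items() if used > BUDGET.get(cat, 0)]
```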
## Trigger-Based Optimization

Monitor signals for optimization triggers: token utilization above 80%, degradation indicators, and performance drops. Apply the appropriate optimization technique based on context composition.
When to optimize:

- Token utilization exceeds 80% of the context limit
- Degradation indicators appear
- Task performance drops

What to apply (see the dispatch sketch below):

- Tool outputs dominate the context: observation masking
- Long message history: compaction
- Stable, repeated prefixes: KV-cache optimization
- Independent subtasks: context partitioning
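These triggers might be wired up as follows. The `stats` fields and the helper functions (`mask_observations`, `partition_context`) are assumptions for illustration:

```python
def maybe_optimize(context, stats):
    """Select an optimization based on utilization and context composition."""
    if stats.utilization <= 0.80 and not stats.degradation_detected:
        return context  # healthy; leave the context alone

    if stats.tool_output_fraction > 0.5:
        return mask_observations(context)   # verbose tool outputs dominate
    if stats.has_independent_subtasks:
        return partition_context(context)   # split across sub-agent contexts
    # Default lever: compact down to a comfortable fraction of the limit.
    return compact_context(context, token_budget=int(stats.limit * 0.7))
```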
Compaction should achieve 50-70% token reduction with less than 5% quality degradation. Masking should achieve 60-80% reduction in masked observations. Cache optimization should achieve 70%+ hit rate for stable workloads.
Monitor and iterate on optimization strategies based on measured effectiveness.
Example 1: Compaction Trigger

```python
# Compact when utilization crosses the 80% threshold.
if context_tokens / context_limit > 0.8:
    context = compact_context(context)
```
Example 2: Observation Masking

```python
# Replace an oversized observation with a stored reference plus its key points.
if len(observation) > max_length:
    ref_id = store_observation(observation)  # persist full output for later retrieval
    return f"[Obs:{ref_id} elided. Key: {extract_key(observation)}]"
```
Example 3: Cache-Friendly Ordering
# Stable content first
context = [system_prompt, tool_definitions] # Cacheable
context += [reused_templates] # Reusable
context += [unique_content] # Unique
This skill builds on context-fundamentals and context-degradation.

Internal references: references/optimization_techniques.md, scripts/compaction.py.

Related skills in this collection: context-fundamentals, context-degradation.
Created: 2025-12-20
Last Updated: 2025-12-20
Author: Agent Skills for Context Engineering Contributors
Version: 1.0.0
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.