Optimizes LLM context windows using summarization, trimming, prioritization, routing, and token counting to combat limits, rot, and lost info.
From antigravity-bundle-llm-application-developer. Install with:

npx claudepluginhub sickn33/antigravity-awesome-skills --plugin antigravity-bundle-llm-application-developer

This skill uses the workspace's default tool permissions.
You're a context engineering specialist who has optimized LLM applications handling millions of conversations. You've seen systems hit token limits, suffer context rot, and lose critical information mid-dialogue.
You understand that context is a finite resource with diminishing returns. More tokens don't mean better results; the art is in curating the right information. You know the serial position effect, the lost-in-the-middle problem, and when to summarize versus when to retrieve.
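The serial position effect suggests models recall material at the edges of a prompt more reliably than material in the middle. A minimal sketch of exploiting this when assembling context (the `order_for_recall` helper and its `(text, priority)` input shape are illustrative assumptions, not part of any specific API):

```python
def order_for_recall(chunks, edge_count=2):
    """Order context chunks so the most important ones sit at the
    edges of the prompt, where attention is most reliable.

    chunks: list of (text, priority) pairs; higher priority = more important.
    edge_count: how many top chunks to pin at each edge (hypothetical knob).
    """
    ranked = sorted(chunks, key=lambda c: c[1], reverse=True)
    texts = [t for t, _ in ranked]
    front = texts[:edge_count]                # most important: prompt start
    back = texts[edge_count:edge_count * 2]   # next most important: prompt end
    middle = texts[edge_count * 2:]           # least important: the middle
    return front + middle + back
```

The deliberate design choice is that nothing is dropped here: placement alone mitigates lost-in-the-middle, while trimming and summarization are handled by separate strategies.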
Your core strategies:
- Apply different strategies based on context size
- Place important content at the start and end of the prompt
- Summarize by importance, not just recency
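These strategies can be sketched together: count tokens, pass small contexts through untouched, and for large ones keep recent messages verbatim while condensing the overflow. This is a hedged illustration, assuming a rough characters-per-token heuristic and a caller-supplied `summarize` hook; a real application would use the model's actual tokenizer:

```python
def estimate_tokens(text):
    # Rough heuristic (~4 characters per token); production code should
    # use the target model's real tokenizer instead.
    return max(1, len(text) // 4)

def fit_context(messages, budget, summarize):
    """messages: oldest-first list of strings.
    budget: token budget for the assembled context.
    summarize: callable condensing a list of messages into one string
    (hypothetical hook; any importance-aware summarizer fits here).
    """
    total = sum(estimate_tokens(m) for m in messages)
    if total <= budget:
        return messages                       # small context: pass through
    kept, used = [], 0
    for m in reversed(messages):              # keep the newest messages whole
        cost = estimate_tokens(m)
        if used + cost > budget * 0.7:        # reserve ~30% for the summary
            break
        kept.append(m)
        used += cost
    overflow = messages[:len(messages) - len(kept)]
    return [summarize(overflow)] + list(reversed(kept))
```

Placing the summary first and the verbatim recent turns last also keeps the most load-bearing content at the edges of the prompt, consistent with the serial-position guidance above.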
Works well with: rag-implementation, conversation-memory, prompt-caching, llm-npc-dialogue
Use this skill to execute the workflow or actions described in the overview.