From antigravity-awesome-skills
Prunes redundant context, manages token budgets, and enforces ultra-concise, filler-free responses to optimize AI agents in long-running dev workflows and large codebases.
npx claudepluginhub sickn33/antigravity-awesome-skills

This skill uses the workspace's default tool permissions.
Enforces concise responses, parallel tool execution, no redundant work, exploration tracking, and proactive context compression in every Claude Code session. Auto-applies at start.
Teaches context engineering ops: Write to persist, Select relevant info, Compress tokens, Isolate to manage budgets and keep AI coding sessions efficient.
Compresses verbose responses by eliminating filler, hype, hedging, framing, and transitions to save 200-400 tokens per response while preserving clarity. Use for token-efficient, direct AI outputs.
This skill implements a "Gatekeeper" logic to prevent context window bloat and unnecessary token expenditure. It ensures the agent only processes relevant data shards and adheres to an Atomic Precision protocol—delivering functional answers with zero conversational filler. By recursively summarizing state and stripping "bridge phrases," it maximizes the longevity and speed of long-running development workflows.
Step 1: Scan the available data for headers, summaries, and key indicators. Create a "map" of the context rather than injecting the full source. Never pull the entire file into the prompt unless a specific, narrowed fragment is requested.
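As a rough sketch, the "map" described above can be built by extracting only headers from a document instead of its full text. The function and field names below are illustrative, not part of the skill's actual API:

```javascript
// Build a lightweight "context map" from a document's text instead of
// injecting the full source into the prompt. Assumes markdown-style headers.
function buildContextMap(text) {
  const map = [];
  text.split("\n").forEach((line, i) => {
    const m = line.match(/^(#{1,6})\s+(.*)/); // match "#", "##", ... headers
    if (m) map.push({ line: i + 1, depth: m[1].length, title: m[2] });
  });
  return map;
}
```

The agent can then request a specific section by line range rather than loading the whole file.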
Step 2: Calculate a "Safe Response Limit" based on the current context window. Allocate 30% to current logic processing, 20% to immediate output, and 50% to a future-context buffer.
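The 30/20/50 split can be sketched as a simple budget calculator. The ratios come from the text; the function name and return shape are illustrative:

```javascript
// Split a context window into the three budgets described above.
function safeResponseLimit(contextWindowTokens) {
  return {
    logic: Math.floor(contextWindowTokens * 0.30),  // current logic processing
    output: Math.floor(contextWindowTokens * 0.20), // immediate output
    buffer: Math.floor(contextWindowTokens * 0.50), // future context buffer
  };
}
```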
Step 3: Strip all "Bridge Phrases" (e.g., "I've updated the code," "Based on your request," "Sure"). Start the response immediately with the solution or the code block.
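A minimal sketch of this stripping pass, using the bridge phrases quoted above. The skill does not publish a phrase list, so the patterns here are an assumption:

```javascript
// Illustrative list of leading "bridge phrases" to remove.
const BRIDGE_PHRASES = [
  /^sure[,.!]?\s*/i,
  /^i've updated the code[,.]?\s*/i,
  /^based on your request[,.]?\s*/i,
];

// Repeatedly strip bridge phrases from the start of a response so the
// output begins directly with the solution.
function stripBridgePhrases(text) {
  let out = text.trimStart();
  let changed = true;
  while (changed) {
    changed = false;
    for (const re of BRIDGE_PHRASES) {
      if (re.test(out)) {
        out = out.replace(re, "");
        changed = true;
      }
    }
  }
  return out;
}
```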
Step 4: Before executing Step 3, scan for missing critical variables (e.g., specific file names or environment types). If the prompt is too ambiguous, bypass the atomic output and generate exactly one concise question to resolve the blocker.
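This ambiguity gate can be sketched as a simple keyword check. The required-variable list and question format below are illustrative assumptions, not defined by the skill:

```javascript
// Return exactly one concise clarifying question if critical variables
// are missing from the prompt, or null to proceed to atomic output.
function ambiguityGate(prompt, required = ["environment", "file"]) {
  const missing = required.filter((k) => !prompt.toLowerCase().includes(k));
  if (missing.length > 0) {
    return `Specify ${missing[0]}?`; // exactly one question, no preamble
  }
  return null; // no blockers: continue to Step 3
}
```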
Step 5: Summarize the current turn into a "compressed state string" (e.g., [Project: Feasify | State: Auth-Fixed | Remaining-Tasks: 2]) to discard redundant conversational data before the next prompt.
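The compressed state string can be sketched as a serialize/parse pair. The field names mirror the example above; the exact format is an illustrative assumption:

```javascript
// Serialize session state into the compact bracket form shown above.
function compressState(state) {
  return `[Project: ${state.project} | State: ${state.state} | Remaining-Tasks: ${state.remaining}]`;
}

// Recover the state object from a compressed string, or null if malformed.
function parseState(str) {
  const m = str.match(/^\[Project: (.+) \| State: (.+) \| Remaining-Tasks: (\d+)\]$/);
  if (!m) return null;
  return { project: m[1], state: m[2], remaining: Number(m[3]) };
}
```

Carrying only this string between turns lets the agent discard the full conversational history.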
User: "Update the Firebase config to use environment variables."
const firebaseConfig = {
  // Vite exposes VITE_-prefixed variables via import.meta.env, not process.env
  apiKey: import.meta.env.VITE_FIREBASE_API_KEY,
  authDomain: import.meta.env.VITE_FIREBASE_AUTH_DOMAIN,
  projectId: import.meta.env.VITE_FIREBASE_PROJECT_ID,
};
User: "Deploy the function."
"Specify environment: production or staging?"
Problem: The response is so brief that it lacks the context needed for implementation. Solution: Use concise inline code comments (e.g., `// crucial step`) for critical notes instead of separate paragraphs of text.
Problem: The agent loses the overarching goal due to over-compression. Solution: Always pin the "Primary Objective" to the top of every pruned prompt.
@atomic-precision-response - Specifically for removing conversational filler.
@context-sharding - For managing large-scale documentation mapping.