Compacts KV cache of orchestrator trajectories for task-relevant sharing with workers in multi-agent LLM systems, using Attention Matching to cut token costs without summarization.
npx claudepluginhub guanyang/antigravity-skills

This skill uses the workspace's default tool permissions.
Hierarchical multi-agent systems often pay for the same context twice. The orchestrator accumulates a long reasoning trajectory, but each worker usually receives only a narrow text handoff such as a subtask prompt plus raw document slices. Passing the full trajectory fixes coverage but drives token cost up on every worker call. Summarization introduces latency and information loss. Retrieval helps with document access but does not preserve the orchestrator's evolving reasoning state.
Latent Briefing addresses this by sharing memory at the representation level rather than the text level. The core idea is to compact the orchestrator trajectory in the worker model's KV cache, keeping positions that are most relevant to the current worker task. The method builds on Attention Matching (AM) KV cache compaction and adapts it for inference-time multi-agent handoff with task-guided queries, a shared token mask across heads, and robust thresholding.
Activate this skill when:
The token explosion pattern. In recursive or REPL-style systems, the orchestrator repeatedly calls a worker to inspect evidence, verify hypotheses, or answer subquestions. The orchestrator's trajectory grows with partial conclusions, dead ends, tool output, and prior worker responses. If that trajectory is passed in full on every worker call, cost compounds quickly.
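The compounding can be made concrete with a toy cost model. Assuming, purely for illustration, that each call appends a constant k trajectory tokens and the full trajectory is re-sent on every call, worker input cost grows quadratically in the number of calls:

```python
# Toy cost model: full-trajectory handoff vs. a compacted handoff.
# Assumes a constant k new trajectory tokens per call (a simplification).

def full_handoff_tokens(n_calls: int, k: int) -> int:
    # Call i re-sends the whole trajectory of i * k tokens.
    return sum(i * k for i in range(1, n_calls + 1))

def compacted_handoff_tokens(n_calls: int, k: int, keep_ratio: float) -> int:
    # Same trajectory, but only keep_ratio of positions survive compaction.
    return sum(int(i * k * keep_ratio) for i in range(1, n_calls + 1))

full = full_handoff_tokens(20, 500)            # 20 calls, 500 tokens/call
kept = compacted_handoff_tokens(20, 500, 0.25)
print(full, kept)  # 105000 26250
```

Even with this crude model, the quadratic growth of the full handoff is visible after a handful of calls, which is why the handoff mechanism, not the documents themselves, becomes the cost driver.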
Representation-level sharing. Instead of summarizing the trajectory into natural language, the system operates on the worker model's KV cache. It retains the positions that the worker would attend to for the current task and drops the rest. This is more specific than ordinary prefix caching: prefix caching reuses identical prefixes, while Latent Briefing also performs task-conditioned selective retention inside the reused trajectory.
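Mechanically, "selective retention inside the reused trajectory" amounts to applying one boolean keep-mask over sequence positions to the cached key and value tensors. This is a minimal sketch with an illustrative cache layout of [layers, heads, seq_len, head_dim]; names and shapes are assumptions, not the skill's actual API:

```python
import numpy as np

def compact_cache(keys: np.ndarray, values: np.ndarray,
                  keep: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Drop trajectory positions where keep[pos] is False.

    keys, values: [layers, heads, seq_len, head_dim]
    keep:         [seq_len] boolean; one SHARED mask for all layers and heads,
                  so the compacted cache stays a dense rectangular tensor.
    """
    return keys[:, :, keep, :], values[:, :, keep, :]

rng = np.random.default_rng(0)
K = rng.normal(size=(2, 4, 10, 8))   # toy: 2 layers, 4 heads, 10 positions
V = rng.normal(size=(2, 4, 10, 8))
keep = np.array([True, False, True, True, False,
                 False, True, False, True, False])
K2, V2 = compact_cache(K, V, keep)
print(K2.shape)  # (2, 4, 5, 8)
```

The shared mask is what distinguishes this from per-head eviction schemes: every head keeps the same positions, so downstream attention kernels see an ordinary, shorter sequence.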
Attention Matching as the compaction engine. AM seeks a smaller cache whose attention outputs approximate the full cache. Latent Briefing adapts AM for multi-agent inference by changing the scoring signal and batching strategy:
- Task-guided query vectors drawn from the current worker task prompt rather than generic context samples.
- One shared token mask aggregated across layers and heads.
- Robust thresholding at median + tau * MAD rather than a fixed top-k per head.

Reference result shape. The public write-up reports substantial worker-token reduction, material total-token savings, and low-single-digit-second compaction overhead on long-document QA workloads. Treat these numbers as workload-specific evidence, not a general guarantee.
| Approach | Primary weakness |
|---|---|
| LLM summarization | High latency, lossy abstraction, and no guarantee the summary preserves what the next subtask needs |
| Retrieval / RAG | Depends on chunking and embeddings; can miss cross-chunk or cross-step dependencies |
| Pass full trajectory | Cost scales with every worker call and irrelevant context can degrade worker quality |
Latent Briefing is useful when the bottleneck is not document retrieval itself, but how to transfer orchestrator state into a worker efficiently and precisely.
Frameworks such as Recursive Language Models treat long context as an environment and recurse over it: an orchestrator decomposes work and delegates to workers. Latent Briefing fits the gap where the orchestrator has already built task-specific state that should inform the worker, but re-serializing that state as text is too expensive or noisy.
In the ideal setup, the worker maintains a persistent KV state for the orchestrator trajectory. New trajectory tokens extend that state, then compaction runs just before generation for the current subtask.
Task-guided query vectors. Use queries from the current worker task prompt, not generic samples from the context. Forward-pass the trajectory plus current task through the worker model, then score trajectory positions by how strongly the task attends to them.
Shared token selection. Aggregate scores across layers and heads into one per-position score. One shared mask enables batched operations and avoids hundreds of incompatible per-head solves.
MAD thresholding. Keep positions above a robust outlier threshold such as median + tau * MAD. Higher tau is more aggressive. Optimal settings depend on task regime, trajectory quality, and document length.
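The three steps above can be sketched end to end. This is an illustrative single-layer version with made-up shapes; the actual method operates on the worker model's real KV tensors across all layers:

```python
import numpy as np

def select_positions(task_q: np.ndarray, traj_k: np.ndarray,
                     tau: float = 2.0) -> np.ndarray:
    """Score trajectory positions by task attention, then MAD-threshold.

    task_q: [heads, n_task_tokens, d]  queries from the current task prompt
    traj_k: [heads, seq_len, d]        keys over the orchestrator trajectory
    Returns a boolean keep-mask of shape [seq_len], shared across heads.
    """
    d = task_q.shape[-1]
    # Per-head attention of task queries over trajectory positions.
    logits = task_q @ traj_k.transpose(0, 2, 1) / np.sqrt(d)  # [H, T_task, T]
    attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    # Aggregate across heads and task tokens into one score per position.
    score = attn.mean(axis=(0, 1))                            # [T]
    # Robust threshold: median + tau * MAD.
    med = np.median(score)
    mad = np.median(np.abs(score - med))
    return score > med + tau * mad

rng = np.random.default_rng(1)
q = rng.normal(size=(4, 3, 16))    # 4 heads, 3 task-prompt tokens
k = rng.normal(size=(4, 32, 16))   # 32 trajectory positions
keep = select_positions(q, k, tau=1.0)
print(keep.shape)
```

The returned mask would then drive the cache compaction: one mask, applied identically to keys and values in every layer and head, exactly as the shared-token-selection point requires.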
Latent Briefing is only practical when the system controls the worker inference runtime closely enough to inspect or transform KV state. It is a poor default for API-only stacks where internal KV tensors are inaccessible. It also assumes the orchestrator trajectory can be represented in the worker's model space. If orchestrator and worker differ materially in tokenizer, architecture, or attention layout, direct representation sharing may not be viable.
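One way to make the "model space" assumption operational is a hard precondition check before attempting representation sharing. The fields below are a hypothetical minimal signature; real runtimes would also need to compare positional-encoding and grouped-query-attention details:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelSig:
    # Hypothetical minimal fingerprint of a model's KV layout.
    tokenizer_id: str
    n_layers: int
    n_kv_heads: int
    head_dim: int

def can_share_kv(orchestrator: ModelSig, worker: ModelSig) -> bool:
    # Direct KV sharing only makes sense when the trajectory was (or can be)
    # encoded in the worker's own representation space; any mismatch means
    # falling back to a text handoff.
    return orchestrator == worker

a = ModelSig("llama-3-tok", 32, 8, 128)
b = ModelSig("llama-3-tok", 32, 8, 128)
c = ModelSig("qwen2-tok", 28, 4, 128)
print(can_share_kv(a, b), can_share_kv(a, c))  # True False
```

Failing this check early, at system-configuration time, is cheaper than discovering mid-run that the compacted state cannot be loaded into the worker.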
Choose the mechanism that matches the bottleneck:
| Need | Prefer | Why |
|---|---|---|
| Stable repeated prefix with minimal logic changes | Prefix caching | Cheapest optimization; no information loss |
| Human-readable and auditable cross-step state | Structured notes or summarization | Easy to inspect and store |
| Sparse lookup across a large external corpus | Retrieval / RAG | Finds documents efficiently |
| Worker needs task-specific slices of orchestrator state and runtime access exists | Latent Briefing | Transfers relevant latent state without replaying all text |
Latent Briefing is not a universal replacement for summarization or retrieval. It is a specialized optimization for systems that already run a controllable orchestrator-worker stack.
Reported long-document QA results suggest:
These are tuning hypotheses, not portable laws. Re-measure on the target workload.
Scenario: orchestrator trajectory grows across worker calls
Call 1: trajectory T1 -> worker answers subquestion A
Call 2: trajectory T2 = T1 + new reasoning + reply A
compact KV(T2) using the task prompt for B
worker answers subquestion B
The task prompt for B decides which parts of T2 survive into the compacted worker state.
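The call loop above can be sketched with stubs. `extend_kv` and `compact_kv` here are hypothetical stand-ins for the runtime hooks, and the word-overlap "relevance" rule is a deliberately crude proxy for Attention Matching; only the shape of the interaction matters:

```python
# Hypothetical runtime hooks; signatures and logic are illustrative only.

def extend_kv(state: list[str], new_tokens: list[str]) -> list[str]:
    # Persistent worker-side state grows with each new trajectory segment.
    return state + new_tokens

def compact_kv(state: list[str], task_prompt: str) -> list[str]:
    # Toy stand-in for Attention Matching: keep segments sharing a word
    # with the current task prompt.
    words = set(task_prompt.lower().split())
    return [t for t in state if set(t.lower().split()) & words]

state: list[str] = []
state = extend_kv(state, ["evidence: revenue grew", "dead end: wrong table"])
# Call 1: compact against subquestion A just before the worker generates.
briefA = compact_kv(state, "what did revenue do")
state = extend_kv(state, ["reply A: revenue grew 12%"])
# Call 2: a different task prompt selects different survivors from T2.
briefB = compact_kv(state, "which table was wrong")
print(briefA)
print(briefB)
```

Note that the two briefs retain different slices of the same trajectory: the task prompt, not the trajectory itself, determines what survives.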
A single tau setting rarely transfers across long vs. short contexts and easy vs. hard tasks. Expect accuracy cliffs when compaction becomes too aggressive.

Internal reference:
Related skills in this collection:
External resources:
Created: 2026-04-14
Last Updated: 2026-04-14
Author: Agent Skills for Context Engineering Contributors; primary technical source Ramp Labs (public post)
Version: 1.1.0