Use when the user asks to "optimize context", "reduce token usage", "prune context window", "configure progressive loading", or "manage session state". Activates when a stakeholder needs to optimize context window usage, configure progressive MOAT loading levels, design intelligent pruning strategies, manage session state persistence, or implement token-efficient skill routing across the agent framework.
Source: `javimontano/mao-pm-apex` (`claudepluginhub`).
Bundled resources: `evals/evals.json`, `examples/README.md`, `examples/sample-output.md`, `prompts/metaprompts.md`, `prompts/use-case-prompts.md`, `references/body-of-knowledge.md`, `references/knowledge-graph.mmd`, `references/state-of-the-art.md`.
TL;DR: Optimizes context window usage through progressive MOAT loading (L1/L2/L3), intelligent pruning, session state management, and token-efficient skill routing. Ensures the AI agent operates within context limits while maintaining access to the knowledge needed for the current task.
Context is a finite resource. Loading all 100 skills in full exceeds any context window. Progressive loading (L1 metadata, L2 core, L3 deep) makes the right knowledge available at the right time. Context optimization is not about saving tokens; it is about precision in what gets loaded.
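The three loading levels can be sketched as a small resolver. This is a minimal illustration; the `Skill` and `ContextLoader` names and their fields are assumptions, not part of the framework:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """Illustrative skill record; field names are assumptions."""
    name: str
    l1_metadata: str   # always loaded: one-line description for routing
    l2_core: str       # loaded when the skill is selected for the task
    l3_deep: str       # loaded on demand for complex engagements

@dataclass
class ContextLoader:
    skills: dict[str, Skill]
    levels: dict[str, int] = field(default_factory=dict)  # name -> 1/2/3

    def prime(self) -> None:
        """L1 for everything: cheap metadata so routing can happen."""
        for name in self.skills:
            self.levels[name] = 1

    def promote(self, name: str, level: int) -> str:
        """Raise a skill to L2 or L3 only when the task needs it."""
        self.levels[name] = max(self.levels.get(name, 1), level)
        skill = self.skills[name]
        parts = [skill.l1_metadata]
        if self.levels[name] >= 2:
            parts.append(skill.l2_core)
        if self.levels[name] >= 3:
            parts.append(skill.l3_deep)
        return "\n".join(parts)
```

Routing would call `prime()` once per session and `promote()` as the pipeline phase demands deeper knowledge; demotion back to L1 would be the pruner's job.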
Assumes the `project/` directory is writable. [ASSUMPTION]

```shell
# Optimize context for a specific phase and project type
/pm:context-optimization $PROJECT --phase="planning" --tipo="agile"

# Analyze current context usage and recommend pruning
/pm:context-optimization $PROJECT --type=analyze

# Configure session state persistence rules
/pm:context-optimization $PROJECT --type=session-state --persist="essential"
```
Parameters:
| Parameter | Required | Description |
|---|---|---|
| `$PROJECT` | Yes | Project identifier |
| `--phase` | No | Current pipeline phase for skill selection |
| `--tipo` | No | Project type for routing optimization |
| `--type` | No | `analyze`, `optimize`, `session-state`, or `prune` |
| `--persist` | No | Session persistence level: `minimal`, `essential`, or `full` |
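The `--persist` levels could map to a filter over session state before it is written to `project/session-state.json`. The state keys below are assumptions about what counts as minimal vs. essential:

```python
import json
from pathlib import Path

# Hypothetical mapping of --persist levels to the state keys kept on disk.
PERSIST_LEVELS = {
    "minimal":   {"project", "phase"},
    "essential": {"project", "phase", "loaded_skills", "levels"},
    "full":      None,  # None means persist everything
}

def persist_session_state(state: dict, level: str, path: Path) -> dict:
    """Write the filtered session state as JSON and return what was kept."""
    keep = PERSIST_LEVELS[level]
    kept = state if keep is None else {k: v for k, v in state.items() if k in keep}
    path.write_text(json.dumps(kept, indent=2))
    return kept
```

Anything not persisted would be re-derived on the next session prime rather than restored from disk.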
`{TIPO_PROYECTO}` (project type): all project types benefit from context optimization. Complex engagements need L3 for active skills; routine operations use L1/L2.
Preflight and recovery:

- `scripts/lazy-load-resolver.sh`: verify resolver availability.
- `project/session-state.json`: check current context configuration.
- `project/context-archive/`: notify user of reduced capability. [PLAN]
- `project/session-state.json`: if the file is missing, re-prime from the last known good state and flag data loss to the user. [ASSUMPTION]
- `project/`: for reference; process in chunks if needed. [PLAN]

Good Context Optimization:
| Attribute | Value |
|---|---|
| Skills loaded | 5 at L1, 2 at L2, 1 at L3 |
| Context utilization | 75% of available window |
| Session state | Essential state persisted in JSON |
| Pruning applied | 3 irrelevant skills removed |
| Lazy loading | 2 skills promoted on demand |
| Efficiency | 40% reduction vs. full loading |
Bad Context Optimization: Loading all 100 skills at L3 into context, overflowing the window, and producing degraded responses because critical information is truncated. No pruning, no prioritization, no session state management. Fails because it treats context as infinite rather than as a resource to be managed.
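The pruning and prioritization the good example relies on can be sketched as a single budget pass. This is a minimal sketch, assuming each loaded item carries a token cost and a relevance score; the item shape is an assumption:

```python
def prune_to_budget(items: list[dict], budget: int) -> tuple[list[dict], list[dict]]:
    """Keep the most relevant items that fit the token budget; return (kept, pruned).

    Each item is assumed to look like {"name": ..., "tokens": int, "relevance": float}.
    """
    kept, pruned, used = [], [], 0
    for item in sorted(items, key=lambda i: i["relevance"], reverse=True):
        if used + item["tokens"] <= budget:
            kept.append(item)
            used += item["tokens"]
        else:
            pruned.append(item)  # candidates for project/context-archive/
    return kept, pruned
```

Pruned items would be archived rather than discarded, so a later lazy load can restore them on demand instead of treating the prune as data loss.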
Session state is written to `project/session-state.json` after each significant interaction.

| Resource | When to read | Location |
|---|---|---|
| Body of Knowledge | Before optimizing to understand MOAT loading architecture | references/body-of-knowledge.md |
| State of the Art | When evaluating context management approaches | references/state-of-the-art.md |
| Knowledge Graph | To understand skill dependency graph for loading priority | references/knowledge-graph.mmd |
| Use Case Prompts | When configuring optimization for specific workflows | prompts/use-case-prompts.md |
| Metaprompts | To generate context loading configurations | prompts/metaprompts.md |
| Sample Output | To calibrate expected optimization report format | examples/sample-output.md |
Agents (each operates autonomously, applying systematic analysis and producing structured outputs):

- Prunes stale or low-priority content from context.
- Resolves lazy-loaded content on demand.
- Implements progressive loading for skill and reference content.
- Analyzes context window token usage and optimization opportunities.
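The token-usage analysis the last agent describes can be approximated with a character-count heuristic (roughly four characters per token for English text; an assumption, not a real tokenizer):

```python
def analyze_context(chunks: dict[str, str], window: int) -> dict:
    """Estimate per-chunk token usage and overall window utilization."""
    estimates = {name: max(1, len(text) // 4) for name, text in chunks.items()}
    total = sum(estimates.values())
    return {
        "per_chunk": estimates,       # tokens attributed to each loaded chunk
        "total_tokens": total,
        "utilization": total / window,
        "over_budget": total > window,  # signals that pruning is needed
    }
```

A report like this is what would drive the pruning and promotion decisions: chunks with high cost and low recent use are the first demotion candidates.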