From prompt-architecture
Designs LLM context windows: allocates token budgets, orders information for attention, selects relevant data, and applies RAG/summarization strategies.
```
npx claudepluginhub owl-listener/ai-design-skills --plugin prompt-architecture
```

This skill uses the workspace's default tool permissions.
The context window is finite. What goes into it — and in what order — determines the quality of every output. Context engineering is the practice of deliberately designing the information architecture of the context window.
Every context window has a token budget. Allocate it deliberately:
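One way to make that allocation concrete is a simple budget map. This is an illustrative sketch, not part of the skill itself: the section names and percentages are assumptions chosen for the example, and a real allocation would depend on your agent's workload.

```python
def allocate_budget(window_tokens, shares):
    """Divide a context window into per-section token budgets.

    shares: mapping of section name -> fraction of the window (sum <= 1.0).
    Any unallocated remainder is kept as headroom for the model's output.
    """
    if sum(shares.values()) > 1.0:
        raise ValueError("shares exceed the available window")
    budgets = {name: int(window_tokens * frac) for name, frac in shares.items()}
    budgets["output_headroom"] = window_tokens - sum(budgets.values())
    return budgets

# Example split for a 200k-token window (fractions are illustrative):
budgets = allocate_budget(200_000, {
    "system_prompt": 0.05,      # stable instructions
    "tool_definitions": 0.10,   # schemas the agent can call
    "retrieved_context": 0.35,  # RAG results, documents
    "message_history": 0.30,    # prior turns
})
```

Making the remainder explicit as `output_headroom` keeps the budget honest: if a section grows, the overrun shows up as shrinking headroom rather than silent truncation.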
Order matters. The model pays different amounts of attention to different positions:
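As a sketch of position-aware placement, assuming the commonly observed "lost in the middle" pattern (attention is strongest at the start and end of the window): put durable instructions first, bulk reference material in the middle, and the immediate task last. The function and section names here are illustrative.

```python
def assemble_context(system, references, history, task):
    """Place high-attention content at the edges, bulk content in the middle."""
    parts = [
        system,        # start of window: durable instructions
        *references,   # middle: bulk reference material (lowest attention)
        *history,      # recent turns, approaching the end
        task,          # end of window: the immediate task, highest recency
    ]
    return "\n\n".join(parts)

ctx = assemble_context(
    system="You are a release-notes writer.",
    references=["[doc] API changelog excerpt"],
    history=["user: summarize the v2.1 changes"],
    task="Write the v2.1 release notes now.",
)
```

The ordering is the point, not the joining: the same sections in a different order can produce measurably different outputs even though the token count is identical.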
Not everything should go into the context. Design selection criteria:
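A minimal sketch of such a selection pass: score candidate snippets against the query, then admit them greedily until the token budget runs out. The word-overlap scorer and whitespace token estimate are naive stand-ins for a real relevance model (embeddings, a reranker) and a real tokenizer; all names here are illustrative.

```python
def score(query, snippet):
    """Fraction of query words the snippet covers (naive relevance proxy)."""
    q, s = set(query.lower().split()), set(snippet.lower().split())
    return len(q & s) / len(q)

def select(query, snippets, budget_tokens, min_score=0.25):
    """Admit the highest-scoring snippets that fit the budget and clear the bar."""
    ranked = sorted(snippets, key=lambda s: score(query, s), reverse=True)
    chosen, used = [], 0
    for snip in ranked:
        cost = len(snip.split())  # crude token estimate
        if score(query, snip) >= min_score and used + cost <= budget_tokens:
            chosen.append(snip)
            used += cost
    return chosen

picked = select(
    "rate limit errors in the payments API",
    ["The payments API enforces rate limit rules per key.",
     "Office holiday schedule for the year."],
    budget_tokens=50,
)
```

The two knobs encode the selection criteria: `min_score` rejects marginally relevant material even when space remains, and `budget_tokens` caps the total so selection never overruns the allocation.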
How to tell if your context engineering is working:
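One measurable signal, sketched here as an illustration: how much of the window the assembled context consumes, and whether any section risks being truncated. The whitespace token estimate and the returned fields are assumptions for the example, not metrics the skill prescribes.

```python
def context_health(sections, window_tokens):
    """Return simple diagnostics for an assembled context."""
    sizes = {name: len(text.split()) for name, text in sections.items()}
    total = sum(sizes.values())
    return {
        "utilization": total / window_tokens,  # leave headroom for output
        "overflow": total > window_tokens,     # truncation risk if True
        "largest_section": max(sizes, key=sizes.get),
    }

report = context_health(
    {"system": "short rules",
     "history": "user asked about refunds twice"},
    window_tokens=100,
)
```

Tracked over time, numbers like these turn context engineering from guesswork into something you can regress on: a utilization creep or a new largest section often explains a quality drop before any output-level evaluation does.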