Organize any LLM-consumed document using three-tier progressive disclosure (lazy prompting). Goal: maximize LLM working efficiency, NOT minimize line count. Applicable to CLAUDE.md, AGENTS.md, system prompts, runbooks, and any document an LLM reads at session start. Uses three tiers: Level 1 (always loaded), Level 1.5 (path-scoped or conditional), and Level 2 (on-demand via trigger conditions).

MUST use this skill when the user:
- Says "/progressive-disclosure", "optimize my instructions", or "reorganize my docs"
- Wants to optimize, reorganize, restructure, or shrink their CLAUDE.md, AGENTS.md, or system prompt
- Complains the LLM ignores rules, forgets conventions, or keeps making the same mistakes
- Has a document over 200 lines that has grown unwieldy
- Mentions "progressive disclosure", "lazy prompting", "progressive discovery", or "three-tier"
- Says things like "my instructions are too long", "Claude isn't following my rules", "context window is filling up", "reduce my prompt size", or "organize my project instructions"
- Wants to move detailed SOPs, deployment runbooks, or reference docs out of their main instructions while keeping them accessible
- Asks how to structure LLM instructions so they scale as the project grows

Even if the user doesn't mention progressive disclosure by name, use this skill whenever the core problem is "instructions are too long or ineffective" or "the LLM doesn't follow my instructions consistently."
npx claudepluginhub pwarnock/pwarnock-cc-plugins --plugin progressive-disclosure

This skill uses the workspace's default tool permissions.
A general methodology for organizing any document an LLM consumes — CLAUDE.md, AGENTS.md, system prompts, runbooks, or any instruction file loaded into context. The core principle: keep high-signal content always loaded, defer low-frequency detail to on-demand files.
Refactors bloated AGENTS.md, CLAUDE.md, or similar files into progressive disclosure structure: essentials in root, categorized instructions (e.g., testing.md, code-style.md) in linked files.
Also known as: lazy prompting (Teresa Torres), progressive discovery.
Complementary skill: After optimizing, use /revise-claude-md (from claude-md-management plugin) to capture future learnings into the right tier — it prevents re-bloat by routing new information appropriately.
Determine which document(s) to optimize:
- ./CLAUDE.md or ./.claude/CLAUDE.md
- ~/.claude/CLAUDE.md
- ./AGENTS.md

ASK USER: "Found {path} ({N} lines). Optimize this file? Or specify a different document."
Create a timestamped backup before making any changes:
cp {target} {target}.bak.$(date +%Y%m%d_%H%M%S)
Confirm backup was created.
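The backup-and-confirm step can be sketched in shell (the target path and file content here are placeholders for demonstration):

```shell
# Back up the target document with a timestamp, then confirm the copy matches.
target="CLAUDE.md"                                # placeholder: the document chosen above
printf 'Core commands\nHard rules\n' > "$target"  # stand-in content for demonstration only

backup="${target}.bak.$(date +%Y%m%d_%H%M%S)"
cp "$target" "$backup"

# Confirm the backup exists and is identical to the original before editing anything.
cmp -s "$target" "$backup" && echo "backup ok: $backup"
```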
Read the entire file. For each section, apply the classification decision tree:
| Question | Yes | No |
|---|---|---|
| Used frequently (most sessions)? | Level 1 | next question |
| Severe consequences if violated? | Level 1 | next question |
| Contains code patterns for direct copying? | Level 1 (keep the pattern) | next question |
| Scoped to specific file paths or conditions? | Level 1.5 | next question |
| Has a clear trigger condition? | Level 2 + trigger in Level 1 | next question |
| Historical or reference material? | Level 2 | Consider removing |
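The decision tree above can be sketched as a small function. The section attributes and dictionary keys here are illustrative assumptions, not part of the skill:

```python
def classify(section: dict) -> str:
    """Classify a document section into a tier, applying the questions in order."""
    if section.get("high_frequency"):      # used in most sessions
        return "L1"
    if section.get("severe_if_violated"):  # hard rules, prohibitions
        return "L1"
    if section.get("copy_paste_pattern"):  # code the LLM copies directly
        return "L1"
    if section.get("path_scoped"):         # applies only to certain files/conditions
        return "L1.5"
    if section.get("has_trigger"):         # clear load-on-demand condition
        return "L2 + trigger in L1"
    if section.get("historical"):          # reference or archive material
        return "L2"
    return "consider removing"

# Example: a deployment runbook with a clear trigger condition.
print(classify({"has_trigger": True}))  # → L2 + trigger in L1
```

Note the order matters: a path-scoped section that is also used every session still lands in Level 1, because the frequency question is asked first.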
Level 1.5 depends on document type:
- For CLAUDE.md: `.claude/rules/` files with `paths:` frontmatter (Anthropic's native lazy loading)
- For AGENTS.md: conditional sub-agent docs
- For other docs: modular sections, conditional includes

Present a classification table:
| Section | Lines | Tier | Reason |
|---|---|---|---|
| (section name) | (count) | L1 / L1.5 / L2 | (brief justification) |
Show summary: "X lines staying in L1, Y lines moving to L1.5, Z lines moving to L2"
ASK USER: "Does this classification look right? Want to adjust any rows before I proceed?"
For each section classified as Level 1.5, propose the destination based on document type:
For CLAUDE.md — .claude/rules/ files:
File: .claude/rules/{domain}.md
---
paths:
- "{matching-glob-pattern}"
---
{content to move}
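A filled-in sketch of the template above (the file name, glob, and rule text below are hypothetical examples, not part of the skill):

```markdown
File: .claude/rules/testing.md
---
paths:
  - "tests/**"
---
Run the full test suite before committing.
Never commit updated snapshots without reviewing the diff.
```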
For AGENTS.md — conditional sub-documents:
File: docs/agents/{topic}.md
Activation: When agent handles {specific task type}
For other documents — linked sub-documents:
File: {doc-root}/{topic}.md
Include condition: {when this section is relevant}
ASK USER: "Create these files? Approve, adjust, or skip any."
For each section classified as Level 2, show the proposed reference file:
File: {references-dir}/{topic}-reference.md
Trigger condition: {when this file should be loaded}
Content summary: {keywords describing what's inside}
References directory depends on document type:
- Project-level (CLAUDE.md): docs/references/
- User-level: ~/.claude/references/
- Other docs: {doc-root}/references/ or co-located

ASK USER: "Create these reference files? Approve, adjust, or skip any."
Preview each structural element to add to the optimized document:
Show a preview of each element populated with the document's actual references.
ASK USER: "Add all structural elements, or skip any?"
Execute the approved changes:
- Create the approved files (.claude/rules/, sub-documents, etc.)

Run verification checks:
- Run /revise-claude-md (from claude-md-management plugin) to capture future learnings; it routes new information to the correct tier automatically
- Re-run /progressive-disclosure periodically if the file grows past the target line count again

The content below serves as background reference. It activates when trigger conditions match outside of explicit /progressive-disclosure invocation.
"Find the smallest set of high-signal tokens that maximizes the probability of the desired outcome." — Anthropic
The goal is to maximize LLM working efficiency, not to minimize line count.
Documents accumulate knowledge — SOPs, diagnostics, code patterns, edge cases — that often pushes past manageable sizes. Progressive disclosure solves this tension: keep high-signal content in Level 1 (always loaded) and move detailed reference material to Level 2 (loaded on demand when triggered).
Level 1 (Primary Document) — Loaded every session
+-- Information Recording Principles <- Self-governing rules to prevent future bloat
+-- Reference Index (top) <- Entry 1: "I hit an error -- where do I look?"
+-- Core content (commands, rules, patterns)
+-- Pre-modification checklist <- Entry 2: "I'm about to change X -- watch for what?"
+-- Reference Trigger Index (bottom) <- Entry 3: reminder after long conversations
Level 1.5 (Conditional Loading) — Loaded when conditions match
For CLAUDE.md: .claude/rules/ with paths: frontmatter (Anthropic's native lazy loading)
For AGENTS.md: conditional sub-agent docs
For other docs: modular sections, conditional includes
Level 2 (On-Demand) — Loaded via explicit trigger
+-- Detailed SOP workflows
+-- Edge case handling
+-- Full configuration examples
+-- Historical decision records
Level 1.5 is the most underused mechanism. For CLAUDE.md, .claude/rules/ files with paths: frontmatter load automatically when the LLM touches matching files — zero token cost otherwise. For other document types, equivalent conditional-loading mechanisms exist.
A single Level 2 resource can have multiple entry points that serve different lookup paths:
| Entry Point | Position | Trigger Scenario | User Mindset |
|---|---|---|---|
| Reference Index | Top | Hit an error or problem | "Something broke — which doc should I read?" |
| Pre-modification checklist | Middle | About to change code | "I'm changing X — what pitfalls exist?" |
| Reference Trigger Index | Bottom | Orientation during long conversation | "What was that reference doc again?" |
This is not duplication — it's multiple entry points. Like a book having a table of contents (by chapter), an index (by keyword), and a quick-reference card (by task).
The reason this matters: LLM attention follows a U-shaped curve — the beginning and end of context receive stronger attention than the middle (the "lost in the middle" phenomenon). Placing trigger indexes at both top and bottom ensures they're noticed regardless of conversation length.
For each section, ask these questions in order:
| Question | Yes | No |
|---|---|---|
| Used frequently? | Level 1 | next question |
| Severe consequences if violated? | Level 1 | next question |
| Contains code patterns that need direct copying? | Level 1 (keep the pattern) | next question |
| Scoped to specific file paths or conditions? | Level 1.5 | next question |
| Has a clear trigger condition? | Level 2 + trigger in Level 1 | next question |
| Historical or reference material? | Level 2 | Consider removing |
| Content Type | Reason |
|---|---|
| Core commands | High-frequency use |
| Hard rules / prohibitions | Severe consequences if violated — must always be visible |
| Code patterns | LLM needs to copy directly; avoids re-derivation |
| Error diagnostics | Complete symptom -> cause -> fix flow |
| Directory/structure map | Helps LLM locate files quickly |
| Trigger index tables | Helps LLM find Level 2 during long conversations |
| Content Type | Level 1 keeps | Level 2 gets |
|---|---|---|
| SOP workflows | Trigger condition + key pitfalls | Full step-by-step |
| Config examples | The 1-2 most common | Complete configuration |
| API documentation | Common method signatures | Full parameter reference |
| Historical decisions | Nothing (or a one-liner) | Full rationale |
| Performance data | Nothing | Full benchmarks |
| Edge cases | Nothing | Detailed handling |
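For instance, after splitting an SOP this way, the stub left in Level 1 might look like the following (the document name, trigger, and pitfalls are hypothetical):

```markdown
### Deployments
Trigger: any production deploy, or on `ERR_MIGRATION_LOCK`.
Key pitfalls: run migrations before scaling; never skip the health check.
> Full step-by-step in `references/deploy-sop.md`. Contains: rollback
> procedure, migration ordering, health-check commands.
```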
Use a mix of these formats — variety helps the LLM distinguish different types of references.
1. Detailed Format (important in-body references):
**When to read `{references-dir}/xxx-sop.md`**:
- [specific error message]
- [specific scenario]
> Contains: [keyword 1], [keyword 2], [code template].
2. Problem Trigger Table (top/bottom index):
## Reference Index
| Trigger | Document | Key Content |
|---|---|---|
| `ERR_DLOPEN_FAILED` | `native-modules-sop.md` | ABI mechanism, lazy loading |
3. Task Trigger Table (pre-modification checklist):
## Pre-Modification Checklist
| What you're changing | Read this first | Key pitfalls |
|---|---|---|
| Native module code | `native-modules-sop.md` | Must lazy-load |
4. Inline Format (brief references):
Full workflow in `database-sop.md` (FTS5 escaping, health checks).
| Anti-Pattern | Example | Fix |
|---|---|---|
| Over-Compression | 2,937 lines compressed to 165 | Keep all high-frequency content; 482 organized lines beats 165 stripped lines |
| References Without Triggers | See xxx.md | Always pair with trigger condition + content summary |
| Code Patterns in Level 2 | Moving frequently-used code examples to reference files | High-frequency code patterns stay in Level 1 |
| Deleting Instead of Moving | Removing "unimportant" sections entirely | Move to Level 2 with a trigger; never delete knowledge |
| Check | Passing Criteria |
|---|---|
| Daily commands | No need to read Level 2 |
| Common errors | Has complete diagnostic flow |
| Code writing | Has copy-paste patterns |
| Specific problems | Knows which Level 2 to read |
| Trigger indexes | Table format at top and bottom |
Too little signal: the LLM repeatedly asks the same questions, re-derives patterns, and the user corrects the same rules. Too much: low-frequency workflows sit in Level 1, identical content is duplicated (not multi-entry), and edge cases are mixed with common cases.
| Dimension | User-Level CLAUDE.md | Project-Level CLAUDE.md | AGENTS.md | Other Docs |
|---|---|---|---|---|
| Target L1 lines | 100-200 | 200-400 | 150-300 | Varies |
| L1.5 mechanism | ~/.claude/rules/ | .claude/rules/ | Sub-agent docs | Conditional includes |
| L2 location | ~/.claude/references/ | docs/references/ | docs/agents/ | Co-located references/ |
| Shares with | Just you | Team via source control | Team | Depends on doc |
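A quick way to check a file against its target from the table above, sketched in shell (the path and stand-in content are placeholders for demonstration):

```shell
# Warn when a project-level CLAUDE.md drifts past its Level 1 target size.
file="CLAUDE.md"  # placeholder path
limit=400         # project-level upper bound from the table above

printf 'line\nline\nline\n' > "$file"  # stand-in content for demonstration only
lines=$(wc -l < "$file")

if [ "$lines" -gt "$limit" ]; then
  echo "$file: $lines lines (over $limit); consider re-running /progressive-disclosure"
else
  echo "$file: within target ($lines/$limit lines)"
fi
```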
After optimizing, verify each check in the quality checklist above passes.
For detailed case studies and lessons learned, see references/progressive-disclosure-principles.md.