Distills large artifacts through a 4-stage progressive compression chain in which each agent halves the prior output and lists what it dropped; the omissions reveal the priority hierarchy. For summarizing codebases, docs, or texts.
`npx claudepluginhub sjarmak/agent-workflows`

This skill uses the workspace's default tool permissions.
Essence Extraction via Progressive Compression. Takes a large artifact and runs it through a chain of compression agents where each must compress the previous output by ~50% while preserving the most important information. The key insight: the DROPS at each compression layer — what each agent chose to cut — reveal the priority hierarchy. The waste product IS the signal.
$ARGUMENTS — format: [path/to/artifact.md or inline text]
## Phase 1: Extract
If the argument looks like a file path (contains / or ends in a common extension), treat it as a path and read the file. Otherwise, treat the entire argument as inline text.
If no argument is provided, ask the user what artifact they want to distill.
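A minimal sketch of this extraction heuristic; the `resolve_artifact` name and the extension list are illustrative choices of this sketch, not prescribed by the skill:

```python
import os

# Illustrative extension list; the skill only says "a common extension".
COMMON_EXTENSIONS = (".md", ".txt", ".rst", ".py", ".json", ".html")

def resolve_artifact(argument: str) -> str:
    """Return the artifact text: read a file when the argument looks like a
    path, otherwise treat the whole argument as inline text."""
    if not argument.strip():
        raise ValueError("No artifact given; ask the user what to distill.")
    looks_like_path = "/" in argument or argument.endswith(COMMON_EXTENSIONS)
    if looks_like_path and os.path.isfile(argument):
        with open(argument, encoding="utf-8") as f:
            return f.read()
    return argument
```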
## Phase 2: Compress
Run 4 sequential compression agents. Each one compresses the previous output to roughly half its length and explicitly lists what it dropped.
Agent prompt template for each stage:
You are a compression agent. Your job is to compress the following text to roughly 50% of its current length while preserving the most important information.
## Input ({word_count} words)
{previous_output}
## Instructions
1. Read the input carefully
2. Identify what is MOST important (load-bearing claims, decisions, data, actionable items)
3. Identify what is LEAST important (context that can be inferred, repetition, hedging, examples that illustrate already-clear points)
4. Produce a compressed version at roughly {target_word_count} words
5. List EXPLICITLY what you dropped and why
## Output Format
### Compressed ({target_word_count} words target)
[Your compressed version]
### Dropped
| What was cut | Why | Importance (1-5) |
|-------------|-----|-----------------|
| [specific content] | [reason] | [how important was it really] |
### Compression Decisions
- Hardest cut: [what was most painful to remove and why]
- Easiest cut: [what was clearly noise]
- What I'd restore first if given 25% more space: [...]
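Because the output format is fixed, each agent reply can be split on its section markers. A sketch of one way to pull out the compressed text and the drop table; the `parse_stage_output` name and the returned dict shape are assumptions of this sketch:

```python
def parse_stage_output(raw: str) -> dict:
    """Split an agent's reply into compressed text and structured drop rows."""
    body = raw.split("### Compressed", 1)[1]
    compressed, rest = body.split("### Dropped", 1)
    dropped_block = rest.split("### Compression Decisions", 1)[0]
    compressed = compressed.split("\n", 1)[1].strip()  # drop the "(N words target)" line
    drops = []
    for line in dropped_block.strip().splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) != 3:
            continue
        if cells[0] == "What was cut" or set(cells[0]) <= set("- "):
            continue  # skip the header and separator rows
        drops.append({"what": cells[0], "why": cells[1], "importance": cells[2]})
    return {"compressed": compressed, "drops": drops}
```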
Run agents SEQUENTIALLY, since each depends on the previous output:
- Stage 1: original -> ~50%
- Stage 2: ~50% -> ~25%
- Stage 3: ~25% -> ~12.5%
- Stage 4: ~12.5% -> ~6%
Track the full drop log from every agent for use in Phase 3.
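The chain itself is a simple loop that halves the word target each round and accumulates drops. A sketch, reusing `parse_stage_output` from above and assuming the prompt template has been stored in a `PROMPT_TEMPLATE` string with `{word_count}`, `{target_word_count}`, and `{previous_output}` placeholders; `run_agent` is a hypothetical dispatcher for whatever sub-agent mechanism the workspace provides:

```python
from typing import Callable

def compression_chain(
    text: str,
    run_agent: Callable[[str], str],  # hypothetical: sends a prompt, returns the reply
    stages: int = 4,
) -> tuple[str, list[dict]]:
    """Run the sequential chain; return the final essence and the full drop log."""
    current, drop_log = text, []
    for stage in range(1, stages + 1):
        word_count = len(current.split())
        prompt = PROMPT_TEMPLATE.format(        # the template shown above
            word_count=word_count,
            target_word_count=word_count // 2,  # each stage halves its input
            previous_output=current,
        )
        result = parse_stage_output(run_agent(prompt))
        current = result["compressed"]
        for drop in result["drops"]:
            drop_log.append({"stage": stage, **drop})  # kept for Phase 3
    return current, drop_log
```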
## Phase 3: Analyze
After all 4 compression stages complete, produce a full analysis with these sections:
1. The Essence: the final ~6% compressed version, the irreducible core of the artifact.
2. Priority Hierarchy: classify every piece of content by how many compression rounds it survived. Content that survives all 4 rounds is the core; content cut at stage 4 ranks just below it; content cut at stage 1 is peripheral (see the sketch after this list).
3. Compression Difficulty Map: what was hardest to cut at each stage. These are the areas where priority is ambiguous or contested, the interesting boundaries.
4. Restoration Order: if you could add things back one at a time from the essence outward, in what order would you restore them? This is the true priority ranking of the artifact's content.
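Deriving the hierarchy from the drop log is a grouping pass: anything never logged survived all four rounds and is the essence; everything else ranks by the stage at which it was cut, later cuts ranking higher. A sketch over the `drop_log` produced by the chain above:

```python
from collections import defaultdict

def priority_hierarchy(drop_log: list[dict]) -> dict[int, list[str]]:
    """Group cut content by the stage at which it was dropped.

    A stage-4 cut survived three rounds and sits just below the essence;
    a stage-1 cut never survived a single round."""
    tiers: defaultdict[int, list[str]] = defaultdict(list)
    for drop in drop_log:
        tiers[drop["stage"]].append(drop["what"])
    # Higher stage number = survived more rounds = higher priority.
    return dict(sorted(tiers.items(), reverse=True))
```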
Save the full analysis to distill_{slugified_topic}.md in the working directory.
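The skill doesn't pin down the slug rules, so any reasonable slugify works; a minimal choice:

```python
import re

def slugify(topic: str) -> str:
    """Lowercase, then collapse runs of non-alphanumerics into underscores."""
    return re.sub(r"[^a-z0-9]+", "_", topic.lower()).strip("_")

# e.g. slugify("Q3 Roadmap: Draft v2") -> "q3_roadmap_draft_v2",
# so the analysis is written to distill_q3_roadmap_draft_v2.md
```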
Show the user the essence, the priority hierarchy, and the path to the saved analysis file.
Then ask: does this priority ranking match your intuition? Where does it diverge?
Versatile — works after any phase that produces a large artifact:
/diverge synthesis -> /distill -> priority hierarchy
/converge report -> /distill -> decision essence
Research notes -> /distill -> core findings
Design doc -> /distill -> essential requirements
Meeting notes -> /distill -> action items and decisions