From markdown-compressor
This skill should be used when the user asks to 'compress markdown', 'shrink this file', 'optimize tokens', 'reduce file size', 'compress instructions', 'make this more concise', 'minimize this prompt', 'compress CLAUDE.md', 'compress ARCHITECTURE.md', 'optimize agent instructions', or wants to reduce token usage in LLM-facing markdown files. Covers lossless structural optimization and lossy semantic compression with section-by-section analysis.
Install with `npx claudepluginhub oborchers/fractional-cto --plugin markdown-compressor`. This skill uses the workspace's default tool permissions.
Markdown compression reduces token consumption in LLM-facing documentation — agent instructions, CLAUDE.md files, ARCHITECTURE.md files, system prompts, and skill definitions — while preserving the information an LLM needs to operate correctly.
Two modes address different risk tolerances:
| Mode | What Changes | Risk | Best For |
|---|---|---|---|
| Lossless | Structure only — whitespace, formatting, redundant syntax | Zero semantic change | Safe first pass on any file |
| Lossy | Semantics — rewriting for density, removing filler, consolidating | Information loss possible | Deep compression with review |
Section-by-section compression with user approval at each step is the recommended workflow. The `/compress` command provides a guided session; the skill also activates when compression-related work is detected mid-conversation.
LLM instructions are not prose for humans. Compression targets what LLMs ignore or process redundantly:
Always safe to remove:
Never remove:
Judgment required:
Lossless compression changes structure without altering semantics. Apply these transformations:
For detailed lossless transformation rules and before/after examples, consult references/lossless-techniques.md.
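As a minimal illustration of the idea (not the full catalog), a lossless pass might strip trailing whitespace, collapse runs of blank lines, and normalize bullet markers; this sketch is illustrative, not the skill's actual implementation:

```python
import re

def lossless_pass(markdown: str) -> str:
    """Structural cleanup that cannot change meaning."""
    # Strip trailing whitespace from every line.
    text = "\n".join(line.rstrip() for line in markdown.splitlines())
    # Collapse three or more consecutive newlines into a single blank line.
    text = re.sub(r"\n{3,}", "\n\n", text)
    # Normalize top-level bullet markers (* or +) to - without touching indentation.
    text = re.sub(r"^[*+] ", "- ", text, flags=re.M)
    return text.strip() + "\n"

cleaned = lossless_pass("- item one   \n\n\n\n* item two\n")
```

Each transformation preserves the rendered meaning exactly, which is why this mode is safe as a first pass on any file.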
Lossy compression rewrites for semantic density. Apply the compressor-reviewer loop per section:
For each section:
The reviewer specifically checks for:
For detailed lossy techniques and worked examples, consult references/lossy-techniques.md.
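The per-section loop can be sketched as follows; `compress_section`, `review_section`, and `approve` are hypothetical stand-ins for the compressor role, the reviewer role, and the user-approval step, not functions the skill actually exposes:

```python
def compress_lossy(sections, compress_section, review_section, approve):
    """Compressor-reviewer loop with user approval at each section."""
    compressed = []
    for heading, body in sections:
        draft = compress_section(body)        # compressor: rewrite for density
        issues = review_section(body, draft)  # reviewer: flag information loss
        # Keep the draft only if the reviewer found no issues AND the user approves.
        keep_draft = not issues and approve(heading, body, draft)
        compressed.append((heading, draft if keep_draft else body))
    return compressed
```

The key property is that a rejected draft falls back to the original section text, so a bad rewrite can never silently replace content.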
Before compressing, analyze the file structure to determine section boundaries and identify problem areas:
Use `##` headings as section boundaries for most files, or `###` if the `##` sections are very large.

Present the structural analysis as a table to the user before beginning compression. This gives the user a map of the document and sets expectations for where the biggest savings will come from.
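One way to build that map is to split on headings of the chosen level and count words per section; this is a sketch under that assumption, not the skill's actual analysis code:

```python
import re

def analyze_sections(markdown: str, level: str = "##"):
    """Split on headings of the given level and report word counts per section."""
    # Match e.g. "## " at the start of a line; "### " will not match for level "##".
    parts = re.split(rf"^{re.escape(level)} ", markdown, flags=re.M)
    rows = []
    for part in parts[1:]:  # parts[0] is any preamble before the first heading
        heading, _, body = part.partition("\n")
        rows.append((heading.strip(), len(body.split())))
    return rows
```

The resulting `(heading, word count)` rows are exactly what the pre-compression table needs: the largest counts show where the biggest savings will come from.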
After compression, report a summary so the user can assess the impact:
The `words * 1.3` heuristic estimates token counts for typical English markdown. Actual counts depend on the model's tokenizer, but relative reduction percentages are reliable for comparison.
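The heuristic and the reduction percentage can be computed as follows (assumes simple whitespace word-splitting; a real tokenizer will give different absolute numbers):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate for English markdown: word count * 1.3."""
    return round(len(text.split()) * 1.3)

def reduction_pct(before: str, after: str) -> float:
    """Relative savings; useful for comparison even if absolute counts are off."""
    b, a = estimate_tokens(before), estimate_tokens(after)
    return round(100 * (b - a) / b, 1) if b else 0.0
```

Reporting both the estimated token counts and the percentage lets the user judge whether a lossy pass earned its risk.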
For detailed techniques and patterns, consult:
- `references/lossless-techniques.md` — Complete lossless transformation catalog with before/after examples
- `references/lossy-techniques.md` — Lossy compression patterns, judgment heuristics, and information-density techniques

Worked compression sessions in `examples/`:
- `examples/before-after-lossless.md` — A CLAUDE.md file compressed with lossless mode
- `examples/before-after-lossy.md` — An agent instruction file compressed with lossy mode