From get-toony
Use when the user wants to convert CLAUDE.md, a system prompt, or any long instruction file into TOON — or asks how to compress / restructure a long prose prompt. Pushes back on TOON for prose (wrong tool) and recommends XML-tag section delimiters, the Anthropic-recommended pattern for long/complex prompts. Acts as a guardrail against TOON-ifying instruction files where the format would degrade comprehension.
```
npx claudepluginhub danielrosehill/claude-code-plugins --plugin get-toony
```

This skill uses the workspace's default tool permissions.
This skill exists because TOON is the **wrong tool** for prose instruction files (CLAUDE.md, system prompts, long user prompts) — but the instinct to "compress my long prompt" is real and valid. This skill redirects that instinct to a better technique: **XML section tags**.
Trigger when the user asks to convert CLAUDE.md, a system prompt, or another long instruction file to TOON, or asks how to compress or restructure a long prose prompt.
Do not fire for: data files (JSON/CSV/YAML), configuration, or repeating-record content. Those go to the other skills in this plugin.
TOON is built for structured data — uniform records with shared keys (inventory rows, log entries, dataset slices). Its compactness comes from sharing field headers across many records.
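To make the header-sharing concrete, here is a sketch of the kind of data TOON suits (the field names and values are invented for illustration):

```toon
users[3]{id,name,role}:
  1,Alice,admin
  2,Bob,editor
  3,Carol,viewer
```

The equivalent JSON would repeat `"id"`, `"name"`, and `"role"` in every record; TOON declares them once in the header, which is the entire source of its token savings.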
Instruction files are prose directives. There's no repeating-record structure for TOON to compress. Converting prose to TOON either forces it into awkward key-value shapes (which degrades how reliably the LLM follows the instructions) or just wraps strings in TOON scaffolding (which adds tokens and saves nothing).
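For contrast, here is an invented illustration of what forcing prose directives into TOON looks like:

```toon
instructions[2]{section,text}:
  autonomy,"Just keep going. Don't ask permission..."
  naming,"Repos use Train-Case..."
```

Nothing repeats across records, so the shared header saves nothing, and the actual directives are now buried inside quoted strings where the model follows them less reliably.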
Will Claude parse TOON in a CLAUDE.md? Yes — Claude reads whatever bytes are in the file. But "parses" ≠ "follows as well as prose." LLMs are trained on enormous markdown/prose corpora; they internalize directives in that form most reliably.
For long or complex prompts where sections are getting blurred or ignored, wrap each section in XML tags. This is Anthropic's own recommended pattern (see the prompt-engineering guide on <thinking>, <example>, etc.).
Why it works:
- The tags give Claude unambiguous section boundaries, and Anthropic's own prompts use the same pattern, so the model is well trained on it.
- Sections can be referenced by name in follow-up instructions (e.g., "update <autonomy> here").

Before (markdown headings):
```markdown
## Autonomy Directive
Just keep going. Don't ask permission...

## Naming
Repos use Train-Case...

## Plans
Save plans to planning/ in the current repo...
```
After (XML-tagged sections):
```markdown
<autonomy>
Just keep going. Don't ask permission...
</autonomy>

<naming>
Repos use Train-Case...
</naming>

<plans>
Save plans to planning/ in the current repo...
</plans>
```
Markdown inside the tags still renders/reads fine. The tags give Claude unambiguous section boundaries.
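The conversion is mechanical enough to script. Below is a minimal sketch (not part of the skill itself) that rewrites top-level `##` headings into XML section tags; the tag-naming rule (lowercase, hyphens) is an assumption, not a standard:

```python
import re


def headings_to_xml(markdown: str) -> str:
    """Convert '## Heading' sections into <tag>...</tag> blocks.

    Tag names are the heading lowercased with non-alphanumerics
    collapsed to hyphens (a naming choice, not a standard).
    """
    out, tag = [], None
    for line in markdown.splitlines():
        m = re.match(r"^##\s+(.+)$", line)
        if m:
            if tag:
                out.append(f"</{tag}>")  # close the previous section
            tag = re.sub(r"[^a-z0-9]+", "-", m.group(1).lower()).strip("-")
            out.append(f"<{tag}>")
        else:
            out.append(line)  # body lines pass through unchanged
    if tag:
        out.append(f"</{tag}>")  # close the final section
    return "\n".join(out)
```

Running it on the "before" example above yields the "after" shape: `## Autonomy Directive` becomes `<autonomy-directive>...</autonomy-directive>`, with the markdown body preserved inside the tags.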
XML tags are a surgical fix, not a wholesale rewrite. Recommend them when a prompt is long or complex and its sections are getting blurred or ignored.
For a short CLAUDE.md (say, < 50 lines of prose), markdown headings are fine — don't churn it.
A MEMORY.md index needs no change either: it's already a one-line-per-entry list, with no benefit from restructuring.

| Format | Verdict for instruction files |
|---|---|
| Markdown | Best default. Heavily represented in LLM training data. |
| XML tags | Best for long/complex prompts with hard section boundaries. |
| Plain prose | Fine but loses scannable structure. |
| YAML | Good for config (key: value), bad for prose — strings get awkward. |
| TOON / JSON | Wrong tool. Built for data. |
| TOML | Same problem as YAML for prose. |
The recommendation hierarchy is: markdown → markdown + XML tags for long/complex → never a data format.