Optimizes AI-facing instructions for subagents, CLAUDE.md, rules, skills, agents by removing discoverable data, explained knowledge, invented constraints, stale facts. Use for authoring or auditing bloat.
npx claudepluginhub jamie-bitflight/claude_skills --plugin plugin-creator

This skill uses the workspace's default tool permissions.
The AI writing an instruction file and the AI reading it share the same training data and reasoning capability. Write only what that shared baseline cannot supply.
Maintains CLAUDE.md and AGENTS.md instruction files by enforcing size limits (<300 lines), progressive disclosure via docs/ references, multi-agent compatibility, and tool-first content. Use for creation, updates, audits.
Refactors bloated AGENTS.md, CLAUDE.md, or similar agent instruction files into organized, linked documentation using progressive disclosure: essentials stay in the root file, while categorized instructions (e.g., testing.md, code-style.md) move into linked files. Use for long, monolithic agent configs.
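A minimal sketch of the resulting root file (the file names and section labels are illustrative, not prescribed by the skill):

```markdown
<!-- AGENTS.md: essentials only, detail routed to linked files -->
## Testing
Load ./docs/testing.md before writing or running tests.

## Code style
Load ./docs/code-style.md before editing source files.
```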
Move detailed content into references/. Keep the main skill and memory files lean, with just routing logic and core principles.

User decisions — choices the project made that differ from defaults: "We use pnpm, not npm." "Dark background, cream foreground."
Project-specific facts — paths, scripts, and tools that only exist here: "Run node .claude/scripts/gh-api.cjs release latest <owner>/<repo> to get the current major."
Constraints — things that would be done differently without this instruction: "Never copy a uses: version from another workflow file without verifying it."
Mission-level anchoring — maintains alignment with the core purpose over time and provides a heuristic for decisions: "Does this change actually help achieve the mission of this system/task/product?" State the core principle (e.g., "all CI ops go through ci_monitor.py") and provide a routing table, rather than repeating granular details.
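A hedged sketch of what these kept categories look like inside a CLAUDE.md (the pnpm and script examples come from the document itself; the comment labels are illustrative):

```markdown
<!-- User decision -->
We use pnpm, not npm.

<!-- Project-specific fact -->
Run node .claude/scripts/gh-api.cjs release latest <owner>/<repo> to get the current major.

<!-- Constraint -->
Never copy a uses: version from another workflow file without verifying it.
```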
Discoverable data — version numbers, hex codes, release tags, file listings, schema fields, or command --help outputs. If a command or lookup can produce it, don't store it; it will be wrong within days. Instead, verify that it is discoverable, then replace the prose with an instruction on where to discover the data and when to reach for which tool (e.g., "When doing A, B, or C, first read these references here: ").
For CLI tools, instruct the AI to discover arguments at runtime: "[ ] Run the command with --help and read CLI arguments before using it in a task."
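A before/after sketch of replacing stored --help output with a discovery instruction (mytool is a hypothetical CLI, not one the skill names):

```markdown
<!-- Before: discoverable data, stale within days -->
mytool flags: --json, --quiet, --max-items N

<!-- After: discovery instruction -->
- [ ] Run mytool --help and read its arguments before using it in a task.
```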
Explained knowledge — step-by-step breakdowns of things Claude already knows how to do. It can be ambiguous whether something is a custom instruction set or just training data; when optimizing a file, remove anything the reading AI can already supply from training. When writing documentation, an AI will often waffle and invent instructions just to make the document look complete, and those instructions are frequently never tested. A good slop check is to follow the instructions exactly, or have a subagent follow them step by step, and see whether they work; any errors found become the basis for updating the instructions. If an instruction just explains how to do something standard, cut it.
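One way to run the slop check described above, written as a checklist (the wording is illustrative):

```markdown
- [ ] Give a subagent only this file and the task: "Follow these steps exactly
      as written; report every step that fails or is ambiguous."
- [ ] Fix or delete each step the subagent could not complete.
```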
Invented constraints — rules, fallback patterns, or schemas that weren't requested and have no verified basis. Use a subagent, or research it yourself, to check whether the constraint comes from a spec. If we generated the spec, evaluate whether the constraint helps achieve the product goal or is just noise. A common dangerous constraint added by AI is string truncation (e.g., head -50, tail -50, slice notation like some_array[:50], or appending ellipses like "short part of the text ..."). This artificially hides information in a way that can't be tracked and leads to silent failures. If you can't cite the source or session that established a constraint, or if it's a dangerous truncation, cut it.
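A before/after sketch of the truncation pattern to remove (the replacement wording is illustrative, not mandated by the skill):

```markdown
<!-- Invented constraint: silently hides data -->
Summarize failures with tail -50 build.log.

<!-- Replacement: keep the full output inspectable -->
Read build.log in full; search it for "error" and "warning" rather than truncating it.
```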
Worked examples for obvious operations — one example that shows a non-obvious pattern or format is useful; three examples walking through the same operation are not. Replace the extras with a pointer: "See for more examples."
Duplicate content — if it's in the skill, don't put it in the agent. If it's in the agent, don't put it in the rule. Pick the right place and put it there once. Use references: "Load ./references/{topic}.md {when, before, to} {action, task, specialist knowledge req}"
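Applying that reference template produces lines like the following (the file names and triggers are illustrative):

```markdown
Load ./references/testing.md before writing or modifying tests.
Load ./references/release-process.md when cutting a release.
Load ./references/db-schema.md to answer questions that require schema knowledge.
```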
Read the reference for the content type you are writing or auditing:
.claude/rules/ and memory files → references/memory-and-rules.md
Skills → references/skills.md
Agents → references/agents.md

Load best-practices.md before writing reference pointers — it defines the correct Load [file](./path) format.

Execute this optimization surgically, making multiple passes. Use subagents to gather information, research, and validate instructions whenever possible.
Before editing, back up the original file (keep a *.bak copy). Replace stored data with discovery instructions (--help) or specific URLs. For CLI tools, add "Run --help before using." Move specialist detail into references/ files.