Cleans AI-generated code by systematically removing LLM smells such as dead code, over-commenting, and verbose naming, while preserving exact behavior via tests. Use after generation or on 'clean up' / 'deslop' requests.
```shell
npx claudepluginhub tmdgusya/engineering-discipline --plugin engineering-discipline
```

This skill uses the workspace's default tool permissions.
A corrective discipline for cleaning AI-generated code. Runs after code generation — whether from run-plan, a manual session, or any other source.
The core problem: LLMs produce code that works but carries distinctive smells. Over-commenting, unnecessary abstractions, defensive paranoia for impossible scenarios, verbose naming. Left unchecked, these accumulate into a codebase that is harder to read and maintain than hand-written code.
This skill removes those smells systematically, one category at a time, without changing behavior.
These rules have no exceptions.
`run-plan` completes and the implementation works but reads like AI wrote it.

Passes execute in this order. Each pass completes fully before the next begins.
Remove code that serves no purpose.
Detection: compiler warnings, linter output, IDE grayed-out symbols. Trust the tooling.
Remove comments that restate what the code already says.
Targets:
- `// Initialize the counter` above `let counter = 0`
- `// Return the result` above `return result`
- Section-divider comments (`// --- Helper Functions ---`)

Keep: comments that explain why, not what. Comments about non-obvious constraints. Links to external documentation or issues.
Remove indirection that serves no purpose.
Targets:

- Single-use wrapper functions that only forward their arguments
- Interfaces or base classes with exactly one implementation
- Pass-through layers that add a name but no behavior
Test: if removing the abstraction makes the code shorter and equally readable, it was unnecessary.
Remove error handling for scenarios that cannot occur.
Targets:

- Null checks for values that cannot be null at any call site
- try/catch around operations that cannot throw
- Re-validating arguments that internal callers already guarantee
Keep: validation at system boundaries (user input, external APIs, file I/O). Error handling where the runtime genuinely can fail.
Shorten names that carry redundant information.
Targets:
- `getUserDataFromDatabase` → `getUser` (where else would it come from?)
- `userAccountStatus` → `status` (when used inside a `User` class, the prefix is redundant)
- `handleButtonClickEvent` → `onClick`
- `responseDataObject` → `response`
- `tempVariableForCalculation` → `temp`, or inline it

Rule: a name should be as short as possible while remaining unambiguous in its scope. Longer scope = longer name. Short scope = short name.
Remove artifacts of LLM generation style.
Targets:
- `console.log` / `print` statements added "for debugging"

1. Identify scope (which files to clean)
2. Run existing tests — all must pass before starting
3. Add regression tests if coverage is thin
4. Execute Pass 1 → verify → commit
5. Execute Pass 2 → verify → commit
6. ... continue through all relevant passes
7. Run full test suite
8. Report summary of changes
Not every pass applies to every codebase. Skip passes that have zero findings. But execute in order — never jump ahead.
| Impulse | Why It Fails |
|---|---|
| "I'll clean everything in one big pass" | Mixed changes are impossible to debug when tests break |
| "This abstraction is bad, let me redesign it" | Redesign is a separate task, not cleanup |
| "Tests pass, so I'll skip the per-pass verification" | A later pass may interact with an earlier change |
| "This code nearby also looks sloppy" | Scope creep. Only clean what's in scope |
| "The behavior is wrong anyway, I'll fix it while cleaning" | Behavior changes require their own task with their own tests |
| "I don't need regression tests, the code is simple" | Simple code breaks too. Lock behavior first |
Stop and reconsider if you catch yourself thinking any of the impulses above.
Cleanup is done when:

- Every pass has been executed in order (or skipped with zero findings)
- The full test suite passes
- Each pass is committed as its own change
- Behavior is identical to before the cleanup
If any of these are not met, the cleanup is not complete.
During cleanup, verify against this list:

- Tests pass after every pass, not just at the end
- Each diff contains only the current pass's category of change
- No behavior changes have crept in alongside the removals
After cleanup is complete:
- After `run-plan` → report results to the user
- Use `karpathy` in future sessions to prevent slop at the source