Enforces Karpathy guidelines to prevent LLM coding errors: read before writing, surgical changes only, verify assumptions, define success upfront. Use for feature implementation, code modifications, or scope discipline.
```shell
npx claudepluginhub tmdgusya/engineering-discipline --plugin engineering-discipline
```

This skill uses the workspace's default tool permissions.
A preventive thinking discipline for code implementation. Activates before and during code writing to block the most common mistakes LLMs make when generating code.
Installs CLAUDE.md with Andrej Karpathy-derived principles to prevent LLM coding pitfalls: silent assumptions, overengineering, unrelated edits, vague execution.
This is not about performance (that's rob-pike) or debugging (that's systematic-debugging). This is about the act of writing code itself — reading before writing, changing only what's asked, verifying instead of assuming, and defining what "done" means before starting.
These rules have no exceptions.
(For performance work, use rob-pike instead.)

Every change should be the minimum edit that achieves the goal.
Before writing, ask:
Prohibited additions unless explicitly requested:
One task, one change. If you discover something else that needs fixing, note it — don't fix it now.
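To make the contrast concrete, here is a minimal sketch (the pagination bug and all names are hypothetical):

```python
# Task: fix the off-by-one in pagination. (Hypothetical example.)

def paginate(items, page, per_page):
    # Surgical fix: only this line changes.
    # Before (the bug): start = (page + 1) * per_page
    start = page * per_page
    return items[start:start + per_page]

# What NOT to fold into the same change:
# - renaming per_page to page_size "for clarity"
# - adding input validation nobody asked for
# - extracting a Pagination class "while we're here"
```

The fix is one line; everything else listed in the comments would be a separate task.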
LLMs generate code based on patterns. Codebases have their own patterns. These often conflict.
Before modifying any file:
Before modifying any function:
Before adding a new file:
Do not invent new patterns. Follow the ones that exist.
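As an illustration, assume a hypothetical codebase whose helpers return `(value, error)` tuples. A new helper should follow that convention, not switch to raising exceptions:

```python
# Hypothetical existing convention: helpers return (value, error) tuples.
def parse_port(raw):
    try:
        return int(raw), None
    except ValueError:
        return None, f"invalid port: {raw!r}"

# A new helper mirrors the shape already in the file,
# rather than introducing a second error-handling style:
def parse_host(raw):
    host = raw.strip()
    if not host:
        return None, "empty host"
    return host, None
```

Whether tuples or exceptions are "better" is beside the point; consistency with the surrounding code is the rule.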
Every assumption is a potential bug. The most dangerous assumptions are the ones that feel obvious.
Common assumptions that cause failures:
| Assumption | Verification |
|---|---|
| "This function returns X" | Read the function |
| "This field is always present" | Check the type definition and upstream producers |
| "This test covers that case" | Read the test |
| "This import path is correct" | Check the file exists at that path |
| "This API accepts these parameters" | Read the API definition or documentation |
| "This library works this way" | Check the version and docs |
| "This config value is set" | Check the actual config |
When in doubt, grep. When confident, grep anyway.
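In Python, for example, one cheap verification is to read a signature with the standard library instead of trusting memory (a sketch using `json.dumps`):

```python
import inspect
import json

# Assumption to verify: "json.dumps accepts a sort_keys parameter."
sig = inspect.signature(json.dumps)

# Reading the actual signature replaces "probably" with a fact.
assert "sort_keys" in sig.parameters
```

The same habit generalizes: read the type stub, open the file, run `grep` — any check that turns an assumption into an observation.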
Before writing code, state what "done" means.
Format:
Done when:
- [ ] <specific, verifiable condition>
- [ ] <specific, verifiable condition>
- [ ] <specific, verifiable condition>
Bad criteria:
Good criteria:
If you can't write specific criteria, you don't understand the task. Go back and clarify.
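One way to keep criteria verifiable is to phrase them as assertions before writing the implementation (the `slugify` task here is hypothetical):

```python
# Hypothetical task: "make slugify turn titles into URL slugs."

def slugify(title):
    return "-".join(title.lower().split())

# Done when:
# - [ ] lowercases input
# - [ ] collapses whitespace runs into single hyphens
# - [ ] is idempotent (slugifying a slug changes nothing)
assert slugify("Hello World") == "hello-world"
assert slugify("a  b") == "a-b"
assert slugify(slugify("Mixed Case Title")) == slugify("Mixed Case Title")
```

If a criterion cannot be written as a check like this, it is probably too vague to act on.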
LLMs love to anticipate future needs. This produces code that is more complex than necessary.
Block these impulses:
Build for what is needed today. Tomorrow's problems will have tomorrow's context.
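A sketch of the difference, assuming the task was simply "read settings from a JSON file":

```python
import json

# Built for today: does exactly what was asked, nothing more.
def load_settings(path):
    with open(path) as f:
        return json.load(f)

# Overengineered (don't): a SettingsProvider ABC, pluggable YAML/TOML/env
# backends, a caching layer, and a schema DSL, none of which were requested.
```

If YAML support is needed next month, it can be added next month, with next month's requirements in hand.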
| Impulse | Rule Violated | Response |
|---|---|---|
| "Let me quickly refactor this while I'm here" | Rule 1 | One task, one change. Note it for later. |
| "I know how this works, I'll just write the fix" | Rule 2 | Read first. Your mental model may be wrong. |
| "This probably takes a string" | Rule 3 | Check the type. "Probably" means you don't know. |
| "I'll know it's done when it works" | Rule 4 | Define concrete criteria before starting. |
| "Let me make this extensible for future use" | Rule 5 | Build for now. Extensibility is a future task. |
| "The code around this is messy, let me clean it" | Rule 1 | Not your task. File a separate issue. |
| "I'll add some helpful logging" | Rule 1 | Was logging requested? If not, don't add it. |
Stop and re-read the rules if you catch yourself thinking:
During implementation, verify against this list:
Implementation is disciplined when:
If any of these are not met, the implementation needs revision.
After implementation is complete:
- clean-ai-slop to run a corrective pass
- systematic-debugging to investigate
- rob-pike before optimizing