By tmdgusya
Enforce engineering discipline in AI-assisted development: clean LLM-generated code slop via tests, debug systematically with reproduce-first workflows, optimize performance using Rob Pike's measurement rules, apply Karpathy guidelines for precise changes, and orchestrate multi-day tasks through milestone planning, execution, and verification reviews.
npx claudepluginhub tmdgusya/engineering-discipline --plugin engineering-discipline

Use when a user's request is vague, ambiguous, or underspecified. Launches an iterative Q&A loop to resolve ambiguity while a subagent explores the codebase in parallel. Outputs a clear, well-scoped context brief so the user can plan sharply. Triggers on "I want to...", "I need...", "let's build...", "can you help me...", "we should...", or any request where the full scope isn't immediately clear.
Corrective cleanup of AI-generated code — removes LLM-specific patterns while preserving behavior. Use when the user says "clean up", "deslop", "slop", "clean AI code", or when you spot LLM-generated code smells after any generation session.
Behavioral guardrails to prevent common LLM coding mistakes — enforces surgical changes, assumption verification, and scope discipline before and during implementation. Use when implementing features, modifying code, or when you notice yourself about to make changes without reading the existing code first.
Orchestrates multi-day execution of complex tasks through milestones. Each milestone goes through plan-crafting, run-plan (worker-validator), and review-work phases with checkpoint/recovery. Triggers when the user says "long run", "start long run", "execute milestones", or "run all milestones".
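The checkpoint/recovery idea above can be sketched as a small loop: after each milestone passes validation, state is persisted so a crashed run resumes from the last completed milestone. This is an illustrative sketch, not the plugin's actual implementation; all names (`run_milestones`, `checkpoint.json`) are hypothetical.

```python
# Hypothetical sketch of a worker-validator loop with checkpointing.
# Names and file layout are illustrative, not the plugin's API.
import json
import pathlib

CHECKPOINT = pathlib.Path("checkpoint.json")

def load_checkpoint():
    # Recover prior progress if a previous run was interrupted.
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())
    return {"done": []}

def save_checkpoint(state):
    CHECKPOINT.write_text(json.dumps(state))

def run_milestones(milestones, worker, validator):
    state = load_checkpoint()
    for m in milestones:
        if m in state["done"]:
            continue  # recovery: skip milestones already completed
        result = worker(m)          # plan-crafting + run-plan phase
        if not validator(m, result):  # review-work phase
            raise RuntimeError(f"validation failed at milestone {m!r}")
        state["done"].append(m)
        save_checkpoint(state)      # checkpoint after every milestone
    return state["done"]
```

Calling `run_milestones` a second time with a longer milestone list resumes where the checkpoint left off instead of redoing finished work.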
Decomposes complex, multi-day tasks into optimized milestones using parallel reviewer agents (ultraplan). Spawns 5 independent reviewers that analyze the problem from different angles, then synthesizes their findings into a milestone dependency DAG. Triggers when the user says "plan milestones", "break this into milestones", "ultraplan", or when long-run harness needs milestone generation.
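A milestone dependency DAG like the one ultraplan synthesizes can be represented as a mapping from each milestone to its prerequisites, then executed in topological order. The milestone names below are made up for illustration; the plugin's internal representation may differ.

```python
# Hypothetical milestone DAG: each key depends on the milestones in its set.
from graphlib import TopologicalSorter

milestones = {
    "schema-migration": set(),
    "api-endpoints": {"schema-migration"},
    "frontend-wiring": {"api-endpoints"},
    "integration-tests": {"api-endpoints", "frontend-wiring"},
}

# static_order() yields every milestone after all of its prerequisites.
order = list(TopologicalSorter(milestones).static_order())
print(order)
```

Any valid execution order must start with "schema-migration" and end with "integration-tests"; the DAG also exposes which milestones could run in parallel (those with no path between them).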
Use when a task's scope is clear and multi-step implementation is needed, before touching code. Triggered after clarification is complete, or when the user explicitly requests plan creation with a clear prompt.
Use after run-plan completes to independently verify the implementation. Reads only the plan document and inspects the codebase from scratch — information-isolated from the execution context. Produces a structured review document with a PASS/FAIL verdict. Triggers when the user says "review the work", "verify the implementation", or "check if the plan was executed correctly".
Rob Pike's 5 Rules of Programming — a decision framework that prevents premature optimization and enforces measurement-driven development. Use when the user says "optimize", "slow", "performance", "bottleneck", "speed up", "make faster", "too slow", or any request to improve code speed/efficiency. Also use when you notice yourself about to suggest a performance optimization without measurement data. This is a thinking discipline, not a tooling workflow.
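The "measure before tuning" rule amounts to timing candidate implementations before choosing between them. A minimal sketch using the standard library's `timeit` (the functions being compared are hypothetical examples, not anything the skill prescribes):

```python
# Measure two candidate implementations instead of guessing which is faster.
import timeit

def concat_loop(n):
    # Builds a string by repeated concatenation.
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n):
    # Builds the same string with a single join.
    return "".join(str(i) for i in range(n))

loop_t = timeit.timeit(lambda: concat_loop(1000), number=200)
join_t = timeit.timeit(lambda: concat_join(1000), number=200)
print(f"loop={loop_t:.4f}s join={join_t:.4f}s")
```

Only after seeing numbers like these on the real workload and the real interpreter does it make sense to pick one version; intuition about which is faster is often wrong.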
Use when you have a written implementation plan to execute. Loads the plan, reviews critically, executes tasks in dependency order, and reports completion. Triggers when the user says "run the plan", "execute the plan", or "let's start implementing".
Review changed code for reuse opportunities, quality issues, and inefficiencies using three parallel review agents, then fix any issues found. Triggers when the user says "simplify", "clean up the code", "review the changes", or after run-plan execution when code quality verification is needed.
Use when encountering any bug, test failure, or unexpected behavior. Enforces a strict reproduce-first, root-cause-first, failing-test-first debugging workflow before fixing.