Executes written implementation plans: loads and critically reviews them, runs tasks in dependency order with parallel dispatch, separate worker-validator subagents, and verifies completion.
Install via the plugin hub:

```shell
npx claudepluginhub tmdgusya/engineering-discipline --plugin engineering-discipline
```

This skill uses the workspace's default tool permissions.
Loads a written plan document, reviews it critically, then executes tasks in dependency order using a worker-validator loop.
Do not follow plans blindly. If the plan has issues, flag them before executing. But if the plan is clear, execute it faithfully.
Requires a plan document created by the plan-crafting skill (if none exists, run plan-crafting first) and assumes open questions were resolved beforehand (see the clarification skill). Before reviewing the plan, discover what the project offers for verification and execution:
Verification infrastructure — read the plan's Verification Strategy header. If present, use it. If absent, run the same discovery as plan-crafting:
Available agents and skills — scan for project-level agents (.claude/agents/), plugin agents, and project skills (.claude/skills/) that workers can leverage. If the plan references specific agents in task steps, verify they exist before execution begins.
If an agent or skill is referenced but missing: notify the user. Do not block execution — workers can execute the steps directly without the agent.
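A pre-flight existence check along these lines can catch missing agent references before execution begins. This is a sketch, not part of the skill: `missing_agents` and the exact directory layout are assumptions, based on the convention that project agents live as `*.md` files under `.claude/agents/`:

```python
from pathlib import Path

def missing_agents(referenced: list[str], roots: list[Path]) -> list[str]:
    """Return the referenced agent names that have no matching .md
    definition in any of the given agent directories
    (e.g. .claude/agents/ and plugin agent directories)."""
    available = {
        p.stem                      # "code-reviewer.md" -> "code-reviewer"
        for root in roots
        if root.is_dir()
        for p in root.glob("*.md")
    }
    return [name for name in referenced if name not in available]
```

Any names this returns would be reported to the user, while execution continues without blocking.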
Each task runs through a Compliance Check → Worker Implementation → Validator Review cycle. If the validator rejects, feedback is sent back to the worker for re-implementation.
```dot
digraph task_loop {
  rankdir=TB;
  "Compliance check (subagent)" [shape=box];
  "Worker implements (subagent)" [shape=box];
  "Validator reviews (subagent)" [shape=box];
  "Pass?" [shape=diamond];
  "Task complete" [shape=doublecircle];
  "Compliance check (subagent)" -> "Worker implements (subagent)";
  "Worker implements (subagent)" -> "Validator reviews (subagent)";
  "Validator reviews (subagent)" -> "Pass?";
  "Pass?" -> "Task complete" [label="Pass"];
  "Pass?" -> "Worker implements (subagent)" [label="Fail\nfeedback delivered"];
}
```
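The cycle above can be sketched as a simple loop. This is illustrative only: `dispatch_worker` and `dispatch_validator` are hypothetical stand-ins for Agent-tool calls, and the retry limit mirrors the one specified later in this document:

```python
MAX_ATTEMPTS = 3  # matches the retry limit: escalate after 3 consecutive failures

def run_task(task, dispatch_worker, dispatch_validator):
    """Run one task through the worker -> validator loop.

    dispatch_worker(task, feedback) re-implements using validator feedback;
    dispatch_validator(task) sees only the plan's spec, never worker output.
    Returns the attempt number on which the validator passed."""
    feedback = None
    for attempt in range(1, MAX_ATTEMPTS + 1):
        dispatch_worker(task, feedback)        # worker implements (or re-implements)
        verdict = dispatch_validator(task)     # information-isolated review
        if verdict["result"] == "PASS":
            return attempt
        feedback = verdict["details"]          # delivered to the worker on retry
    raise RuntimeError(
        f"task {task!r} failed {MAX_ATTEMPTS} consecutive times; escalate to the user"
    )
```

Note that only the validator's feedback flows back to the worker; nothing about the worker's implementation ever flows forward to the validator.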
For each task, perform the following cycle:
2-1. Compliance Check (subagent)
Before starting a task, verify that the current task aligns with the plan:
If issues are found: notify the user and resolve them before proceeding.
2-2. Worker Implementation (subagent, via Agent tool)
Dispatch a subagent (worker) via the Agent tool to execute the task's steps:
2-3. Validator Review (subagent, via Agent tool — information-isolated)
Dispatch a separate subagent (validator) via the Agent tool. The validator operates under an information barrier — it knows only what the task was supposed to accomplish, not what the worker did or how.
Constructing the validator prompt:
The main agent must NOT compose the validator prompt freely. Use the fixed template below, filling only the four designated fields by copying verbatim from the plan document. Do not paraphrase, summarize, or add context beyond what the template specifies.
```
You are an independent validator. You have no knowledge of how this task
was implemented. Your job is to judge whether the codebase currently meets
the goal described below, by reading files and running tests yourself.

## Task Goal
{TASK_GOAL}
— Copy the task's goal statement verbatim from the plan.

## Acceptance Criteria
{ACCEPTANCE_CRITERIA}
— Copy the task's acceptance criteria verbatim from the plan.
Each criterion is a concrete, verifiable condition.

## Files To Inspect
{FILE_LIST}
— Copy the list of files this task is expected to create or modify,
as listed in the plan.

## Test Commands
{TEST_COMMANDS}
— Copy any test execution commands or verification steps
specified in the plan for this task.

## Your Review Process
1. Read each file in the file list directly from disk.
2. For each acceptance criterion, determine whether it is met
   based on what you see in the code. Record PASS or FAIL per criterion.
3. Run every test command listed above. Record results.
4. Run the full test suite to check for regressions.
5. Check for residual issues: placeholder code (TODO, FIXME, stubs),
   debug code (console.log, print statements), commented-out blocks.

## Your Output
Report your verdict as PASS or FAIL.
- If PASS: confirm which criteria were verified and which tests passed.
- If FAIL: list exactly which criteria failed and why, with file paths
  and line numbers. Do not suggest fixes — only describe what is wrong.
```
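Filling the template can be purely mechanical string substitution, which makes the "copy verbatim, never paraphrase" rule enforceable by construction. A sketch, with an abbreviated template and illustrative field names for the parsed plan:

```python
VALIDATOR_TEMPLATE = """\
You are an independent validator. You have no knowledge of how this task
was implemented. Judge whether the codebase currently meets the goal below.

## Task Goal
{task_goal}

## Acceptance Criteria
{acceptance_criteria}

## Files To Inspect
{file_list}

## Test Commands
{test_commands}
"""

def build_validator_prompt(plan_task: dict) -> str:
    """Fill the fixed template with fields copied verbatim from the plan.

    The main agent contributes no free-form text: only the plan's own
    words reach the validator, so no worker context can leak through."""
    return VALIDATOR_TEMPLATE.format(
        task_goal=plan_task["goal"],
        acceptance_criteria="\n".join(plan_task["acceptance_criteria"]),
        file_list="\n".join(plan_task["files"]),
        test_commands="\n".join(plan_task["test_commands"]),
    )
```

Because the function takes only the plan's task record as input, worker output cannot reach the validator even by accident.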
What must NOT appear in the validator prompt:
Why a fixed template: The main agent has seen the worker's output and may unconsciously frame the validator's task in terms of what the worker did. A fixed template eliminates this channel — the validator sees only the plan's original specification, not the main agent's post-worker understanding.
Validation results:
Retry limit: If the same task fails 3 consecutive times, report the situation to the user and request intervention.
Parallel Execution Rules (Hard Gate #4):
Tasks that can run in parallel must be dispatched in parallel. Grouping them sequentially is prohibited.
Parallel execution conditions (all must be met):
When running in parallel:
Sequential execution required for:
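One way to honor both rules at once is to group tasks into dependency "waves" (Kahn's topological layering): every task within a wave is dispatched in parallel, and waves themselves run sequentially. A sketch, assuming each task carries an explicit list of prerequisite task ids:

```python
def dependency_waves(tasks: dict[str, list[str]]) -> list[list[str]]:
    """tasks maps a task id to its list of prerequisite ids.

    Returns waves of task ids: tasks within one wave have no unmet
    dependencies on each other and may be dispatched in parallel;
    waves execute sequentially, in order."""
    remaining = {t: set(deps) for t, deps in tasks.items()}
    waves = []
    while remaining:
        ready = sorted(t for t, deps in remaining.items() if not deps)
        if not ready:
            raise ValueError("dependency cycle in plan — cannot schedule")
        waves.append(ready)
        for t in ready:
            del remaining[t]
        for deps in remaining.values():
            deps.difference_update(ready)   # mark completed prerequisites
    return waves
```

Tasks sharing files or with implicit ordering constraints would need those constraints expressed as dependencies; this sketch schedules only what the declared edges allow.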
After all tasks are complete (including the Final Verification Task if present), run the highest-level verification as an independent gate:
If all pass: Report success summary to the user.
If E2E verification fails — Failure Response Protocol:
The E2E gate failure means individual tasks passed their validators but the system as a whole doesn't work. This is an integration problem, not a task-level problem.
Diagnose (attempt 1):
Re-diagnose (attempt 2):
Escalate to user (after 2 failed attempts):
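The failure response protocol above reduces to a bounded diagnose-and-retry loop. A sketch, where `run_e2e` and `diagnose_and_fix` are hypothetical stand-ins for the E2E verification command and the diagnosis work:

```python
def e2e_gate(run_e2e, diagnose_and_fix):
    """Run the E2E gate with at most two diagnosis attempts.

    run_e2e() returns None on success, or diagnostics on failure.
    diagnose_and_fix(failure, attempt) investigates and applies a fix."""
    failure = run_e2e()
    for attempt in (1, 2):                 # diagnose, then re-diagnose
        if failure is None:
            return f"PASS (after {attempt - 1} fix attempts)"
        diagnose_and_fix(failure, attempt)
        failure = run_e2e()                # re-run the full gate after each fix
    if failure is None:
        return "PASS (after 2 fix attempts)"
    return "ESCALATE: report to user after 2 failed attempts"
```

The hard cap keeps the agent from burning budget on an integration bug the user may already understand.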
Stop executing immediately and ask the user for help when:
Ask for clarification rather than guessing.
After each task completion, verify:
| Anti-Pattern | Why It Fails |
|---|---|
| Executing without reviewing the plan | Plan errors propagate into implementation |
| Skipping verification steps | Errors accumulate, debugging cost increases later |
| Guessing when blocked | Spec drift, rework required |
| Running non-parallelizable tasks in parallel | File conflicts, dependency tangles |
| Running parallelizable tasks sequentially | Wasted time, unnecessary execution delay |
| Main agent performing worker/validator roles inline | Defeats independent verification; confirmation bias |
| Passing worker output to the validator | Validator anchors on worker's framing instead of judging independently |
| Composing the validator prompt freely instead of using the fixed template | Main agent unconsciously leaks worker context through word choice and framing |
| Paraphrasing the plan instead of copying verbatim into the template | Paraphrasing filters through the main agent's post-worker understanding, introducing bias |
| Starting implementation on main/master without explicit user consent | Risks irreversible changes on the default branch; requires explicit approval |
| Skipping the E2E gate because individual tasks all passed | Task-level pass ≠ system-level pass; integration bugs hide between tasks |
| Retrying E2E failures more than twice without user escalation | Wastes budget; user may have context about the root cause |
After plan execution is complete:
- Review the resulting changes with the review-work skill.
- Resolve any remaining open questions with the clarification skill.
- Revise the plan with the plan-crafting skill if gaps were found.

This skill itself does not invoke the next skill. It ends by reporting execution results and letting the user choose the next step.