Use when executing implementation plans with independent tasks in the current session or facing 3+ independent issues that can be investigated without shared state or dependencies - dispatches fresh subagent for each task with code review between tasks, enabling fast iteration with quality gates
/plugin marketplace add NeoLabHQ/context-engineering-kit
/plugin install sadd@context-engineering-kit
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Create and execute a plan by dispatching a fresh subagent per task or issue, with code and output review after each task or batch of tasks.
Core principle: Fresh subagent per task + review between or after tasks = high quality, fast iteration.
Executing Plans through agents:
When you have tasks or issues that are related to each other and need to be executed in order, investigating or modifying them sequentially is the best approach.
Dispatch one agent per task or issue. Let it work sequentially. Review the output and code after each task or issue.
When to use:
When you have multiple unrelated tasks or issues (different files, different subsystems, different bugs), investigating or modifying them sequentially wastes time. Each task or investigation is independent and can happen in parallel.
Dispatch one agent per independent problem domain. Let them work concurrently.
When to use:
Read the plan file and create a TodoWrite list with all tasks.
For each task:
Dispatch fresh subagent:
Task tool (general-purpose):
description: "Implement Task N: [task name]"
prompt: |
You are implementing Task N from [plan-file].
Read that task carefully. Your job is to:
1. Implement exactly what the task specifies
2. Write tests (following TDD if task says to)
3. Verify implementation works
4. Commit your work
5. Report back
Work from: [directory]
Report: What you implemented, what you tested, test results, files changed, any issues
Subagent reports back with summary of work.
Dispatch code-reviewer subagent:
Task tool (superpowers:code-reviewer):
Use template at requesting-code-review/code-reviewer.md
WHAT_WAS_IMPLEMENTED: [from subagent's report]
PLAN_OR_REQUIREMENTS: Task N from [plan-file]
BASE_SHA: [commit before task]
HEAD_SHA: [current commit]
DESCRIPTION: [task summary]
Code reviewer returns: Strengths, Issues (Critical/Important/Minor), Assessment
If issues found:
Dispatch follow-up subagent if needed:
"Fix issues from code review: [list issues]"
After all tasks complete, dispatch final code-reviewer:
After final review passes:
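Putting the steps above together, here is a minimal sketch of the per-task loop as TypeScript-style pseudocode. The dispatchTask and dispatchCodeReviewer helpers are hypothetical stand-ins for the Task tool invocations described above; only the git rev-parse calls are real commands.

```typescript
import { execSync } from "node:child_process";

// Hypothetical wrappers around the Task tool dispatches described above.
declare function dispatchTask(prompt: string): Promise<string>;
declare function dispatchCodeReviewer(args: {
  whatWasImplemented: string;
  planOrRequirements: string;
  baseSha: string;
  headSha: string;
  description: string;
}): Promise<{ criticalIssues: string[]; importantIssues: string[] }>;

async function executePlan(planFile: string, tasks: string[]) {
  for (const [i, task] of tasks.entries()) {
    // Record the commit before the subagent starts (BASE_SHA).
    const baseSha = execSync("git rev-parse HEAD").toString().trim();

    // Fresh subagent implements exactly one task, tests it, commits, reports.
    const report = await dispatchTask(
      `You are implementing Task ${i + 1} from ${planFile}: ${task}. ` +
        `Implement it, write tests, verify, commit, and report back.`
    );

    // Record the commit after the subagent's work (HEAD_SHA).
    const headSha = execSync("git rev-parse HEAD").toString().trim();

    // Code review between tasks is the quality gate.
    const review = await dispatchCodeReviewer({
      whatWasImplemented: report,
      planOrRequirements: `Task ${i + 1} from ${planFile}`,
      baseSha,
      headSha,
      description: task,
    });

    // Dispatch a follow-up subagent only if the reviewer found real problems.
    const issues = [...review.criticalIssues, ...review.importantIssues];
    if (issues.length > 0) {
      await dispatchTask(`Fix issues from code review: ${issues.join("; ")}`);
    }
  }
  // After all tasks: dispatch a final code-reviewer over the whole plan.
}
```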
You: I'm using Subagent-Driven Development to execute this plan.
[Load plan, create TodoWrite]
Task 1: Hook installation script
[Dispatch implementation subagent]
Subagent: Implemented install-hook with tests, 5/5 passing
[Get git SHAs, dispatch code-reviewer]
Reviewer: Strengths: Good test coverage. Issues: None. Ready.
[Mark Task 1 complete]
Task 2: Recovery modes
[Dispatch implementation subagent]
Subagent: Added verify/repair, 8/8 tests passing
[Dispatch code-reviewer]
Reviewer: Strengths: Solid. Issues (Important): Missing progress reporting
[Dispatch fix subagent]
Fix subagent: Added progress every 100 conversations
[Verify fix, mark Task 2 complete]
...
[After all tasks]
[Dispatch final code-reviewer]
Final reviewer: All requirements met, ready to merge
Done!
Never:
If a subagent fails a task:
Load the plan, review it critically, execute tasks in batches, and report for review between batches.
Core principle: Batch execution with checkpoints for architect review.
Announce at start: "I'm using the executing-plans skill to implement this plan."
Default: First 3 tasks
For each task:
When batch complete:
Based on feedback:
After all tasks complete and verified:
STOP executing immediately when:
Ask for clarification rather than guessing.
Return to Review (Step 1) when:
Don't force through blockers - stop and ask.
A special case of parallel execution: when you have multiple unrelated failures that can be investigated without shared state or dependencies.
Group failures by what's broken:
Each domain is independent - fixing tool approval doesn't affect abort tests.
Each agent gets:
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
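Conceptually, the same fan-out and fan-in could be sketched in TypeScript as follows; dispatchAgent is a hypothetical stand-in for the Task tool call shown above:

```typescript
// Hypothetical helper standing in for the Task tool dispatch.
declare function dispatchAgent(prompt: string): Promise<string>;

const summaries = await Promise.all([
  dispatchAgent("Fix agent-tool-abort.test.ts failures"),
  dispatchAgent("Fix batch-completion-behavior.test.ts failures"),
  dispatchAgent("Fix tool-approval-race-conditions.test.ts failures"),
]);

// Each summary describes the root cause found and the changes made;
// after collecting them, run the full test suite once to confirm the
// independent fixes do not conflict.
```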
When agents return:
Good agent prompts are:
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
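To illustrate what "event-based waiting instead of arbitrary timeouts" might look like, a minimal test sketch follows. The runner, its "abort" event, and the message shape are assumptions for illustration; only the node:events API and the expect matcher are real.

```typescript
import { EventEmitter, once } from "node:events";

// Hypothetical tool runner under test; assumed to emit an "abort" event
// carrying the partial-output message when a tool is interrupted.
declare const runner: EventEmitter;

// Fragile: wait a fixed 500 ms and hope the abort has happened by then.
// await new Promise((resolve) => setTimeout(resolve, 500));

// Event-based: resolve as soon as the runner actually emits the event;
// the test framework's own timeout still catches the case where it never does.
const [result] = await once(runner, "abort");
expect(result.message).toContain("interrupted at");
```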
❌ Too broad: "Fix all the tests" - agent gets lost
✅ Specific: "Fix agent-tool-abort.test.ts" - focused scope
❌ No context: "Fix the race condition" - agent doesn't know where
✅ Context: Paste the error messages and test names
❌ No constraints: Agent might refactor everything
✅ Constraints: "Do NOT change production code" or "Fix tests only"
❌ Vague output: "Fix it" - you don't know what changed
✅ Specific: "Return summary of root cause and changes"
Related failures: Fixing one might fix others - investigate together first
Need full context: Understanding requires seeing entire system
Exploratory debugging: You don't know what's broken yet
Shared state: Agents would interfere (editing same files, using same resources)
Scenario: 6 test failures across 3 files after major refactoring
Failures:
Decision: Independent domains - abort logic separate from batch completion separate from race conditions
Dispatch:
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
Results:
Integration: All fixes independent, no conflicts, full suite green
Time saved: 3 problems solved in parallel vs sequentially
After agents return: