Launch multiple sub-agents in parallel to execute tasks across files or targets with intelligent model selection and quality-focused prompting
Executes identical tasks across multiple files or targets in parallel using intelligent model selection and quality-focused prompting.
Installation:

/plugin marketplace add NeoLabHQ/context-engineering-kit
/plugin install sadd@context-engineering-kit

Usage:

Task description [--files "file1.ts,file2.ts,..."] [--targets "target1,target2,..."] [--model opus|sonnet|haiku] [--output <path>]
Extract targets from the command arguments:
Input patterns:
1. --files "src/a.ts,src/b.ts,src/c.ts" --> File-based targets
2. --targets "UserService,OrderService" --> Named targets
3. Infer from task description --> Parse file paths from task
Parsing rules:
- --files provided: Split by comma, validate each path exists
- --targets provided: Split by comma, use as-is
- Neither provided: Parse file paths from the task description
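A minimal TypeScript sketch of these parsing rules (the `parseTargets` helper and its argument shape are hypothetical, shown only to make the rules concrete):

```typescript
import { existsSync } from "node:fs";

// Hypothetical helper illustrating the parsing rules above.
function parseTargets(args: { files?: string; targets?: string; task: string }): string[] {
  if (args.files) {
    const files = args.files.split(",").map((f) => f.trim());
    const missing = files.filter((f) => !existsSync(f));
    if (missing.length > 0) throw new Error(`Files not found: ${missing.join(", ")}`);
    return files;
  }
  if (args.targets) {
    return args.targets.split(",").map((t) => t.trim());
  }
  // Fallback: infer file-like paths from the task description itself.
  return args.task.match(/[\w./-]+\.\w+/g) ?? [];
}
```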
Before dispatching, analyze the task systematically:

Let me analyze this parallel task step by step to determine the optimal configuration:
1. **Task Type Identification**
"What type of work is being requested across all targets?"
- Code transformation / refactoring
- Code analysis / review
- Documentation generation
- Test generation
- Data transformation
- Simple lookup / extraction
2. **Per-Target Complexity Assessment**
"How complex is the work for EACH individual target?"
- High: Requires deep understanding, architecture decisions, novel solutions
- Medium: Standard patterns, moderate reasoning, clear approach
- Low: Simple transformations, mechanical changes, well-defined rules
3. **Per-Target Output Size**
"How extensive is each target's expected output?"
- Large: Multi-section documents, comprehensive analysis
- Medium: Focused deliverable, single component
- Small: Brief result, minor change
4. **Independence Check**
"Are the targets truly independent?"
- Yes: No shared state, no cross-dependencies, order doesn't matter
- Partial: Some shared context needed, but can run in parallel
- No: Dependencies exist --> Use sequential execution instead
Verify tasks are truly independent before proceeding:
| Check | Question | If NO |
|---|---|---|
| File Independence | Do targets share files? | Cannot parallelize - files conflict |
| State Independence | Do tasks modify shared state? | Cannot parallelize - race conditions |
| Order Independence | Does execution order matter? | Cannot parallelize - sequencing required |
| Output Independence | Does any target read another's output? | Cannot parallelize - data dependency |
If ANY check fails: STOP and inform the user why parallelization is unsafe, then recommend /launch-sub-agent for sequential execution.
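Of these checks, File Independence is the one that can be verified mechanically before dispatch; a minimal sketch (the helper name is hypothetical):

```typescript
// Hypothetical check: parallelization is unsafe if two agents touch the same file.
function assertFileIndependence(targets: string[]): void {
  const seen = new Set<string>();
  for (const target of targets) {
    if (seen.has(target)) {
      throw new Error(`Cannot parallelize: "${target}" is assigned to more than one agent.`);
    }
    seen.add(target);
  }
}
```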
Select the optimal model and specialized agent based on the task analysis. Use the same configuration for all parallel agents to ensure consistent quality:
| Task Profile | Recommended Model | Rationale |
|---|---|---|
| Complex per-target (architecture, design) | opus | Maximum reasoning capability per task |
| Specialized domain (code review, security) | opus | Domain expertise matters |
| Medium complexity, large output | sonnet | Good capability, cost-efficient for volume |
| Simple transformations (rename, format) | haiku | Fast, cheap, sufficient for mechanical tasks |
| Default (when uncertain) | opus | Optimize for quality over cost |
Decision Tree:
Is EACH target's task COMPLEX (architecture, novel problem, critical decision)?
|
+-- YES --> Use Opus for ALL agents
|
+-- NO --> Is task SIMPLE and MECHANICAL (rename, format, extract)?
|
+-- YES --> Use Haiku for ALL agents
|
+-- NO --> Is output LARGE but task not complex?
|
+-- YES --> Use Sonnet for ALL agents
|
+-- NO --> Use Opus for ALL agents (default)
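The decision tree translates directly into a selection function; a sketch with hypothetical type and field names:

```typescript
type Model = "opus" | "sonnet" | "haiku";

// Hypothetical encoding of the decision tree above.
function selectModel(profile: { complex: boolean; mechanical: boolean; largeOutput: boolean }): Model {
  if (profile.complex) return "opus";       // architecture, novel problem, critical decision
  if (profile.mechanical) return "haiku";   // rename, format, extract
  if (profile.largeOutput) return "sonnet"; // large output, moderate reasoning
  return "opus";                            // default: optimize for quality over cost
}
```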
If the task matches a specialized domain, include the relevant agent prompt in ALL parallel agents. Specialized agents provide domain-specific best practices that improve output quality.
Specialized Agents: the available list depends on the project and which plugins are loaded.
Decision: use a specialized agent when the task matches its domain and a matching agent is loaded; skip it when the task is generic or no specialized agent applies.
Build an identical prompt structure for each target, customized only with target-specific details:
## Reasoning Approach
Let's think step by step.
Before taking any action, think through the problem systematically:
1. "Let me first understand what is being asked for this specific target..."
- What is the core objective?
- What are the explicit requirements?
- What constraints must I respect?
2. "Let me analyze this specific target..."
- What is the current state?
- What patterns or conventions exist?
- What context is relevant?
3. "Let me plan my approach..."
- What are the concrete steps?
- What could go wrong?
- Is there a simpler approach?
Work through each step explicitly before implementing.
<task>
{Task description from $ARGUMENTS}
</task>
<target>
{Specific target for this agent: file path, component name, etc.}
</target>
<constraints>
- Work ONLY on the specified target
- Do NOT modify other files unless explicitly required
- Follow existing patterns in the target
- {Any additional constraints from context}
</constraints>
<output>
{Expected deliverable location and format}
</output>
## Self-Critique Verification (MANDATORY)
Before completing, verify your work for this target. Do not submit unverified changes.
### 1. Generate Verification Questions
Create questions specific to your task and target. Example questions:
| # | Question | Why It Matters |
|---|----------|----------------|
| 1 | Did I achieve the stated objective for this target? | Incomplete work = failed task |
| 2 | Are my changes consistent with patterns in this file/codebase? | Inconsistency creates technical debt |
| 3 | Did I introduce any regressions or break existing functionality? | Breaking changes are unacceptable |
| 4 | Are edge cases and error scenarios handled appropriately? | Edge cases cause production issues |
| 5 | Is my output clear, well-formatted, and ready for review? | Unclear output reduces value |
### 2. Answer Each Question with Evidence
For each question, provide specific evidence from your work:
[Q1] Objective Achievement:
- Required: [what was asked]
- Delivered: [what you did]
- Gap analysis: [any gaps]
[Q2] Pattern Consistency:
- Existing pattern: [observed pattern]
- My implementation: [how I followed it]
- Deviations: [any intentional deviations and why]
[Q3] Regression Check:
- Functions affected: [list]
- Tests that would catch issues: [if known]
- Confidence level: [HIGH/MEDIUM/LOW]
[Q4] Edge Cases:
- Edge case 1: [scenario] - [HANDLED/NOTED]
- Edge case 2: [scenario] - [HANDLED/NOTED]
[Q5] Output Quality:
- Well-organized: [YES/NO]
- Self-documenting: [YES/NO]
- Ready for PR: [YES/NO]
### 3. Fix Issues Before Submitting
If ANY verification reveals a gap:
1. **FIX** - Address the specific issue
2. **RE-VERIFY** - Confirm the fix resolves the issue
3. **DOCUMENT** - Note what was changed and why
CRITICAL: Do not submit until ALL verification questions have satisfactory answers.
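Assembling the three sections above into one self-contained per-target prompt might look like this sketch (all names are hypothetical; `COT_PREFIX` and `CRITIQUE_SUFFIX` stand for the reasoning and self-critique blocks above):

```typescript
// Hypothetical constants holding the two shared blocks above.
declare const COT_PREFIX: string;      // the "## Reasoning Approach" block
declare const CRITIQUE_SUFFIX: string; // the "## Self-Critique Verification" block

// Assemble one self-contained prompt per target.
function buildPrompt(task: string, target: string, constraints: string[], output: string): string {
  return [
    COT_PREFIX,
    `<task>\n${task}\n</task>`,
    `<target>\n${target}\n</target>`,
    `<constraints>\n${constraints.map((c) => `- ${c}`).join("\n")}\n</constraints>`,
    `<output>\n${output}\n</output>`,
    CRITIQUE_SUFFIX,
  ].join("\n\n");
}
```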
Launch all sub-agents simultaneously using the Task tool.
CRITICAL: Parallel Dispatch Pattern
Launch ALL agents in a SINGLE response. Do NOT wait for one agent to complete before starting another:
## Dispatching 3 parallel tasks
[Task 1]
Use Task tool:
description: "Parallel: simplify error handling in src/services/user.ts"
prompt: [CoT prefix + task body for user.ts + critique suffix]
model: sonnet
[Task 2]
Use Task tool:
description: "Parallel: simplify error handling in src/services/order.ts"
prompt: [CoT prefix + task body for order.ts + critique suffix]
model: sonnet
[Task 3]
Use Task tool:
description: "Parallel: simplify error handling in src/services/payment.ts"
prompt: [CoT prefix + task body for payment.ts + critique suffix]
model: sonnet
[All 3 tasks launched simultaneously - results collected when all complete]
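Conceptually, this dispatch-and-collect pattern behaves like Promise.all over independent async tasks; a hypothetical sketch where `dispatchAgent` stands in for a single Task tool call:

```typescript
// Hypothetical stand-in for a single Task tool invocation.
declare function dispatchAgent(opts: {
  description: string;
  prompt: string;
  model: string;
}): Promise<string>;

// Start every agent before awaiting any of them, then collect all results.
async function dispatchAll(
  targets: string[],
  makePrompt: (target: string) => string,
  model: string,
): Promise<string[]> {
  const tasks = targets.map((target) =>
    dispatchAgent({
      description: `Parallel: task for ${target}`,
      prompt: makePrompt(target),
      model,
    }),
  );
  return Promise.all(tasks);
}
```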
Parallelization Guidelines:
- Launch every agent in the same response; sequential dispatch forfeits the speed benefit.
- Use the same model and prompt structure for all agents to keep quality consistent.
Context Isolation (IMPORTANT):
- Sub-agents share no context, memory, or intermediate results; each prompt must contain everything its agent needs.
- No agent may read another agent's output (this is the Output Independence check above).
After all agents complete, aggregate results:
## Parallel Execution Summary
### Configuration
- **Task:** {task description}
- **Model:** {selected model}
- **Targets:** {count} items
### Results
| Target | Model | Status | Summary |
|--------|-------|--------|---------|
| {target_1} | {model} | SUCCESS/FAILED | {brief outcome} |
| {target_2} | {model} | SUCCESS/FAILED | {brief outcome} |
| ... | ... | ... | ... |
### Overall Assessment
- **Completed:** {X}/{total}
- **Failed:** {Y}/{total}
- **Common patterns:** {any patterns across results}
### Verification Summary
{Aggregate self-critique results - any common gaps?}
### Files Modified
- {list of all modified files}
### Next Steps
{If any failures, suggest remediation}
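A sketch of how the Results table and completion counts could be aggregated (the types and function are hypothetical):

```typescript
interface AgentResult {
  target: string;
  model: string;
  ok: boolean;
  summary: string;
}

// Hypothetical rendering of the Results table above.
function renderResults(results: AgentResult[]): string {
  const rows = results.map(
    (r) => `| ${r.target} | ${r.model} | ${r.ok ? "SUCCESS" : "FAILED"} | ${r.summary} |`,
  );
  const completed = results.filter((r) => r.ok).length;
  return [
    "| Target | Model | Status | Summary |",
    "|--------|-------|--------|---------|",
    ...rows,
    "",
    `Completed: ${completed}/${results.length}`,
  ].join("\n");
}
```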
Failure Handling: classify failures using the failure-type table at the end of this document; for sequential fallback, use /launch-sub-agent.

Input:
/do-in-parallel "Simplify error handling to use early returns instead of nested if-else" \
--files "src/services/user.ts,src/services/order.ts,src/services/payment.ts"
Analysis:
Model Selection: Sonnet (pattern-based, medium complexity)
Dispatch: 3 parallel agents, one per file
Result:
## Parallel Execution Summary
### Configuration
- **Task:** Simplify error handling to use early returns
- **Model:** Sonnet
- **Targets:** 3 files
### Results
| Target | Model | Status | Summary |
|--------|-------|--------|---------|
| src/services/user.ts | sonnet | SUCCESS | Converted 4 nested if-else blocks to early returns |
| src/services/order.ts | sonnet | SUCCESS | Converted 6 nested if-else blocks to early returns |
| src/services/payment.ts | sonnet | SUCCESS | Converted 3 nested if-else blocks to early returns |
### Overall Assessment
- **Completed:** 3/3
- **Common patterns:** All files followed consistent early return pattern
Input:
/do-in-parallel "Generate JSDoc documentation for all public methods" \
--files "src/api/users.ts,src/api/products.ts,src/api/orders.ts,src/api/auth.ts"
Analysis:
Model Selection: Haiku (mechanical, well-defined rules)
Dispatch: 4 parallel agents
Input:
/do-in-parallel "Analyze for potential SQL injection vulnerabilities and suggest fixes" \
--files "src/db/queries.ts,src/db/migrations.ts,src/api/search.ts"
Analysis:
Model Selection: Opus (security-critical, requires deep analysis)
Dispatch: 3 parallel agents
Input:
/do-in-parallel "Generate unit tests achieving 80% coverage" \
--targets "UserService,OrderService,PaymentService,NotificationService"
Analysis:
Model Selection: Sonnet (pattern-based, extensive output)
Dispatch: 4 parallel agents
Input:
/do-in-parallel "Apply consistent logging format to src/handlers/user.ts, src/handlers/order.ts, and src/handlers/product.ts"
Analysis:
Model Selection: Haiku (simple, mechanical)
Dispatch: 3 parallel agents
| Scenario | Model | Reason |
|---|---|---|
| Security analysis | Opus | Critical reasoning required |
| Architecture decisions | Opus | Quality over speed |
| Simple refactoring | Haiku | Fast, sufficient |
| Documentation generation | Haiku | Mechanical task |
| Code review per file | Sonnet | Balanced capability |
| Test generation | Sonnet | Extensive but patterned |
| Failure Type | Description | Recovery Action |
|---|---|---|
| Recoverable | Sub-agent made a mistake but approach is sound | Retry step with corrected prompt (max 1 retry) |
| Approach Failure | The approach for this step is wrong | Escalate to user with options |
| Foundation Issue | Previous step output is insufficient | May need to revisit earlier step |
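The recoverable case maps to a single bounded retry; a hypothetical sketch (the structure itself enforces the max-1-retry rule):

```typescript
// Hypothetical: retry a recoverable failure at most once with a corrected prompt.
async function runWithRetry(
  run: (prompt: string) => Promise<string>,
  prompt: string,
  correctPrompt: (failedPrompt: string, error: unknown) => string,
): Promise<string> {
  try {
    return await run(prompt);
  } catch (error) {
    // Recoverable failure: the approach is sound, so retry once with a corrected prompt.
    return run(correctPrompt(prompt, error));
  }
}
```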
Critical Rules:
- Launch ALL agents in a SINGLE response; never dispatch sequentially.
- If ANY independence check fails, STOP and recommend /launch-sub-agent instead.
- No agent may submit until every self-critique question has a satisfactory answer.