Execute plan in parallel using git worktrees and multiple Claude sessions
Execute parallel development tasks using pre-created git worktrees and autonomous Haiku agents. Use this after planning to implement multiple features simultaneously with 85% cost savings and 2x speedup.
/plugin marketplace add Shakes-tzd/contextune
/plugin install contextune@Contextune
You are executing an automated parallel development workflow with optimized parallel setup.
Contextune Integration: This command can be triggered via /contextune:parallel:execute or natural language like "work on these tasks in parallel", "parallelize this work".
Key Innovation: Worktrees are pre-created by script, agents focus purely on implementation, enabling deterministic parallel execution.
Setup Performance (Measured Actuals):
Note: Times shown are actual measurements from completed workflows, not estimates.
Scaling: Setup time is O(1) instead of O(n), i.e., constant regardless of task count!
Check required tools:
# Verify git and worktree support
git --version && git worktree list
Requirements:
If validation fails:
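To pinpoint exactly what is missing, run a pre-flight check like this sketch (the tool list is an assumption based on the commands used later in this workflow; adjust to your project):

```bash
# Sketch: report any missing tools instead of failing on first use.
for tool in git gh; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "Missing required tool: $tool" >&2
  fi
done

# Worktrees need git 2.5+; print the version for a manual check.
git --version

# yq is optional -- the setup script below falls back to grep/sed without it.
command -v yq >/dev/null 2>&1 || echo "Note: yq not found; setup script will use its grep/sed fallback"
```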
Problem: Without auto-approval, you must approve EVERY git command from EVERY parallel agent individually, negating all parallelism benefits!
Solution: Pre-approve safe git commands using Claude Code's IAM permission system.
Quick Setup:
Run in Claude Code: /permissions
Add these allow rules:
Bash(git add:*)
Bash(git commit:*)
Bash(git push:*)
Set permission mode: "defaultMode": "acceptEdits" in settings
Done! Agents work autonomously 🎉
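For reference, the same configuration can be written in one step. This is a sketch; it assumes your project-level settings live at .claude/settings.local.json, which may differ in your setup:

```bash
# Sketch: write the allow rules and permission mode in one step.
# Assumption: project-level settings at .claude/settings.local.json.
mkdir -p .claude
cat > .claude/settings.local.json <<'EOF'
{
  "permissions": {
    "defaultMode": "acceptEdits",
    "allow": [
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Bash(git push:*)"
    ]
  }
}
EOF
```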
📚 Complete guide: See docs/AUTO_APPROVAL_CONFIGURATION.md for:
Without auto-approval:
Agent 1: Waiting for approval... (blocked)
Agent 2: Waiting for approval... (blocked)
Agent 3: Waiting for approval... (blocked)
→ You: Must approve each one individually (bottleneck!)
With auto-approval:
Agent 1: Implementing... ✓  Testing... ✓  Committing... ✓
Agent 2: Implementing... ✓  Testing... ✓  Committing... ✓
Agent 3: Implementing... ✓  Testing... ✓  Committing... ✓
→ True parallelism! 🎉
Generate and run the setup script:
# Create scripts directory if needed
mkdir -p .parallel/scripts
# Generate setup script
cat > .parallel/scripts/setup_worktrees.sh <<'WORKTREE_SCRIPT'
#!/usr/bin/env bash
set -euo pipefail
# Find plan.yaml
PLAN_FILE=".parallel/plans/active/plan.yaml"
if [ ! -f "$PLAN_FILE" ]; then
echo "Error: plan.yaml not found at $PLAN_FILE"
exit 1
fi
# Extract task IDs
if command -v yq &> /dev/null; then
TASK_IDS=$(yq '.tasks[].id' "$PLAN_FILE")
else
TASK_IDS=$(grep -A 100 "^tasks:" "$PLAN_FILE" | grep " - id:" | sed 's/.*id: *"\([^"]*\)".*/\1/')
fi
echo "Creating worktrees for $(echo "$TASK_IDS" | wc -l | tr -d ' ') tasks..."
# Create worktrees in parallel
echo "$TASK_IDS" | while read task_id; do
branch="feature/$task_id"
worktree="worktrees/$task_id"
if [ -d "$worktree" ]; then
echo "โ ๏ธ Worktree exists: $task_id"
elif git show-ref --verify --quiet refs/heads/$branch; then
git worktree add "$worktree" "$branch" 2>/dev/null && echo "โ
Created: $task_id (existing branch)"
else
git worktree add "$worktree" -b "$branch" 2>&1 | grep -v "Preparing" && echo "โ
Created: $task_id"
fi
done
echo ""
echo "โ
Setup complete! Active worktrees:"
git worktree list | grep "worktrees/"
WORKTREE_SCRIPT
chmod +x .parallel/scripts/setup_worktrees.sh
# Run the script
./.parallel/scripts/setup_worktrees.sh
What this does:
NEW WORKFLOW: Direct file loading (plan.yaml created by /ctx:plan)
Extraction is now deprecated - /ctx:plan creates files directly, so we just load them.
Check if plan.yaml exists (created by new /ctx:plan):
if [ -f .parallel/plans/plan.yaml ]; then
  echo "✅ Found plan.yaml (created by /ctx:plan)"
  cat .parallel/plans/plan.yaml
else
  echo "⚠️  No plan.yaml found, trying extraction (deprecated)"
fi
Possible outcomes:
A) Plan exists (EXPECTED):
✅ Found plan.yaml (created by /ctx:plan)
→ Files were created by the new /ctx:plan workflow
→ Continue to Step 3 (validate and execute)
B) Plan doesn't exist (FALLBACK NEEDED):
⚠️ No plan.yaml found, trying extraction (deprecated)
→ May be an old plan from before the refactor → Continue to Step 2 (try extraction)
⚠️ DEPRECATED: This path is only for plans created before the refactor.
New plans created with /ctx:plan skip this step entirely.
Try to extract plan from conversation transcript:
"${CLAUDE_PLUGIN_ROOT}/scripts/extract-current-plan.sh"
Possible outcomes:
A) Extraction succeeds:
✅ Created .parallel/plans/plan.yaml
✅ Extraction complete!
→ Continue to Step 3 (load the extracted plan)
B) Extraction fails:
❌ Error: No plan found in transcript
→ No plan files AND no plan in conversation → Continue to Step 4 (ask user to create plan)
Read and validate the plan.yaml file:
# Read the plan
cat .parallel/plans/plan.yaml
Validate the plan has required structure:
- metadata: section with name, status
- tasks: array with at least one task
For each task in the plan, check the same required fields (a validation sketch follows).
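If yq is available, a quick structural check can replace eyeballing the file. This is a sketch (Go yq v4 syntax assumed; the field names match the loader shown later):

```bash
PLAN=".parallel/plans/plan.yaml"

# Each check prints nothing on success.
yq -e '.metadata.name'      "$PLAN" >/dev/null || echo "❌ plan missing metadata.name"
yq -e '.metadata.status'    "$PLAN" >/dev/null || echo "❌ plan missing metadata.status"
yq -e '.tasks | length > 0' "$PLAN" >/dev/null || echo "❌ plan has no tasks"

# Per-task fields the executor relies on (id, name, file).
yq -e '.tasks[] | select(.id == null or .name == null or .file == null)' "$PLAN" >/dev/null 2>&1 \
  && echo "❌ some tasks are missing id/name/file"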
Context Optimization:
Files created by /ctx:plan in this session → already in context (0 tokens!)
Plan loaded and validated → Continue to Phase 2 (setup worktrees)
Only reached if both Step 1 and Step 2 failed
Tell the user:
❌ No plan found in conversation or filesystem.
To create a plan, run: /ctx:plan
Then re-run /ctx:execute to start parallel execution.
Do NOT proceed to Phase 2 without a valid plan.
Plan validation:
Status filtering:
If validation fails:
Run /contextune:parallel:plan to create a proper plan.
IMPORTANT: This is where the optimization happens! We use specialized Haiku agents for 85% cost savings and 2x speedup.
Three-Tier Architecture in Action:
Key architectural decision: Worktrees are pre-created by script before agents spawn!
Why this matters:
Performance:
Old approach (agents create own worktrees):
5 tasks: 73s total setup (60s planning + 5s spawn + 8s per-agent worktree creation)
New approach (script pre-creates worktrees):
5 tasks: 70s total setup (60s planning + 5s script + 5s spawn), then work time (deterministic!)
Time saved: 3s of setup, but the real win is deterministic, variance-free setup (see the full time analysis below)
Observability via Git + PRs:
Inspect progress at any time with git worktree list, gh pr list, and git history.
For each independent task in the plan:
Spawn a parallel-task-executor Haiku agent. Each agent receives:
IMPORTANT: If using Contextune v0.4.0+ with context-grounded research:
→ All research was ALREADY done during planning!
→ The specifications you receive are COMPLETE and GROUNDED:
→ Your job is EXECUTION ONLY:
Why this matters: The planning phase (Sonnet) already did comprehensive parallel research. You (Haiku) are optimized for fast, accurate execution of well-defined tasks. Trust the plan!
Cost savings:
You are a subagent working on: {task.name}
**Task Reference:** .parallel/plans/tasks/{task.id}.md (Markdown + YAML frontmatter)
**Plan Reference:** .parallel/plans/plan.yaml
**IMPORTANT:** Your task file is in Markdown with YAML frontmatter containing your complete specification.
**Quick Reference:**
- Priority: {task.priority}
- Dependencies: {task.dependencies}
**Your Complete Task Specification:**
Read your task file for all details:
```bash
cat .parallel/plans/tasks/{task.id}.md
```
The file contains:
Step 0: Mark Task as In Progress
Update the task status in plan.yaml to track that you're starting work:
# Update task status to in_progress
TASK_FILE=".parallel/plans/active/tasks/{task.id}.md"
# Use sed to update status field in YAML frontmatter
sed -i.bak 's/^status: pending$/status: in_progress/' "$TASK_FILE"
# Verify the update
echo "โ
Task status updated to in_progress"
Step 1: Navigate to Your Worktree
Your worktree and branch were already created by the setup script!
# Navigate to your worktree
cd worktrees/{task.id}
# Verify you're in the right place
echo "Current branch: $(git branch --show-current)"
echo "Expected branch: feature/{task.id}"
Step 2: Setup Development Environment
# Copy environment files if they exist
cp ../../.env .env 2>/dev/null || true
cp ../../.env.local .env.local 2>/dev/null || true
# Install dependencies (adjust based on project type)
{project_setup_commands}
# Example for Node.js:
# npm install
# Example for Python:
# uv sync
# Example for Rust:
# cargo build
Verify setup:
# Run a quick test to ensure environment works
{project_verify_command}
# Example: npm run typecheck
# Example: uv run pytest --collect-only
# Log that setup is complete
echo "โ
Environment setup complete, starting implementation..."
Read your task file for complete details:
cat .parallel/plans/tasks/{task.id}.md
Your task file is in Markdown with YAML frontmatter:
Follow the implementation approach specified in the task:
Detailed implementation steps:
{Generate steps based on task.objective, task.files, and task.implementation}
🎯 EXECUTION-ONLY Guidelines (v0.4.0):
DO (Execute as specified):
DON'T (No research, no decisions):
IF UNCLEAR:
Remember: All research was done. All decisions were made. You execute!
Commit messages should follow:
{type}: {brief description}
{detailed explanation if needed}
Implements: {task.id}
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
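A filled-in example following the template (the task and message are hypothetical):

```bash
# Sketch: commit with a multi-line message via a quoted heredoc.
git add -A
git commit -m "$(cat <<'EOF'
feat: add rate limiting to login endpoint

Adds a sliding-window limiter in front of the auth handler.

Implements: task-0

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
```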
Run all relevant tests:
# Unit tests
{unit_test_command}
# Integration tests (if applicable)
{integration_test_command}
# Linting
{lint_command}
# Type checking
{typecheck_command}
All tests MUST pass before pushing!
If tests fail:
Push your branch:
Use git push (single command is OK per DRY strategy):
git push origin "feature/{task.id}"
Note: If you need commit + push workflow, use "${CLAUDE_PLUGIN_ROOT}/scripts/commit_and_push.sh" instead.
Log completion:
echo "โ
Task {task.id} completed and pushed!"
echo ""
echo "Branch: feature/{task.id}"
echo "Commits: $(git log --oneline origin/main..HEAD | wc -l)"
echo ""
echo "Files changed:"
git diff --name-only origin/main..HEAD
echo ""
echo "Ready for PR creation!"
Step 1: Mark Task as Completed
Update the task status to reflect successful completion:
# Update task status to completed
TASK_FILE=".parallel/plans/active/tasks/{task.id}.md"
# Use sed to update status field in YAML frontmatter
sed -i.bak 's/^status: in_progress$/status: completed/' "$TASK_FILE"
# Verify the update
echo "โ
Task status updated to completed"
Step 2: Return to main agent with:
✅ Task completed successfully!
**Task ID:** {task.id}
**Branch:** feature/{task.id}
**Worktree:** worktrees/{task.id}
**Tests:** All passing ✅
**Status:** Ready for PR
**Summary:** {brief summary of what was implemented}
Why this architecture provides excellent observability:
Task Files (.parallel/plans/active/tasks/task-N.md):
Git Branches:
Git Worktrees:
git worktree list
Pull Requests:
Observability Commands:
# See task status
grep "^status:" .parallel/plans/active/tasks/*.md
# See all active parallel work
git worktree list
git branch | grep feature/task
# See progress on specific task
cat .parallel/plans/active/tasks/task-0.md
cd worktrees/task-0
git log --oneline
# See what changed in a task
cd worktrees/task-0
git diff origin/main..HEAD
# See PRs
gh pr list --head feature/task-0
# See all completed tasks
grep "status: completed" .parallel/plans/active/tasks/*.md
Benefits:
If worktree navigation fails:
Run git worktree list to verify the worktree exists.
If tests fail:
TASK_FILE=".parallel/plans/active/tasks/{task.id}.md"
sed -i.bak 's/^status: .*$/status: blocked/' "$TASK_FILE"
If environment setup fails:
TASK_FILE=".parallel/plans/active/tasks/{task.id}.md"
sed -i.bak 's/^status: .*$/status: blocked/' "$TASK_FILE"
If implementation is unclear:
TASK_FILE=".parallel/plans/active/tasks/{task.id}.md"
sed -i.bak 's/^status: .*$/status: blocked/' "$TASK_FILE"
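The same sed edit recurs for every status change; a small helper (a sketch, with a hypothetical function name) keeps it in one place:

```bash
# Sketch: set the status field in a task file's YAML frontmatter.
set_task_status() {
  local task_id="$1" new_status="$2"
  local task_file=".parallel/plans/active/tasks/${task_id}.md"
  sed -i.bak "s/^status: .*$/status: ${new_status}/" "$task_file"
  echo "✅ ${task_id} → ${new_status}"
}

# Usage:
# set_task_status task-0 in_progress
# set_task_status task-0 blocked
```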
End of subagent instructions.
### Spawning Haiku Agents (Implementation)
**Use the Task tool with `parallel-task-executor` agent:**
For each task in the YAML plan, create a Task tool invocation with:
- `description`: "{task.name}"
- `prompt`: The complete subagent instructions template above (filled with task-specific YAML values)
- `subagent_type`: "contextune:parallel-task-executor" (Haiku agent)
**Load task data with context optimization:**
```python
import yaml

# Step 1: Read the plan.yaml index
with open('.parallel/plans/plan.yaml') as f:
    plan = yaml.safe_load(f)

# Step 2: For each task in the index
for task_ref in plan['tasks']:
    task_id = task_ref['id']
    task_name = task_ref['name']          # Name from index!
    task_file = task_ref['file']
    task_priority = task_ref['priority']
    task_dependencies = task_ref.get('dependencies', [])

    # Step 3: Context optimization decision
    # Question: Is the task file already in context?
    # Answer: YES if created in this session, NO if reading from disk.
    # If NOT in context -> read the task file.
    # If IN context -> skip the read and use the existing context!

    # For now, read the task file when spawning the agent
    # (the Haiku agent will use it directly for GitHub issue creation).
    with open(f'.parallel/plans/{task_file}') as f:
        task_content = f.read()

    # Fill the template with data from the INDEX (not the full task file!)
    # The Haiku agent reads the full task file for implementation details.
    prompt = subagent_template.format(
        task_id=task_id,
        task_name=task_name,                             # From index!
        task_priority=task_priority,                     # From index!
        task_dependencies=', '.join(task_dependencies),  # From index!
    )

    # Spawn the agent with the minimal prompt (pseudocode for the Task tool).
    # The agent reads tasks/task-N.md for its complete spec.
    Task(
        description=task_name,
        prompt=prompt,
        subagent_type="contextune:parallel-task-executor"
    )
```
Context Optimization Benefits:
CRITICAL: Spawn ALL agents in a SINGLE response using multiple Task tool invocations. This ensures parallel execution from the start.
Same Session (Plan just created):
1. User runs /contextune:parallel:plan
2. Planning agent creates:
- plan.yaml (in context)
- tasks/task-0.md (in context)
- tasks/task-1.md (in context)
- tasks/task-2.md (in context)
3. User runs /contextune:parallel:execute
4. Execution agent:
- Reads plan.yaml (~1K tokens)
- Tasks ALREADY in context (0 tokens!)
- Total: 1K tokens ✅
Savings: Massive! No re-reading task files.
New Session (Reading from disk):
1. User runs /contextune:parallel:execute (new session)
2. Execution agent:
- Reads plan.yaml index (~1K tokens)
- Sees task-0, task-1, task-2 in index
- Reads task-0.md when spawning agent (~3K tokens)
- Reads task-1.md when spawning agent (~3K tokens)
- Reads task-2.md when spawning agent (~3K tokens)
- Total: ~10K tokens
Still optimized: Only reads what's needed, when it's needed.
Key Insight: plan.yaml acts as lightweight index/TOC. Model decides when to read full task files based on context availability.
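For illustration, here is a hypothetical plan.yaml index with the fields the loader above expects (your generated plan will differ; names and values are made up):

```yaml
metadata:
  name: "auth-dashboard-analytics"   # hypothetical plan name
  status: active
tasks:
  - id: task-0
    name: "Implement authentication"
    file: tasks/task-0.md
    priority: high
    dependencies: []
  - id: task-1
    name: "Build dashboard UI"
    file: tasks/task-1.md
    priority: medium
    dependencies: []
```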
Cost Tracking: Each Haiku agent costs ~$0.04 per task execution (vs $0.27 Sonnet - 85% savings!)
For 5 parallel tasks:
Example for 3 tasks:
[Single response with 3 Task tool calls using parallel-task-executor agent]
Task 1: Implement authentication (Haiku agent - $0.04)
Task 2: Build dashboard UI (Haiku agent - $0.04)
Task 3: Add analytics tracking (Haiku agent - $0.04)
Total cost: $0.12 (vs $0.81 Sonnet - 85% savings!)
All 3 Haiku agents start simultaneously in their pre-created worktrees! ⚡
While subagents are working:
Users can check progress at any time with:
/contextune:parallel:status
This will show:
Main agent responsibilities during monitoring:
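In addition to the status command, an ad-hoc watch loop gives a live view (a sketch; assumes the watch utility is installed):

```bash
# Sketch: refresh task statuses and active worktrees every 10 seconds.
watch -n 10 'grep "^status:" .parallel/plans/active/tasks/*.md; echo; git worktree list'
```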
As each subagent completes:
Verify completion:
Review changes:
# Check task status
grep "^status:" .parallel/plans/active/tasks/task-*.md
# Switch to completed worktree
cd worktrees/task-0
# Review diff
git diff origin/main..HEAD
Prepare for merge:
Generate and run the PR creation script:
# Generate PR creation script
cat > .parallel/scripts/create_prs.sh <<'PR_SCRIPT'
#!/usr/bin/env bash
set -euo pipefail
BASE_BRANCH="${1:-main}"
TASKS_DIR=".parallel/plans/active/tasks"
echo "Creating PRs for completed tasks..."
# Find completed tasks
for task_file in "$TASKS_DIR"/task-*.md; do
  [ -f "$task_file" ] || continue

  status=$(grep "^status:" "$task_file" | head -1 | awk '{print $2}')
  [ "$status" = "completed" ] || continue

  task_id=$(basename "$task_file" .md)
  branch="feature/$task_id"
  title=$(grep "^# " "$task_file" | head -1 | sed 's/^# //')
  labels=$(awk '/^labels:/,/^[a-z]/ {if ($0 ~ /^[[:space:]]*-/) print $2}' "$task_file" | tr '\n' ',' | sed 's/,$//')

  # Skip if a PR already exists for this branch
  existing=$(gh pr list --head "$branch" --json number --jq 'length' 2>/dev/null || echo 0)
  if [ "${existing:-0}" -gt 0 ]; then
    echo "⚠️  PR exists for $task_id"
    continue
  fi

  # Create PR
  if [ -n "$labels" ]; then
    gh pr create --base "$BASE_BRANCH" --head "$branch" --title "$title" --body-file "$task_file" --label "$labels"
  else
    gh pr create --base "$BASE_BRANCH" --head "$branch" --title "$title" --body-file "$task_file"
  fi
  echo "✅ Created PR for $task_id: $title"
done
echo ""
echo "โ
PR creation complete!"
PR_SCRIPT
chmod +x .parallel/scripts/create_prs.sh
# Run the script
./.parallel/scripts/create_prs.sh
What this does:
Alternative: Merge Directly (No Review Needed)
If you don't need PR reviews, use the merge script:
# For each completed task:
"${CLAUDE_PLUGIN_ROOT}/scripts/merge_and_cleanup.sh" task-0 "Fix missing utils module"
Script handles:
Choose based on project workflow:
IMPORTANT: If you chose direct merge (no PRs), use the merge script with smart error recovery!
For EACH completed task, use smart_execute wrapper:
# Merge task-0 with AI error recovery
"${CLAUDE_PLUGIN_ROOT}/scripts/smart_execute.sh" "${CLAUDE_PLUGIN_ROOT}/scripts/merge_and_cleanup.sh" task-0 "Review CRUD endpoints"
# Merge task-1 with AI error recovery
"${CLAUDE_PLUGIN_ROOT}/scripts/smart_execute.sh" "${CLAUDE_PLUGIN_ROOT}/scripts/merge_and_cleanup.sh" task-1 "Paper CSV import"
# Merge task-2 with AI error recovery
"${CLAUDE_PLUGIN_ROOT}/scripts/smart_execute.sh" "${CLAUDE_PLUGIN_ROOT}/scripts/merge_and_cleanup.sh" task-2 "Paper listing endpoint"
# Merge task-3 with AI error recovery
"${CLAUDE_PLUGIN_ROOT}/scripts/smart_execute.sh" "${CLAUDE_PLUGIN_ROOT}/scripts/merge_and_cleanup.sh" task-3 "Database-first workflow"
Why use smart_execute.sh wrapper:
Error recovery cascade:
Script executes → Error?
├─ NO  → Success ✅
└─ YES → Haiku analyzes error
    ├─ Fixed → Success ✅  (70-80% of errors)
    └─ Still failing → Copilot escalates
        ├─ Fixed → Success ✅  (90-95% of remaining)
        └─ Still failing → Escalate to you (Claude main session)
What the merge script does:
# For each task:
1. git checkout main (or specified branch)
2. git pull origin main (get latest)
3. git merge --no-ff feature/task-N -m "Merge branch 'feature/task-N'"
4. git push origin main
5. git branch -d feature/task-N (delete local)
6. git push origin --delete feature/task-N (delete remote)
7. Clean up worktree
NEVER use manual git commands for merging! The script:
If merge conflicts occur:
The script will detect and report conflicts. Then:
# Script stops at conflict - resolve manually
git status # See conflicted files
# Edit files to resolve conflicts
# Then:
git add <resolved-files>
git commit # Complete the merge
git push origin main
# Clean up manually
git worktree remove worktrees/task-N
git branch -d feature/task-N
Token efficiency:
After merging all tasks:
Run full test suite on main:
git checkout main
{full_test_command}
Check for integration issues:
Fix any integration bugs:
Clean up completed worktrees:
# Use the cleanup command
/contextune:parallel:cleanup
The command handles:
Archive the plan:
# Move timestamped plan to archive
mkdir -p .parallel/archive
mv .parallel/plans/20251025-042057 .parallel/archive/
# Or keep it for reference (plans are lightweight)
# Plans with status tracking are useful for future reference
Provide comprehensive summary with cost tracking:
✅ Parallel execution complete!
**Task Status Summary:**
- ✅ Completed: {N} tasks
- ⚠️ Blocked: {M} tasks (if any)
- ⏳ In Progress: {P} tasks (if any)
- 📋 Pending: {Q} tasks (if any)
**Tasks Completed:** {N} / {Total}
**Actual Wall-Clock Time:** {actual_time} (measured)
**Speedup Factor:** {speedup_factor}x (calculated from actuals)
**Token Usage:** {total_tokens} tokens
**Actual Cost:** ${cost}
**Note:** All metrics shown are ACTUAL measurements from this workflow.
Future workflows may vary based on task complexity and dependencies.
**Merged Branches:**
- feature/task-0: {task 0 title}
- feature/task-1: {task 1 title}
- feature/task-2: {task 2 title}
**Test Results:**
- ✅ All unit tests passing
- ✅ All integration tests passing
- ✅ Linter passing
- ✅ Type checker passing
**Pull Requests:**
- Created: #{PR_NUM1}, #{PR_NUM2}, #{PR_NUM3}
**💰 Cost Savings (Haiku Optimization):**
┌───────────────────────────────────────────────────┐
│ Cost Comparison: Sonnet vs Haiku Agents           │
├───────────────────────────────────────────────────┤
│ Scenario 1: All Sonnet Agents                     │
│   Main agent (planning): $0.054                   │
│   {N} execution agents:  ${N × 0.27}              │
│   Total:                 ${total_sonnet}          │
├───────────────────────────────────────────────────┤
│ Scenario 2: Haiku Agents (ACTUAL) ✨              │
│   Main agent (planning): $0.054                   │
│   {N} Haiku agents:      ${N × 0.04}              │
│   Total:                 ${total_haiku}           │
├───────────────────────────────────────────────────┤
│ 💵 This Workflow Saved:  ${savings}               │
│ 📉 Cost Reduction:       {percentage}%            │
│ ⚡ Speed Improvement:    ~2x faster               │
└───────────────────────────────────────────────────┘
Annual projection (1,200 workflows):
• Old cost (Sonnet): ${total_sonnet × 1200}
• New cost (Haiku): ${total_haiku × 1200}
• Annual savings: ${savings × 1200} 💰
**Next Steps:**
- [ ] Review merged code
- [ ] Deploy to staging
- [ ] Update documentation
- [ ] Announce to team
**Cleanup:**
- Worktrees removed: {N}
- Branches deleted: {N}
- Plan archived: .parallel/archive/20251025-042057/
🎉 All tasks completed successfully via Contextune parallel execution!
🚀 Powered by Script-Based Setup + Context-Grounded Research v0.5.5
✨ Scripts handle infrastructure, Haiku agents execute blindly
Calculate Cost Savings:
Use this formula to calculate actual costs:
```python
# Cost per agent (Claude pricing as of Oct 2024)
SONNET_INPUT_COST = 3.00 / 1_000_000    # $3/MTok
SONNET_OUTPUT_COST = 15.00 / 1_000_000  # $15/MTok
HAIKU_INPUT_COST = 0.80 / 1_000_000     # $0.80/MTok
HAIKU_OUTPUT_COST = 4.00 / 1_000_000    # $4/MTok

# Average tokens per agent
MAIN_AGENT_INPUT = 18_000
MAIN_AGENT_OUTPUT = 3_000
EXECUTION_AGENT_INPUT_SONNET = 40_000
EXECUTION_AGENT_OUTPUT_SONNET = 10_000
EXECUTION_AGENT_INPUT_HAIKU = 30_000
EXECUTION_AGENT_OUTPUT_HAIKU = 5_000

# Calculate costs
main_cost = (MAIN_AGENT_INPUT * SONNET_INPUT_COST +
             MAIN_AGENT_OUTPUT * SONNET_OUTPUT_COST)
sonnet_agent_cost = (EXECUTION_AGENT_INPUT_SONNET * SONNET_INPUT_COST +
                     EXECUTION_AGENT_OUTPUT_SONNET * SONNET_OUTPUT_COST)
haiku_agent_cost = (EXECUTION_AGENT_INPUT_HAIKU * HAIKU_INPUT_COST +
                    EXECUTION_AGENT_OUTPUT_HAIKU * HAIKU_OUTPUT_COST)

# Total costs (completed_tasks comes from the plan's status tracking)
num_tasks = len(completed_tasks)
total_sonnet = main_cost + (num_tasks * sonnet_agent_cost)
total_haiku = main_cost + (num_tasks * haiku_agent_cost)
savings = total_sonnet - total_haiku
percentage = (savings / total_sonnet) * 100

# Format nicely
print(f"This workflow saved: ${savings:.2f} ({percentage:.0f}% reduction)")
```
Users can trigger this command with:
- /contextune:parallel:execute (explicit)
Contextune automatically detects these intents and routes to this command.
This command works in ALL projects after installing Contextune:
/plugin install contextune@Contextune
No project-specific configuration needed.
When suggesting next steps, mention:
- /contextune:parallel:status - Monitor progress
- /contextune:parallel:cleanup - Clean up completed work
- /contextune:parallel:plan - Create development plan
Time Analysis for 5 tasks (old approach, agents create their own worktrees):
Planning: 60s
Spawn 5 agents: 5s
Each agent creates issue + worktree: 8s (concurrent!) ⚡
──────────────────────────────────────────
Total Setup: 73s
Work time: parallel ✅
Time Analysis for 5 tasks (new approach, script pre-creates worktrees):
Planning: 60s
Run setup_worktrees.sh: 5s (all 5 in parallel!)
Spawn 5 agents: 5s
──────────────────────────────────────────
Total Setup: 70s
Work time: parallel ✅
Time Saved: 3 seconds (4% faster)
But more importantly:
Natural Language:
User: "work on auth, dashboard, and analytics in parallel"
You: Analyzing request... detected 3 independent tasks.
Creating parallel execution plan...
✅ Plan created: .parallel/plans/20251021-143000/
Setting up infrastructure...
✅ Created 3 worktrees in 5 seconds (script)
Spawning 3 autonomous Haiku agents...
🚀 Agent 1: Auth implementation (worktrees/task-0)
🚀 Agent 2: Dashboard UI (worktrees/task-1)
🚀 Agent 3: Analytics tracking (worktrees/task-2)
[Agents work concurrently in pre-created worktrees]
✅ All tasks completed with parallel execution
Token efficiency: Significant reduction vs sequential approach
Creating PRs...
✅ 3 PRs created from task files
Triggered via Contextune natural language detection.
Explicit Command:
User: "/contextune:parallel:execute"
You: [Load existing plan or ask for task list]
[Execute full parallel workflow as above]
Issue: "Worktree already exists"
- Run git worktree list to see active worktrees
- Remove the stale worktree: git worktree remove worktrees/task-0
- Or run /contextune:parallel:cleanup
Issue: "Setup script failed"
- Check git --version (need 2.5+ for worktree support)
- Verify the script exists under .parallel/scripts/
- Make it executable: chmod +x .parallel/scripts/setup_worktrees.sh
Issue: "Tests failing in subagent"
grep "^status:" .parallel/plans/active/tasks/task-*.mdIssue: "Merge conflicts"
Issue: "Subagent not responding"
Contextune v0.3.0 includes specialized Haiku agents for specific operations. Use them when you need focused capabilities:
Use for: Complete feature implementation from start to finish
Capabilities:
Cost: ~$0.04 per task
When to use: Default agent for all parallel task execution
Use for: Git worktree lifecycle management
Capabilities:
Cost: ~$0.008 per operation
When to use: Troubleshooting worktree issues, bulk cleanup, advanced worktree operations
Example:
# Use directly for troubleshooting
Task tool with subagent_type: "contextune:worktree-manager"
Prompt: "Diagnose and fix worktree lock files in .git/worktrees/"
Use for: GitHub issue management
Capabilities:
Cost: ~$0.01 per operation
When to use: Bulk issue creation, issue management automation, complex labeling
Example:
# Use directly for bulk operations
Task tool with subagent_type: "contextune:issue-orchestrator"
Prompt: "Create 10 issues from this task list and label them appropriately"
Use for: Autonomous test execution and reporting
Capabilities:
Cost: ~$0.02 per test run
When to use: Dedicated test execution, failure tracking, CI/CD integration
Example:
# Use directly for test automation
Task tool with subagent_type: "contextune:test-runner"
Prompt: "Run full test suite and create GitHub issues for any failures"
Use for: Workflow benchmarking and optimization
Capabilities:
Cost: ~$0.015 per analysis
When to use: Performance monitoring, optimization analysis, cost tracking
Example:
# Use directly for performance analysis
Task tool with subagent_type: "contextune:performance-analyzer"
Prompt: "Analyze the last 5 parallel workflows and identify bottlenecks"
Three-Tier Model in Practice:
Tier 1 - Skills (Sonnet):
Tier 2 - Orchestration (Sonnet - You):
Tier 3 - Execution (Haiku):
Result: 81% overall cost reduction!
Example Workflow Costs:
5 Parallel Tasks (Old - All Sonnet):
Main agent (planning): $0.054
5 execution agents: $1.350 (5 × $0.27)
Total: $1.404
5 Parallel Tasks (New - Haiku Agents):
Main agent (planning): $0.054 (Sonnet)
5 Haiku agents: $0.220 (5 × $0.044)
Total: $0.274
Savings: $1.13 (81% reduction!)
Annual Savings (1,200 workflows/year):
Old cost: $1,684/year
New cost: $328/year
Savings: $1,356/year (81% reduction!)
Response Time:
Context Window:
Quality for Execution Tasks:
When to Use Each:
Use Sonnet when:
Use Haiku when:
Documentation:
- docs/HAIKU_AGENT_ARCHITECTURE.md - Complete architecture spec
- docs/AGENT_INTEGRATION_GUIDE.md - Integration patterns
- docs/COST_OPTIMIZATION_GUIDE.md - Cost tracking and ROI
Agent Specifications:
- agents/parallel-task-executor.md - Default execution agent
- agents/worktree-manager.md - Worktree specialist
- agents/issue-orchestrator.md - GitHub issue specialist
- agents/test-runner.md - Test execution specialist
- agents/performance-analyzer.md - Performance analysis specialist
Related Commands:
- /contextune:parallel:plan - Create development plan
- /contextune:parallel:status - Monitor progress
- /contextune:parallel:cleanup - Clean up worktrees