Parallel Agent Workflows

Install

Install the plugin:

$ npx claudepluginhub lukeslp/geepers-mcp --plugin geepers-mcp


Description

Powerful agent combinations that work synergistically when run together.

Tool Access: All tools
Requirements: Requires power tools
Agent Content

Parallel Agent Workflows

Powerful agent combinations that work synergistically when run together.


Workflow 1: Session Startup (Recommended)

Pattern

PARALLEL: scout + planner
THEN: conductor (if direction unclear) OR route directly to a focused agent

What Happens

  • scout (5 min): Scans project health, identifies issues
  • planner (5 min): Prioritizes tasks, identifies dependencies
  • Result: Clear picture of current state + roadmap for session

Time Comparison

  • Sequential: 15 min (scout → planner → analysis)
  • Parallel: 10 min (scout + planner → analysis)
  • Savings: 5 minutes + better context

Use Cases

  • Starting work on a project
  • Returning to project after time away
  • Morning session planning
  • Before major refactoring

Example

# Start both agents in parallel (the pattern above runs them concurrently)
scout --project=wordblocks &
planner --project=wordblocks &

# While they run, you'll see output from both:
# - Scout findings (issues, quick wins)
# - Planner recommendations (prioritized tasks)
wait

# Then route based on findings
builder --queue=wordblocks-queue.md  # If implementation phase
# OR
orchestrator_quality --findings=scout-report.md  # If cleanup needed
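
The routing step can be made mechanical. A minimal sketch, assuming scout writes a findings file and that a marker string distinguishes cleanup work; both the report path and the "cleanup-needed" marker are hypothetical, not part of any real scout output format:

```shell
# Stand-in report file; a real run would point at scout-report.md
REPORT=$(mktemp)
echo "status: cleanup-needed" > "$REPORT"

# Route to the follow-up agent based on what the report contains
if grep -q "cleanup-needed" "$REPORT"; then
  NEXT="orchestrator_quality"
else
  NEXT="builder"
fi
echo "next agent: $NEXT"
```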

Workflow 2: Feature Implementation Pipeline

Pattern

1. planner       → Create prioritized task queue
2. builder       → Implement each item atomically
3. integrator    → Verify cross-system integrity
4. critic        → Assess architectural impact
5. repo          → Clean git history

What Happens

  • Planner: Breaks down feature into tasks with dependencies
  • Builder: Implements each task following conventions
  • Integrator: Checks that pieces work together
  • Critic: Reviews if architecture stayed clean
  • Repo: Ensures git history is readable

Time Comparison

  • Ad-hoc: 6 hours (code, debug, rework, cleanup)
  • Pipeline: 4 hours (structured, fewer regressions)
  • Savings: 2 hours + better code quality

When to Use

  • Implementing major features (>3 files affected)
  • Multi-developer work
  • High-stakes features (auth, payment, core logic)

Example

# Phase 1: Planning
planner --project=corpus --task="Add lemmatization"
# Output: corpus-queue.md with 5 prioritized tasks

# Phase 2: Building
builder --project=corpus --queue=corpus-queue.md
# Implements: DB migration → API endpoint → UI component → tests

# Phase 3: Integration testing
integrator --project=corpus --files="api.py,ui.tsx,schema.py"
# Checks: API works with updated schema, UI calls endpoint correctly

# Phase 4: Architecture review
critic --project=corpus --focus="lemmatization-feature"
# Assesses: Did we add complexity? Is it maintainable?

# Phase 5: Git cleanup
repo --project=corpus --cleanup=true

Workflow 3: Quality Audit Sprint

Pattern

PARALLEL: scout + critic + testing + security
THEN: orchestrator_quality (synthesize findings)

What Happens

  • Scout (concurrent): Code quality scan
  • Critic (concurrent): UX/architecture issues
  • Testing (concurrent): Test coverage analysis
  • Security (concurrent): Vulnerability scan
  • Orchestrator (sequential): Summarizes, prioritizes, routes fixes

Time Comparison

  • Sequential: 120 min (4 audits × 30 min each)
  • Parallel: 45 min (30 min audit + 15 min synthesis)
  • Savings: 75 minutes

When to Use

  • Before major releases
  • Quarterly code reviews
  • After major refactoring
  • When starting work on unfamiliar codebase

Quality Coverage

  • Code quality issues
  • UX friction and design problems
  • Technical debt inventory
  • Test coverage gaps
  • Security vulnerabilities
  • Dependency risks

Example

# Start all audits in parallel
scout --project=wordblocks &
critic --project=wordblocks &
testing --project=wordblocks &
security --project=wordblocks &

# Wait for all to complete, then synthesize
wait
orchestrator_quality \
  --scout=reports/scout-wordblocks.md \
  --critic=reports/critic-wordblocks.md \
  --testing=reports/testing-wordblocks.md \
  --security=reports/security-wordblocks.md
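
One caveat for the parallel form: a bare `wait` reports success even when an individual audit failed. A sketch of collecting each audit's exit status, with a stub function standing in for the real agents:

```shell
# Stub audit: exits with the code it is given (stands in for a real agent)
run_audit() { sleep 0.1; return "$1"; }

run_audit 0 & PIDS="$!"
run_audit 0 & PIDS="$PIDS $!"

# Wait on each PID individually so a failed audit is visible
FAILED=0
for pid in $PIDS; do
  wait "$pid" || FAILED=$((FAILED + 1))
done
echo "failed audits: $FAILED"
```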

Workflow 4: Health & Performance Check

Pattern

PARALLEL: canary + diag + perf
THEN: services (if action needed)

What Happens

  • Canary (3 min): Quick service health snapshot
  • Diag (5 min): Deep system analysis
  • Perf (5 min): Performance metrics and bottlenecks
  • Services (varies): Apply fixes if needed

Time Comparison

  • Sequential: 20 min (each then next)
  • Parallel: 8 min (all at once + fix time)
  • Savings: 12 minutes per check

When to Use

  • Daily health monitoring
  • Weekly performance reviews
  • After deploying changes
  • When users report slowness
  • Capacity planning

Health Metrics Collected

  • Service availability
  • Memory/CPU usage
  • Error rates
  • Database query performance
  • API response times

Example

# Morning health check
canary &
diag &
perf &
wait

# If canary found issues, drill deeper
if [ "$CANARY_STATUS" = "WARN" ]; then
  services --action=diagnose --issue="$CANARY_FINDING"
fi

Workflow 5: Refactoring Campaign

Pattern

1. scout      → Identify refactoring opportunities
2. PARALLEL: critic + snippets
3. planner    → Prioritize refactoring tasks
4. scalpel    → Implement carefully
5. integrator → Verify no regressions

What Happens

  • Scout: Finds code that needs refactoring
  • Critic: Assesses architectural impact
  • Snippets: Extracts reusable patterns
  • Planner: Creates refactoring task queue
  • Scalpel: Implements with surgical precision
  • Integrator: Ensures nothing broke

Time Comparison

  • Naive: 8 hours (code + debug + rework)
  • Structured: 5 hours (planned + careful + verified)
  • Savings: 3 hours + fewer bugs

When to Use

  • Technical debt paydown sprints
  • Before adding major features
  • Quarterly code health initiatives
  • Code smell cleanup

Refactoring Scope

  • Duplicate code consolidation
  • Complex function decomposition
  • Module reorganization
  • Pattern standardization

Example

# Phase 1: Identification
scout --project=diachronica --focus="refactoring"
# Finds: 12 opportunities (duplication, long functions, etc)

# Phase 2: Impact assessment (parallel)
critic --project=diachronica &
snippets --project=diachronica --extract=patterns &
wait

# Phase 3: Planning
planner --project=diachronica \
  --source=scout-report.md,critic-report.md

# Phase 4-5: Implementation with verification
# Read one task heading per line (a plain for-loop would split on spaces)
grep "^## " diachronica-queue.md | while read -r task
do
  scalpel --task="$task"
  integrator --verify-no-regressions
done

Workflow 6: Documentation & Knowledge

Pattern

PARALLEL: scout + snippets
THEN: docs

What Happens

  • Scout (5 min): Generates insights about codebase
  • Snippets (5 min): Extracts reusable patterns
  • Docs (15 min): Synthesizes into documentation

Time Comparison

  • Manual: 60 min (reading code + writing)
  • Automated: 25 min (agents + review)
  • Savings: 35 minutes + higher accuracy

When to Use

  • After major features complete
  • Quarterly knowledge updates
  • Onboarding new developers
  • Creating architecture documentation

Documentation Output

  • API documentation
  • Architecture diagrams/descriptions
  • Pattern guides
  • Module organization
  • Common workflows

Example

# Collect insights
scout --project=diachronica --generate-insights &
snippets --project=diachronica --extract=patterns &
wait

# Generate documentation
docs --project=diachronica \
  --insights=scout-report.md \
  --patterns=snippets-report.md \
  --output=ARCHITECTURE.md

Workflow 7: Bug Investigation & Fix

Pattern

1. diag       → Root cause analysis
2. scalpel    → Surgical fix
3. testing    → Add regression test
4. repo       → Document fix

What Happens

  • Diag: Analyzes logs, finds actual problem (not symptom)
  • Scalpel: Makes precise fix to complex code
  • Testing: Prevents same bug recurring
  • Repo: Documents why bug happened

Time Comparison

  • Ad-hoc: 90 min (guess → try → fail → debug → retry)
  • Systematic: 45 min (diagnose → fix → test → document)
  • Savings: 45 minutes + understanding

When to Use

  • Production bugs
  • Mysterious failures
  • Performance problems
  • Intermittent issues

Bug Investigation Depth

  • Error patterns in logs
  • Resource utilization at time of failure
  • Correlation with recent changes
  • Impact scope

Example

# Diagnose the issue
diag --service=wordblocks --since="2 hours ago"
# Output: "Memory leak in WebSocket handler, lines 234-245"

# Fix with precision
scalpel --file=src/websocket.ts --lines=234-245

# Prevent recurrence
testing --add-regression-test \
  --file=test/websocket.test.ts \
  --scenario="memory-leak-on-disconnect"

# Document findings
repo --commit-message="fix: Prevent WebSocket memory leak on client disconnect

Root cause: Connection cleanup wasn't removing event listeners.
See diag report from $(date).

Fixes: #2847"

Workflow Selection Guide

| Goal | Workflow | Time | Complexity |
| --- | --- | --- | --- |
| Start session focused | #1 (Startup) | 10 min | Low |
| Build feature right | #2 (Implementation) | 4 hours | High |
| Comprehensive audit | #3 (Quality Sprint) | 45 min | Medium |
| Monitor health | #4 (Health Check) | 8 min | Low |
| Clean up code | #5 (Refactoring) | 5 hours | High |
| Create docs | #6 (Documentation) | 25 min | Low |
| Fix production issue | #7 (Bug Investigation) | 45 min | Medium |

Pro Tips for Parallel Workflows

Tip 1: Output Consistency

Ensure parallel agents write to different files:

# Good: Different output files
scout --output=reports/scout-{project}.md
planner --output=reports/planner-{project}.md

# Bad: Same output file (conflict)
scout --output=report.md
planner --output=report.md
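
One way to enforce this is to derive every output path from a single project variable, so two agents can never be pointed at the same file. A sketch; the `reports/` layout is an assumption:

```shell
# Derive per-agent output paths from one project name so parallel runs
# cannot clobber each other (directory layout is an assumption)
PROJECT=wordblocks
mkdir -p reports
SCOUT_OUT="reports/scout-$PROJECT.md"
PLANNER_OUT="reports/planner-$PROJECT.md"
echo "scout   -> $SCOUT_OUT"
echo "planner -> $PLANNER_OUT"
```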

Tip 2: Sequential Dependencies

Respect task dependencies:

# Good: Scout runs first, then Planner sees results
scout && planner

# Bad: Planner runs before Scout findings exist
planner & scout
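
The two forms combine: independent agents still run in parallel inside a group, and the dependent step is gated on the whole group finishing. A runnable sketch with stub functions standing in for the agents:

```shell
OUT=$(mktemp -d)
# Stubs standing in for the real agents, so the control flow runs as-is
scout()     { echo "scout findings"  > "$OUT/scout.md"; }
planner()   { echo "planner roadmap" > "$OUT/planner.md"; }
conductor() { cat "$OUT/scout.md" "$OUT/planner.md"; }

# scout and planner are independent: launch both, wait for the pair,
# then run the dependent step only after the whole group finishes
{ scout & planner & wait; } && conductor
```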

Tip 3: Monitor Progress

Use status to track workflow progress:

scout &
PIDS="$!"          # $! holds only the most recent background PID,
planner &          # so capture it after each launch
PIDS="$PIDS $!"
status --watch --pids="$PIDS"

Tip 4: Batch Periodic Checks

Run Workflow #4 on schedule:

# In crontab
0 */4 * * * /path/to/workflow-health-check.sh
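
A sketch of what `workflow-health-check.sh` might contain; the script name comes from the crontab line above, and the three agent calls are stubbed here so the control flow runs as-is:

```shell
# Stubs for the real canary/diag/perf invocations; a real agent would
# exit nonzero on WARN/FAIL
canary() { sleep 0.1; }
diag()   { sleep 0.1; }
perf()   { sleep 0.1; }

# Run all three checks in parallel, tracking each PID
canary & C=$!
diag   & D=$!
perf   & P=$!

# Collect each exit status so one failed check fails the whole run
FAIL=0
wait "$C" || FAIL=1
wait "$D" || FAIL=1
wait "$P" || FAIL=1
echo "health check exit status: $FAIL"
```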

Tip 5: Cost-Benefit Analysis

Use heavier workflows for higher-stakes work:

# Quick fix: Skip Workflow #2, use quickwin
# Major feature: Use full Workflow #2 (Planner → Builder → Integrator)
# Production issue: Use full Workflow #7 (Diag → Fix → Test → Docs)

Troubleshooting

Problem: Agents running out of order

Solution: Use explicit sequencing

# Bad: No guarantee of order
geepers_agent1 &
geepers_agent2 &
geepers_agent3 &

# Good: Explicit sequence
geepers_agent1 && geepers_agent2 && geepers_agent3

Problem: Conflicting changes from parallel agents

Solution: Design non-overlapping scopes

# Scout analyzes code quality
# Planner creates task queue (different output)
# They don't conflict because different outputs

Problem: Workflow takes too long

Solution: Profile and optimize

time scout --project=X
time planner --project=X
# If one is slow, consider parallelizing differently
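
To see where the wall-clock time actually goes, compare the parallel group against the same work run sequentially. A runnable sketch with `sleep` standing in for agent runtime:

```shell
# sleep stands in for a slow agent; real profiling would use `time agent ...`
slow_step() { sleep 2; }

# Two steps in parallel
START=$(date +%s)
slow_step & slow_step & wait
PARALLEL=$(( $(date +%s) - START ))

# Same two steps sequentially
START=$(date +%s)
slow_step; slow_step
SEQUENTIAL=$(( $(date +%s) - START ))

echo "parallel=${PARALLEL}s sequential=${SEQUENTIAL}s"
```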

Last Updated: 2026-01-05 Part of Agent Optimization Analysis
