# emasoft-orchestrator-agent
Use when coordinating work among multiple developers. Trigger with orchestration requests.
Install with `npx claudepluginhub emasoft/emasoft-plugins --plugin emasoft-orchestrator-agent`. This skill uses the workspace's default tool permissions.
This skill teaches how to coordinate work among multiple developers using orchestration patterns.
Bundled files:

- `README.md`
- `references/agent-selection-guide-part1-language-agents.md`
- `references/agent-selection-guide-part2-specialized-agents.md`
- `references/agent-selection-guide-part3-decision-selection.md`
- `references/agent-selection-guide-part4-patterns-practices.md`
- `references/agent-selection-guide-part5-advanced.md`
- `references/agent-selection-guide.md`
- `references/archive-structure.md`
- `references/changelog-writing-guidelines.md`
- `references/delegation-checklist.md`
- `references/language-verification-checklists-part1-core-languages.md`
- `references/language-verification-checklists-part2-extended-platforms.md`
- `references/language-verification-checklists-part3-swift-and-universal.md`
- `references/language-verification-checklists.md`
- `references/log-formats.md`
- `references/non-blocking-patterns.md`
- `references/op-classify-task-complexity.md`
- `references/op-define-scope-boundaries.md`
- `references/op-identify-task-dependencies.md`
- `references/op-select-agent-for-task.md`

Orchestrates multi-agent parallel execution for complex tasks like features, refactoring, testing, reviews, and documentation using cc-mirror tracking and TodoWrite visibility.
Decomposes development goals into parallel tasks for AI agents, assigns up to 20 at a time, monitors progress every 10-15 minutes, and requires 4 verification loops before PR integration. For task orchestration in AI Maestro.
Use when a task benefits from multiple Claude instances collaborating with peer-to-peer messaging - parallel research, multi-module features, cross-layer changes, or competing hypothesis debugging. Not for simple independent tasks (use parallel-execution) or sequential tasks (use delegated-execution).
| Output Type | Description | Example |
|---|---|---|
| TaskCreate calls | Task definitions with success criteria | TaskCreate(subject="Implement module X", ...) |
| Task assignments | Agent assignments via AI Maestro or Task tool | Message to remote-dev-001 with task details |
| Progress reports | Regular status updates from monitoring | "Task 1: 60% complete, Task 2: blocked on DB" |
| Completion signals | Verification of all tasks done | "All 5 tasks completed, ready for integration" |
| Escalation requests | Blocked task escalations to user/EAMA | "Task 3 blocked: missing API credentials" |
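The table's TaskCreate example can be sketched as a plain task-definition payload. This is an illustrative sketch only: the field names (`subject`, `success_criteria`, `scope`, `dependencies`) mirror the table above and are assumptions, not a documented TaskCreate schema.

```python
# Hypothetical task-definition payload; field names are assumptions,
# not the actual Claude Code Tasks API schema.
def make_task(subject, success_criteria, scope, dependencies=()):
    """Build a task-definition dict for delegation to a worker agent."""
    return {
        "subject": subject,
        "success_criteria": list(success_criteria),
        "scope": list(scope),                 # files the agent may touch
        "dependencies": list(dependencies),   # task IDs this task waits on
        "status": "pending",
    }

task = make_task(
    "Implement module X",
    success_criteria=["unit tests pass", "module X exports public API"],
    scope=["src/module_x.py"],
)
print(task["subject"])  # Implement module X
```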
For AI Maestro messaging and the Claude Code Tasks API, see `orchestration-api-commands.md`.
Orchestration is the act of coordinating multiple concurrent tasks performed by different agents to achieve a common goal. An orchestrator:
Evaluate task complexity to determine appropriate planning investment.
Read first - Understanding task complexity guides all subsequent decisions.
Select the right specialized agent for each task based on language, domain, and capabilities.
Read second - Essential for effective task delegation. Prerequisite: Task Complexity Classifier
Conduct interactive setup to establish project parameters, team configuration, and quality standards.
Read third - Run before any project work begins. Prerequisite: Agent Selection Guide
Ensure code quality, build success, and release readiness with language-specific verification standards.
Read last - Apply when reviewing deliverables. Prerequisite: Project Setup Menu
PROACTIVE monitoring of implementer agents to ensure progress and completion.
When to use: When an agent has been silent for 15+ minutes, reports a blocker, wants to stop, or a task needs verification.
MANDATORY 4-verification-loops before any Pull Request is approved.
When to use: At task assignment, when agent asks "Can I make a PR?", when tracking verification.
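The mandatory 4-loop rule lends itself to a simple per-task counter. A minimal sketch (the counter logic is an assumption for illustration, not the skill's actual implementation):

```python
REQUIRED_LOOPS = 4  # per the rule above: 4 verification loops before any PR

class VerificationTracker:
    """Track verification loops per task; a PR is allowed only after 4 passes."""

    def __init__(self):
        self.loops = {}  # task_id -> completed verification loop count

    def record_pass(self, task_id):
        self.loops[task_id] = self.loops.get(task_id, 0) + 1

    def can_open_pr(self, task_id):
        return self.loops.get(task_id, 0) >= REQUIRED_LOOPS

tracker = VerificationTracker()
for _ in range(3):
    tracker.record_pass("task-1")
print(tracker.can_open_pr("task-1"))  # False: only 3 of 4 loops done
tracker.record_pass("task-1")
print(tracker.can_open_pr("task-1"))  # True: all 4 loops complete
```

When an agent asks "Can I make a PR?", the orchestrator answers from this state rather than from the agent's self-report.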
RULE 15: The orchestrator NEVER writes production code.
When to use: Before any action (self-check), when tempted to write code directly.
RULE 14: User requirements cannot be changed without explicit user approval.
When to use: At project start, when requirement cannot be implemented as stated.
RULE 16: Only the orchestrator can send/receive messages or commit changes.
When to use: Before spawning sub-agents, when sub-agent needs external communication.
RULE 17: The orchestrator must ALWAYS remain responsive.
When to use: Before any long-running command, when spawning agents, when checking messages.
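One way to honor RULE 17 is to launch long-running commands without blocking and poll them between other duties. A sketch using Python's standard library (`sleep 1` is a placeholder for a real build or test command):

```python
import subprocess
import time

# Start a long-running command without blocking the orchestrator.
# "sleep 1" stands in for a real build/test command (an assumption).
proc = subprocess.Popen(["sleep", "1"])

while proc.poll() is None:  # poll() returns None while still running
    # ... check agent messages, update TodoWrite, handle escalations ...
    time.sleep(0.2)         # short naps keep the loop responsive

print("command finished with exit code", proc.returncode)
```

The orchestrator stays free to read messages during the wait instead of hanging on a blocking `subprocess.run` call; `references/non-blocking-patterns.md` is the bundled reference for these patterns.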
Detailed role boundaries for orchestrator behavior.
Infrastructure task delegation procedures.
Detailed examples of orchestration patterns in practice.
When to use: When you need concrete examples of orchestration workflows.
AI Maestro messaging and Claude Code Tasks API reference.
When to use: When sending messages or creating/updating tasks.
Canonical text for RULE 14: User Requirements Are Immutable.
When to use: When enforcing requirement immutability at all orchestration phases.
Template for defining worker agent role boundaries in task delegations.
When to use: When delegating tasks to worker agents.
Orchestration workflow checklists for task decomposition, assignment, monitoring, and integration.
When to use: When executing orchestration workflows.
Log format specifications for orchestration activities.
When to use: When creating delegation logs and status reports.
Archive directory structure for completed work.
When to use: When archiving completed tasks and projects.
Copy this checklist and track your progress:
- … agent-messaging skill if no update received

This example shows how to decompose a feature request into tasks and delegate them.
Input: A feature request from the user (via EAMA):
Feature: Add user authentication module
Requirements:
- OAuth2 login with Google provider
- Session management with JWT tokens
- Rate limiting on login endpoint (5 attempts per minute)
Output: The orchestrator produces a task plan with assignments:

Task Plan: Authentication Module (3 parallel tasks)

Task 1: OAuth2 Integration
- Assign to: epa-backend-001
- Scope: `src/auth/oauth.py`, `src/auth/providers/google.py`
- Success criteria: Google OAuth2 flow completes end-to-end, tokens stored
- Dependencies: None

Task 2: Session Management
- Assign to: epa-backend-002
- Scope: `src/auth/session.py`, `src/auth/jwt.py`
- Success criteria: JWT issued on login, validated on protected routes, expires correctly
- Dependencies: None (uses mock OAuth response until Task 1 merges)

Task 3: Rate Limiting
- Assign to: epa-backend-003
- Scope: `src/middleware/rate_limit.py`, `tests/test_rate_limit.py`
- Success criteria: Login endpoint rejects the 6th attempt within a 60 s window, returns 429
- Dependencies: None (uses mock login endpoint)

Result: 3 tasks delegated in parallel, each with clear scope and success criteria.
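Task 3's success criterion (the login endpoint rejects the 6th attempt within a 60-second window and returns 429) can be sketched as a sliding-window limiter. This is an illustrative sketch of the criterion, not the delegated agent's actual `rate_limit.py`:

```python
import time
from collections import deque

class LoginRateLimiter:
    """Sliding-window limiter: at most 5 attempts per 60-second window."""

    def __init__(self, max_attempts=5, window=60.0):
        self.max_attempts = max_attempts
        self.window = window
        self.attempts = {}  # client_id -> deque of attempt timestamps

    def check(self, client_id, now=None):
        """Return 200 if the attempt is allowed, 429 if rate-limited."""
        now = time.monotonic() if now is None else now
        q = self.attempts.setdefault(client_id, deque())
        while q and now - q[0] >= self.window:  # drop expired attempts
            q.popleft()
        if len(q) >= self.max_attempts:
            return 429                          # over the limit: reject
        q.append(now)
        return 200

limiter = LoginRateLimiter()
codes = [limiter.check("alice", now=t) for t in range(6)]  # 6 tries in 6 s
print(codes)  # [200, 200, 200, 200, 200, 429] - 6th attempt rejected
```

A success-criteria statement this precise lets the verifying agent write the acceptance test directly from the task description.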
For more detailed orchestration examples with transcripts, see `orchestration-examples.md`.
| Issue | Cause | Resolution |
|---|---|---|
| Agent unresponsive | Agent crashed or blocked | Poll until response; reassign if unrecoverable |
| Task conflict | Same file modified by multiple agents | Assign non-overlapping scope |
| Verification loop stuck | Agent doesn't check changes | Send explicit verification message |
| Escalation pending | User unavailable | Queue issue, continue other work |
See individual reference files for detailed troubleshooting.
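The "Agent unresponsive" resolution above (poll until response; reassign if unrecoverable) can be sketched as a timed poll loop. The agent-liveness check here is a hypothetical stand-in for a real AI Maestro status query:

```python
import time

def poll_agent(check_alive, timeout=3.0, interval=0.2):
    """Poll an agent until it responds or the timeout elapses.

    check_alive is a zero-argument callable returning True once the agent
    has answered (a stand-in for a real AI Maestro status check).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_alive():
            return "responsive"
        time.sleep(interval)
    return "reassign"  # unrecoverable: hand the task to another agent

# Simulated agent that answers on the third status check.
calls = {"n": 0}
def fake_check():
    calls["n"] += 1
    return calls["n"] >= 3

print(poll_agent(fake_check))                   # "responsive"
print(poll_agent(lambda: False, timeout=0.5))   # "reassign"
```

The timeout bounds how long a silent agent can stall the plan before its task is reassigned, keeping the orchestrator responsive per RULE 17.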