# oma
Automated multi-agent orchestrator that spawns CLI subagents in parallel, coordinates via MCP Memory, and monitors progress. Use for orchestration, parallel execution, and automated multi-agent workflows.
Install with `npx claudepluginhub first-fluke/oh-my-agent --plugin oma`. This skill uses the workspace's default tool permissions.
Automatically orchestrate multi-agent execution with task decomposition, native/fallback dispatch, memory coordination, progress monitoring, verification, QA cross-review, retry, and result collection.
Key resources: `.agents/oma-config.yaml`, `.codex/agents/*.toml`, `.gemini/agents/*.md`, or the fallback `oh-my-ag agent:spawn`, plus `oma verify` and the QA cross-review loop.

| Action | SSL primitive | Evidence |
|---|---|---|
| Read config and task context | READ | oma config, routing, request |
| Select dispatch path | SELECT | Native vs fallback |
| Write session state | WRITE | task board and memory files |
| Spawn agents | CALL_TOOL | native CLI or oh-my-ag agent:spawn |
| Poll progress | READ | progress/result files |
| Run verification | CALL_TOOL | oma verify, tests, QA |
| Update retry state | UPDATE_STATE | loop counters and CD metrics |
| Report final result | NOTIFY | compiled summary |
```shell
oma agent:spawn <agent-type> "<task>" <session-id> -w <workspace>
oma verify <agent-type> --workspace <workspace> --json
```
When native runtime dispatch is available, prefer the runtime-specific native path listed in this skill before falling back to oma agent:spawn.
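The dispatch decision above can be sketched as a small shell predicate. This is a minimal illustration, not oma's implementation: the vendor arguments and the `has_native_path` flag are assumptions standing in for whatever the runtime actually detects.

```shell
#!/bin/sh
# Sketch: pick native vs fallback dispatch for one task.
# All three arguments are hypothetical inputs for illustration.
select_dispatch() {
  target_vendor="$1"      # vendor the task is routed to
  current_vendor="$2"     # vendor of the runtime we are executing in
  has_native_path="$3"    # "yes" when a verified native executor exists
  if [ "$target_vendor" = "$current_vendor" ] && [ "$has_native_path" = "yes" ]; then
    echo "native"
  else
    echo "fallback"       # route through oma agent:spawn
  fi
}

select_dispatch claude claude yes   # native
select_dispatch gemini claude yes   # fallback
```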
| Scope | Resource target |
|---|---|
| LOCAL_FS | Session, task-board, progress, result, config files |
| PROCESS | Agent CLI processes and verify scripts |
| MEMORY | Session state and clarification debt |
| CODEBASE | Workspaces owned by spawned agents |
If `target_vendor === current_runtime_vendor` and the runtime has a verified native path, use native dispatch; otherwise fall back to `oh-my-ag agent:spawn`. Current native executor paths:

- `claude --agent <agent>`
- `codex exec "@agent ..."` using `.codex/agents/*.toml`
- `gemini -p "@agent ..."` using `.gemini/agents/*.md`

Vendor-specific execution protocols are injected automatically for fallback CLI runs.
| Setting | Default | Description |
|---|---|---|
| MAX_PARALLEL | 3 | Max concurrent subagents |
| MAX_RETRIES | 2 | Retry attempts per failed task |
| POLL_INTERVAL | 30s | Status check interval |
| MAX_TURNS (impl) | 20 | Turn limit for backend/frontend/mobile |
| MAX_TURNS (review) | 15 | Turn limit for qa/debug |
| MAX_TURNS (plan) | 10 | Turn limit for pm |
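How MAX_PARALLEL bounds concurrency can be sketched with `xargs -P` (not strictly POSIX, but supported by GNU and BSD xargs). The task names and the `echo` body are placeholders for illustration, not oma's real spawn call:

```shell
#!/bin/sh
# Run at most MAX_PARALLEL "spawns" at a time; xargs blocks until a slot frees.
MAX_PARALLEL=3
results=$(printf '%s\n' backend frontend mobile qa pm \
  | xargs -n1 -P "$MAX_PARALLEL" -I{} echo "spawned {}")
echo "$results"
```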
Memory provider and tool names are configurable via mcp.json:
```json
{
  "memoryConfig": {
    "provider": "serena",
    "basePath": ".serena/memories",
    "tools": {
      "read": "read_memory",
      "write": "write_memory",
      "edit": "edit_memory"
    }
  }
}
```
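As an illustration of consuming this config from a script without a JSON library, a `sed` sketch follows. A real implementation should use a proper JSON parser; the `/tmp` path and sample file are only for the demo.

```shell
#!/bin/sh
# Write a sample mcp.json, then pull out memoryConfig fields with sed.
cat > /tmp/mcp-demo.json <<'EOF'
{
  "memoryConfig": {
    "provider": "serena",
    "basePath": ".serena/memories"
  }
}
EOF
provider=$(sed -n 's/.*"provider": *"\([^"]*\)".*/\1/p' /tmp/mcp-demo.json)
basepath=$(sed -n 's/.*"basePath": *"\([^"]*\)".*/\1/p' /tmp/mcp-demo.json)
echo "provider=$provider basepath=$basepath"
```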
PHASE 1 - Plan: Analyze request -> decompose tasks -> generate session ID
PHASE 2 - Setup: Use memory write tool to create orchestrator-session.md + task-board.md
PHASE 3 - Execute: Spawn agents by priority tier (never exceed MAX_PARALLEL)
PHASE 4 - Monitor: Poll every POLL_INTERVAL; handle completed/failed/crashed agents
PHASE 4.5 - Verify: Run oma verify {agent-type} per completed agent
PHASE 5 - Collect: Read all result-{agent}-{sessionId}.md, compile summary, cleanup progress files
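The Phase 4 poll can be sketched as a loop over progress files. The `status:` line format, the `/tmp` workspace, and the shortened interval are assumptions for the demo; the real file schema lives in resources/memory-schema.md.

```shell
#!/bin/sh
# Poll progress-{agent}-{sessionId}.md files until every agent reports done.
SESSION=demo123
WS=/tmp/oma-poll-demo
POLL_INTERVAL=1   # 30s by default; shortened for the demo
mkdir -p "$WS"
printf 'status: completed\n' > "$WS/progress-backend-$SESSION.md"
printf 'status: completed\n' > "$WS/progress-frontend-$SESSION.md"

all_done() {
  for f in "$WS"/progress-*-"$SESSION".md; do
    grep -q '^status: completed' "$f" || return 1
  done
}
until all_done; do
  sleep "$POLL_INTERVAL"
done
echo "all agents completed for $SESSION"
```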
See resources/subagent-prompt-template.md for prompt construction.
See resources/memory-schema.md for memory file formats.
| File | Owner | Others |
|---|---|---|
| `orchestrator-session.md` | orchestrator | read-only |
| `task-board.md` | orchestrator | read-only |
| `progress-{agent}[-{sessionId}].md` | that agent | orchestrator reads |
| `result-{agent}[-{sessionId}].md` | that agent | orchestrator reads |
After each agent completes, enter an iterative review loop — not a single-pass verification.
Agent completes work
↓
[1] Mechanical Self-Check: lint, type-check, tests, diff scope
↓
[2] Verify: Run `oma verify {agent-type} --workspace {workspace}`
↓ FAIL → Agent receives feedback, fixes, back to [1]
↓ PASS
[3] Cross-Review: QA agent reviews the changes
↓ FAIL → Agent receives review feedback, fixes, back to [1]
↓ PASS
Accept result ✓
[1] Mechanical Self-Check (formerly "Self-Review"): Before requesting external review, the implementation agent must run the mechanical checks itself: lint, type-check, tests, and diff-scope verification.
⚠️ Quality judgment is NOT performed in this step. Design quality, architecture alignment, and acceptance criteria satisfaction are evaluated exclusively in [3] Cross-Review by the QA agent. Reason: Self-evaluation bias — agents consistently overrate their own output (ref: Anthropic harness design research).
[2] Automated Verify:
```shell
oma verify {agent-type} --workspace {workspace} --json
```
[3] Cross-Review: Spawn QA agent to review the changes:
If `docs/CODE-REVIEW.md` exists, the QA agent uses it as the review checklist.

| Counter | Max | On Exceeded |
|---|---|---|
| Self-check + fix cycles | 3 | Escalate to cross-review regardless |
| Cross-review rejections | 2 | Report to user with review history |
| Total loop iterations | 5 | Force-complete with quality warning |
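The three counters can be folded into one decision helper. The action strings are shorthand for the table rows above, and treating "On Exceeded" as strictly greater than the max is an assumption of this sketch:

```shell
#!/bin/sh
# Decide the next review-loop action from the iteration counters.
# Thresholds mirror the table; "exceeded" is read as strictly > max.
loop_action() {
  self_checks="$1"; rejections="$2"; total="$3"
  if [ "$total" -gt 5 ]; then
    echo "force-complete-with-warning"   # total loop iterations exceeded
  elif [ "$rejections" -gt 2 ]; then
    echo "report-to-user"                # cross-review rejections exceeded
  elif [ "$self_checks" -gt 3 ]; then
    echo "escalate-to-cross-review"      # self-check cycles exceeded
  else
    echo "continue"
  fi
}

loop_action 1 0 1   # continue
loop_action 4 0 4   # escalate-to-cross-review
loop_action 2 1 6   # force-complete-with-warning
```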
When feeding review results back to the implementation agent:
```markdown
## Review Feedback (iteration {n}/{max})

**Reviewer**: {self / verify / qa-agent}
**Verdict**: FAIL
**Issues**:
1. {specific issue with file and line reference}
2. {specific issue}

**Fix instruction**: {what to change}
```
This replaces single-pass verification. Most "nitpicking" should happen agent-to-agent. Human review is reserved for final approval, not catching lint errors.
Track user corrections during session execution. See ../_shared/core/session-metrics.md for full protocol.
When the user sends feedback during a session:
| CD Score | Action |
|---|---|
| CD >= 50 | RCA Required: QA agent must add entry to lessons-learned.md |
| CD >= 80 | Session Pause: Request user to re-specify requirements |
| redo >= 2 | Scope Lock: Request explicit allowlist confirmation before continuing |
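A sketch of the escalation rule as a single function. The returned labels are shorthand for the table's actions, and giving the redo-count check precedence over the CD thresholds is an assumption of this sketch:

```shell
#!/bin/sh
# Map the clarification-debt score and redo count to the table's actions.
cd_action() {
  cd_score="$1"; redo_count="$2"
  if [ "$redo_count" -ge 2 ]; then
    echo "scope-lock"       # request explicit allowlist confirmation
  elif [ "$cd_score" -ge 80 ]; then
    echo "session-pause"    # ask the user to re-specify requirements
  elif [ "$cd_score" -ge 50 ]; then
    echo "rca-required"     # QA agent adds a lessons-learned entry
  else
    echo "no-action"
  fi
}

cd_action 40 0   # no-action
cd_action 60 0   # rca-required
cd_action 90 0   # session-pause
cd_action 10 2   # scope-lock
```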
After each user correction event:
[EDIT]("session-metrics.md", append event to Events table)
At session end, if CD >= 50: update `lessons-learned.md` with prevention measures.

Resources:

- `resources/subagent-prompt-template.md`
- `resources/memory-schema.md`
- `config/cli-config.yaml`
- `scripts/spawn-agent.sh`, `scripts/parallel-run.sh`, `scripts/verify.sh`
- `templates/`
- `../_shared/core/skill-routing.md`
- `scripts/verify.sh <agent-type>`
- `../_shared/core/session-metrics.md`
- `../_shared/core/api-contracts/`
- `../_shared/core/context-loading.md`
- `../_shared/core/difficulty-guide.md`
- `../_shared/core/reasoning-templates.md`
- `../_shared/core/clarification-protocol.md`
- `../_shared/core/context-budget.md`
- `../_shared/core/lessons-learned.md`