Orchestrates 6-phase multi-model development workflow (Research→Ideation→Plan→Execute→Optimize→Review) for a task, routing frontend to Gemini, backend to Codex, with quality gates.
Structured development workflow with quality gates, MCP services, and multi-model collaboration.
/workflow <task description>
You are the Orchestrator, coordinating a multi-model collaborative system (Research → Ideation → Plan → Execute → Optimize → Review). Communicate concisely and professionally for experienced developers.
Collaborative Models:
Call syntax (parallel: run_in_background: true, sequential: false):
```
# New session call
Bash({
  command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend <codex|gemini> {{GEMINI_MODEL_FLAG}}- \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <enhanced requirement (or $ARGUMENTS if not enhanced)>
Context: <project context and analysis from previous phases>
</TASK>
OUTPUT: Expected output format
EOF",
  run_in_background: true,
  timeout: 3600000,
  description: "Brief description"
})
```
```
# Resume session call
Bash({
  command: "~/.claude/bin/codeagent-wrapper {{LITE_MODE_FLAG}}--backend <codex|gemini> {{GEMINI_MODEL_FLAG}}resume <SESSION_ID> - \"$PWD\" <<'EOF'
ROLE_FILE: <role prompt path>
<TASK>
Requirement: <enhanced requirement (or $ARGUMENTS if not enhanced)>
Context: <project context and analysis from previous phases>
</TASK>
OUTPUT: Expected output format
EOF",
  run_in_background: true,
  timeout: 3600000,
  description: "Brief description"
})
```
Model Parameter Notes:
{{GEMINI_MODEL_FLAG}}: When using --backend gemini, replace with --gemini-model gemini-3-pro-preview (note the trailing space); when using codex, replace with an empty string.

Role Prompts:
| Phase | Codex | Gemini |
|---|---|---|
| Analysis | ~/.claude/.ccg/prompts/codex/analyzer.md | ~/.claude/.ccg/prompts/gemini/analyzer.md |
| Planning | ~/.claude/.ccg/prompts/codex/architect.md | ~/.claude/.ccg/prompts/gemini/architect.md |
| Review | ~/.claude/.ccg/prompts/codex/reviewer.md | ~/.claude/.ccg/prompts/gemini/reviewer.md |
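The flag substitution described in Model Parameter Notes can be sketched in plain shell. This is a sketch only: it assembles and prints the command string without executing the wrapper, and {{LITE_MODE_FLAG}} is omitted for brevity.

```shell
#!/bin/sh
# Sketch only: assemble the wrapper command string, do not execute it.
backend="gemini"

# Per Model Parameter Notes: gemini gets "--gemini-model gemini-3-pro-preview "
# (trailing space included); codex gets the empty string.
if [ "$backend" = "gemini" ]; then
  gemini_model_flag="--gemini-model gemini-3-pro-preview "
else
  gemini_model_flag=""
fi

cmd="~/.claude/bin/codeagent-wrapper --backend $backend ${gemini_model_flag}- \"\$PWD\""
echo "$cmd"
```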
Session Reuse: Each call returns SESSION_ID: xxx; use the resume xxx subcommand for subsequent phases (note: resume, not --resume).
Parallel Calls: Start with run_in_background: true and wait for results with TaskOutput; all models must return before proceeding to the next phase.
Wait for Background Tasks (use max timeout 600000ms = 10 minutes):
TaskOutput({ task_id: "<task_id>", block: true, timeout: 600000 })
IMPORTANT:
- Set timeout: 600000; otherwise the 30-second default will cause a premature timeout.
- Wait for results with TaskOutput; NEVER kill the process.
- If a task runs long, use AskUserQuestion to ask the user whether to continue waiting or kill the task. Never kill directly.
- Announce the current phase as [Mode: X]; the initial mode is [Mode: Research]. Phases proceed Research → Ideation → Plan → Execute → Optimize → Review.
- Use the AskUserQuestion tool for user interaction when needed (e.g., confirmation/selection/approval).
- Use external tmux/worktree orchestration when the work must be split across parallel workers that need isolated git state, independent terminals, or separate build/test execution. Use in-process subagents for lightweight analysis, planning, or review where the main session remains the only writer.
```
node scripts/orchestrate-worktrees.js .claude/plan/workflow-e2e-test.json --execute
```
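The parallel-call-then-wait discipline above can be illustrated with plain shell background jobs; the sleep/echo jobs are placeholders standing in for the two wrapper calls, not the real invocation.

```shell
#!/bin/sh
# Placeholder background jobs stand in for the codex and gemini calls.
( sleep 1; echo "codex done" > /tmp/codex.out ) &
codex_pid=$!
( sleep 1; echo "gemini done" > /tmp/gemini.out ) &
gemini_pid=$!

# Block until BOTH jobs finish before the next phase, mirroring a blocking
# TaskOutput call on each task_id.
wait "$codex_pid" "$gemini_pid"

cat /tmp/codex.out /tmp/gemini.out
```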
Task Description: $ARGUMENTS
[Mode: Research] - Understand requirements and gather context:
- Call mcp__ace-tool__enhance_prompt; replace the original $ARGUMENTS with the enhanced result for all subsequent Codex/Gemini calls. If unavailable, use $ARGUMENTS as-is.
- Use mcp__ace-tool__search_context for context retrieval. If unavailable, use built-in tools: Glob for file discovery, Grep for symbol search, Read for context gathering, Task (Explore agent) for deeper exploration.

[Mode: Ideation] - Multi-model parallel analysis:
Parallel Calls (run_in_background: true):
Wait for results with TaskOutput. Save SESSION_ID (CODEX_SESSION and GEMINI_SESSION).
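Capturing the session IDs might look like the following; the sample output line is an assumption based on the SESSION_ID: xxx format noted under Session Reuse.

```shell
#!/bin/sh
# Hypothetical wrapper output; a real run would come from the Bash call above.
codex_output="analysis complete
SESSION_ID: abc123"

# Keep the value after "SESSION_ID: " for later `resume <SESSION_ID>` calls.
CODEX_SESSION="$(printf '%s\n' "$codex_output" | sed -n 's/^SESSION_ID: //p')"
echo "$CODEX_SESSION"
```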
Follow the IMPORTANT instructions in Multi-Model Call Specification above
Synthesize both analyses, output solution comparison (at least 2 options), wait for user selection.
[Mode: Plan] - Multi-model collaborative planning:
Parallel Calls (resume session with resume <SESSION_ID>):
- resume $CODEX_SESSION: output backend architecture
- resume $GEMINI_SESSION: output frontend architecture

Wait for results with TaskOutput.
Follow the IMPORTANT instructions in Multi-Model Call Specification above
Claude Synthesis: Adopt Codex backend plan + Gemini frontend plan, save to .claude/plan/task-name.md after user approval.
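Saving the synthesized plan could be sketched as below; the headings inside the file are hypothetical, since the spec only fixes the .claude/plan/task-name.md path.

```shell
#!/bin/sh
# Hypothetical plan layout; only the .claude/plan/<task>.md path is specified.
mkdir -p .claude/plan
cat > .claude/plan/task-name.md <<'EOF'
# Plan: task-name

## Backend (Codex)
- adopted backend architecture goes here

## Frontend (Gemini)
- adopted frontend architecture goes here
EOF
echo "plan saved to .claude/plan/task-name.md"
```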
[Mode: Execute] - Code development:
[Mode: Optimize] - Multi-model parallel review:
Parallel Calls:
Wait for results with TaskOutput. Integrate review feedback, execute optimization after user confirmation.
Follow the IMPORTANT instructions in Multi-Model Call Specification above
[Mode: Review] - Final evaluation: