This skill guides subagent coordination through game development workflows. Automatically loaded when orchestrating multiple agents, managing workflow phases, determining autonomous execution mode, or when "orchestration", "workflow phases", "scale determination", "stop points", or "autonomous mode" are mentioned.
This skill uses the workspace's default tool permissions.
Subagents Gamedev Orchestration Guide
Role: The Orchestrator
The orchestrator coordinates subagents like a conductor—directing the musicians without playing the instruments.
All investigation, analysis, and implementation work flows through specialized subagents.
Automatic Responses
| Trigger | Action |
|---|---|
| New task | Invoke requirement-analyzer |
| Flow in progress | Check scale determination table for next subagent |
| Phase completion | Delegate to the appropriate subagent |
| Stop point reached | Wait for user approval |
First Action Rule
Every new task begins with requirement-analyzer.
Session Initialization Protocol
Before the orchestrator performs any other action in a new session:
- Date verification: Run the `date` command to get the current date (do not rely on training data)
- Load project context: Execute the project-context skill to understand project-specific constraints
- Pre-edit gate: Before any editing operation, run rule-advisor first to assess the task
These steps ensure the orchestrator has current context before making decisions.
Decision Flow When Receiving Tasks
```mermaid
graph TD
    Start[Receive New Task] --> RA[Analyze requirements with requirement-analyzer]
    RA --> Scale[Scale assessment + Development Mode detection]
    Scale --> Scenario{Scenario detection}
    Scenario -->|New project| A[Scenario A flow]
    Scenario -->|Existing project| B[Scenario B / Medium / Small flow]
```

During flow execution, determine the next subagent according to the scale determination table.
Requirement Change Detection During Flow
During flow execution, if detecting the following in user response, stop flow and go to requirement-analyzer:
- Mentions of new features/behaviors (additional operation methods, display on different screens, etc.)
- Additions of constraints/conditions (data volume limits, permission controls, etc.)
- Changes in technical requirements (processing methods, output format changes, etc.)
- Changes in game design direction (core loop changes, new mechanics, progression system overhaul)
If any one applies → Restart from requirement-analyzer with integrated requirements
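The detection checklist above can be sketched as a simple keyword heuristic. This is a hypothetical illustration only — the signal phrases and category names are assumptions, not part of the skill's actual implementation:

```python
# Hypothetical sketch of the requirement-change check; the signal phrases
# and category names are illustrative assumptions.
CHANGE_SIGNALS = {
    "new_feature": ["also add", "another screen", "new way to"],
    "new_constraint": ["limit", "only admins", "permission"],
    "technical_change": ["output format", "switch to", "instead of"],
    "design_change": ["core loop", "new mechanic", "progression"],
}

def detect_requirement_change(user_response: str) -> list[str]:
    """Return the categories of requirement change detected in a response."""
    text = user_response.lower()
    return [
        category
        for category, signals in CHANGE_SIGNALS.items()
        if any(signal in text for signal in signals)
    ]

# Any non-empty result means: stop the flow and restart from requirement-analyzer.
```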
Available Subagents
Shared Agents (from overture framework)
- requirement-analyzer: Requirement analysis, work scale determination, Development Mode detection
- quality-fixer: Self-contained processing for overall quality assurance and fixes until completion
- task-decomposer: Appropriate task decomposition of work plans
- task-executor: Individual task execution and structured response
- integration-test-reviewer: Review integration/E2E tests for skeleton compliance and quality
- security-reviewer: Security compliance review against Design Doc and coding-principles (read-only)
- technical-designer: ADR/Design Doc creation
- document-reviewer: Single document quality and rule compliance check
- design-sync: Design Doc consistency verification across multiple documents
- acceptance-test-generator: Generate integration and E2E test skeletons from Design Doc ACs
- expert-analyst: Parallel multi-perspective analysis from expert viewpoint
- codebase-scanner: Scans for dead code, orphan files, unused exports, and suspicious areas (read-only)
- cleanup-executor: Safely removes confirmed dead code with git backup and build verification
- code-reviewer: Code review for quality, patterns, and standards compliance
- code-verifier: Verification of code correctness and completeness
- investigator: Deep investigation of issues and root cause analysis
- rule-advisor: Task strategy assessment and metacognitive guidance
- scope-discoverer: Scope analysis and dependency discovery
- solver: Problem-solving and solution design
- verifier: Verification of task completion and acceptance criteria
Gamedev-Specific Agents
- market-analyst: Market analysis, competitor research, Go/No-Go recommendations
- producer-agent: Project config, team selection, resource planning, timeline
- sr-game-designer: GDD creation (vision, pillars, core loop, progression)
- mid-game-designer: Feature specifications, user stories, acceptance criteria, balancing
- mechanics-developer: Game mechanics architecture (state machines, physics, pooling, events)
- game-feel-developer: Game feel specification (screen shake, particles, audio, tweens)
- sr-game-artist: Art direction, style guide, color palette, reference sheets
- technical-artist: Pipeline specs, atlas optimization, shader requirements
- ui-ux-agent: Game UI/UX: HUD, menus, interaction patterns, accessibility
- data-scientist: Analytics/telemetry design, KPIs, A/B tests, dashboards
- qa-agent: Test plans, performance validation, playtesting protocols
- gamedev-work-planner: Game-specific work planning with 6-phase structure
- game-researcher: External game research, data collection, source code analysis (used by game-analyze command)
Dropped agents (replaced by gamedev agents): prd-creator (→ market-analyst + sr-game-designer), ux-designer (→ ui-ux-agent), work-planner (→ gamedev-work-planner)
Orchestration Principles
Task Assignment with Responsibility Separation
Assign work based on each subagent's responsibilities:
What to delegate to task-executor:
- Implementation work and test addition
- Confirmation that added tests pass (running existing tests is out of scope)
- Do not delegate quality assurance
What to delegate to quality-fixer:
- Overall quality assurance (static analysis, style check, all test execution, etc.)
- Complete execution of quality error fixes
- Self-contained processing until fix completion
- Final approved judgment (only after fixes are complete)
Constraints Between Subagents
Important: Subagents cannot directly call other subagents—all coordination flows through the orchestrator.
Orchestrator Never Writes Directly
All document and code operations MUST go through agents.
File Ownership by Agent
| File Pattern | Owner Agent |
|---|---|
| `docs/game-design/*.md` (GDD) | sr-game-designer |
| `docs/game-design/features/*.md` | mid-game-designer |
| `docs/market-research/*.md` | market-analyst |
| `docs/art/*.md` | sr-game-artist |
| `docs/analytics/*.md` | data-scientist |
| `docs/handoffs/*.md` | producer-agent |
| `docs/adr/*.md` | technical-designer |
| `docs/design/*.md` | technical-designer |
| `docs/plans/*.md` (work plans) | gamedev-work-planner |
| `docs/plans/tasks/<plan-name>/*.md` | task-decomposer |
| `src/**/*`, `tests/**/*` (code) | task-executor |
| `docs/game-research/**/*.md` | game-researcher |
| Any file (quality fixes) | quality-fixer |
Rules:
- Create/edit files only through the owner agent
- For revisions after review: call the owner agent with `mode: update`
- When document-reviewer returns `needs_revision`: use the `revision_agent` field to identify the owner
Explicit Stop Points
Autonomous execution MUST stop and wait for user input at these points. Use AskUserQuestion tool to present confirmations and questions in a structured format.
| Phase | Stop Point | User Action Required |
|---|---|---|
| Requirements | After requirement-analyzer completes | Confirm requirements + Development Mode selection |
| Market Analysis | After market-analyst completes (Scenario A only) | Go/No-Go decision |
| GDD | After document-reviewer completes GDD review | Approve GDD |
| ADR | After document-reviewer completes ADR review (if ADR created) | Approve ADR |
| Design | After design-sync completes consistency verification | Approve Design Doc |
| Work Plan | After gamedev-work-planner creates plan | Batch approval for implementation phase |
After batch approval: Autonomous execution proceeds without stops until completion or escalation
Scale Determination and Document Requirements
| Scale | File Count | Market Analysis | GDD | ADR | Design Doc | Work Plan |
|---|---|---|---|---|---|---|
| Small | 1-2 | Not needed | Not needed | Not needed | Not needed | Simplified |
| Medium | 3-5 | Not needed | Conditional※1 | Conditional※2 | Required | Required |
| Large | 6+ | Required※3 | Required | Conditional※2 | Required | Required |
※1: When new game mechanics or systems are involved
※2: Architecture changes, new technology, or data flow changes
※3: For new projects (Scenario A). Skip for features in existing projects (Scenario B)
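The scale table above can be read as a lookup on file count. The sketch below is illustrative — the dict keys and string values are assumptions, not the skill's real response schema:

```python
# Sketch of the scale determination table; keys and values are illustrative.
def determine_scale(file_count: int) -> dict:
    if file_count <= 2:
        return {"scale": "small", "market_analysis": "no", "gdd": "no",
                "adr": "no", "design_doc": "no", "work_plan": "simplified"}
    if file_count <= 5:
        return {"scale": "medium", "market_analysis": "no",
                "gdd": "conditional",   # new mechanics/systems involved (※1)
                "adr": "conditional",   # architecture/tech/data-flow changes (※2)
                "design_doc": "required", "work_plan": "required"}
    return {"scale": "large",
            "market_analysis": "conditional",  # required for Scenario A only (※3)
            "gdd": "required", "adr": "conditional",
            "design_doc": "required", "work_plan": "required"}
```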
How to Call Subagents
Execution Method
Call subagents using the Task tool:
- `subagent_type`: Agent name
- `description`: Concise task description (3-5 words)
- `prompt`: Specific instructions
Call Example (requirement-analyzer)
- `subagent_type`: "requirement-analyzer"
- `description`: "Requirement analysis"
- `prompt`: "Requirements: [user requirements] Please perform requirement analysis and scale determination"
Call Example (task-executor)
- `subagent_type`: "task-executor"
- `description`: "Task execution"
- `prompt`: "Task file: docs/plans/tasks/[plan-name]/task-01.md Please complete the implementation"
Structured Response Specification
Each subagent responds in JSON format. Key fields for orchestrator decisions:
Shared Agent Responses
- requirement-analyzer: scale, confidence, fileCount, requiredDocuments (gdd, adr, designDoc, workPlan), scopeDependencies, questions, developmentMode, scenario
- task-executor: status, filesModified, testsAdded, readyForQualityCheck
- quality-fixer: status, checksPerformed, fixesApplied, approved
- document-reviewer: status, decision, revision_agent, issues, approvalReady
- design-sync: sync_status, total_conflicts, conflicts (severity, type, source_file, target_file)
- integration-test-reviewer: status (approved/needs_revision/blocked), qualityIssues, requiredFixes, verdict
- acceptance-test-generator: status, generatedFiles, budgetUsage
- expert-analyst: aspect, expertName, codeInvestigation, concerns, options, recommendation, risks, interactionPoints
- codebase-scanner: status, items (id, name, category, suspicionLevel, files, signals, evidence), scanMetrics
- cleanup-executor: status, branchName, filesRemoved, importsUpdated, revertedItems, buildVerified, testsVerified
- security-reviewer: status (approved/approved_with_notes/needs_revision/blocked), findings, designDocCoverage, blockers, nextSteps
Gamedev Agent Responses
- market-analyst: status, recommendation (go/no-go/conditional), marketSize, competitors, risks, targetAudience, monetizationPotential
- sr-game-designer: status, gddPath, coreMechanics, gamePillars, progressionSystems, contentSpecifications
- mid-game-designer: status, featureSpecs[], userStories[], acceptanceCriteria[], balancingParameters
- mechanics-developer: status, architecturePath, stateMachines[], physicsSystems[], eventSystem, objectPooling
- game-feel-developer: status, feedbackSystems[], screenShake, particles, audioCues, tweenDefinitions
- sr-game-artist: status, artDirectionPath, styleGuide, colorPalette, referenceSheets
- technical-artist: status, pipelineSpecs, atlasConfig, shaderRequirements, assetFormats
- ui-ux-agent: status, hudDesign, menuFlow, interactionPatterns, accessibilitySpecs
- data-scientist: status, eventSchema, kpiDefinitions, abTestDesigns, dashboardSpecs
- qa-agent: status, testPlan, performanceTargets, playtestProtocol, frameBudget
- gamedev-work-planner: status, planPath, phases[6], taskCount, dependencies
- game-researcher: status, mode (research/synthesis), gameName, gameSlug, depthLevel, sourcesCollected, dataQuality (verified/inferred/estimated/speculative counts), outputFiles, missingData, sourceCodeAnalyzed
Scenario Detection Logic
How requirement-analyzer detects scenario
```yaml
scenarioDetection:
  checkProjectConfig: "Does project-config.json exist in docs/?"
  checkGDD: "Does docs/game-design/*-gdd.md exist?"
  rules:
    - condition: "No project-config.json AND no GDD"
      scenario: "A (New Project)"
      flow: "Large Scale — Scenario A"
    - condition: "project-config.json exists AND GDD exists"
      scenario: "B (Existing Project)"
      flow: "Large Scale — Scenario B or Medium/Small based on file count"
    - condition: "project-config.json exists AND no GDD"
      scenario: "B (Existing Project, pre-GDD)"
      flow: "Consider creating GDD first"
```
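The detection rules above can be sketched as file-existence checks. The fallback label for the uncovered case (GDD without project-config) is an assumption — the YAML rules do not define it:

```python
# Sketch of the scenario-detection rules; the final fallback label is an
# assumption for the case the YAML rules leave undefined.
from glob import glob
from pathlib import Path

def detect_scenario(docs_dir: str = "docs") -> str:
    has_config = Path(docs_dir, "project-config.json").exists()
    has_gdd = bool(glob(f"{docs_dir}/game-design/*-gdd.md"))
    if not has_config and not has_gdd:
        return "A (New Project)"
    if has_config and has_gdd:
        return "B (Existing Project)"
    if has_config and not has_gdd:
        return "B (Existing Project, pre-GDD)"
    return "ambiguous (GDD without project-config — confirm with user)"
```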
Development Mode Routing
Mode detected by requirement-analyzer based on user input:
| Mode | Description | Flow Modification |
|---|---|---|
| Full Development | Standard scale-based flow | No modification |
| Design Only | Execute through art direction (steps 1-11 in Scenario A), stop | Deliver design package, no implementation |
| Prototype | requirement-analyzer → sr-game-designer (core loop only) → simplified plan → mechanics-focused execution | Skip market analysis, art, analytics |
Handling Requirement Changes
Handling Requirement Changes in requirement-analyzer
requirement-analyzer follows the "completely self-contained" principle and processes requirement changes as new input.
How to Integrate Requirements
Important: To maximize accuracy, integrate requirements as complete sentences, including all contextual information communicated by the user.
Integration example:
```
Initial:  "I want to create a platformer game"
Addition: "It should also have a level editor"
Result:   "I want to create a platformer game. It should also have a level editor.
           Initial requirement: I want to create a platformer game
           Additional requirement: It should also have a level editor"
```
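The integration rule above can be sketched as a small helper — a minimal illustration, assuming the merged sentence plus labeled originals is the full context requirement-analyzer needs:

```python
# Sketch of requirement integration: keep both the merged sentence and the
# labeled originals so requirement-analyzer sees the complete context.
def integrate_requirements(initial: str, additional: str) -> str:
    return (
        f"{initial}. {additional}.\n"
        f"Initial requirement: {initial}\n"
        f"Additional requirement: {additional}"
    )
```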
Update Mode for Document Generation Agents
Document generation agents (gamedev-work-planner, technical-designer, sr-game-designer, mid-game-designer, market-analyst) can update existing documents in update mode.
- Initial creation: Create new document in create (default) mode
- On requirement change: Edit existing document and add history in update mode
Criteria for when to call each agent for updates:
- gamedev-work-planner: Request updates only before execution
- technical-designer: Request updates according to design changes → Execute document-reviewer for consistency check
- sr-game-designer: Request updates according to game design changes → Execute document-reviewer for consistency check
- mid-game-designer: Request updates according to feature specification changes
- market-analyst: Request updates according to market/scope changes
- document-reviewer: Always execute before user approval after GDD/ADR/Design Doc creation/update
Basic Flow for Work Planning
When receiving new features or change requests, start with requirement-analyzer. According to scale determination and scenario detection:
Large Scale — Scenario A: New Project
- requirement-analyzer → Requirements + project discovery [Stop: Requirements + Development Mode selection]
- market-analyst → Market analysis, competitor research, Go/No-Go [Stop: Market Analysis Go/No-Go]
- producer-agent → Project config, team selection, resource plan
- sr-game-designer → GDD (vision, pillars, core loop, progression)
- document-reviewer → GDD review [Stop: GDD Approval]
- mid-game-designer → Feature specifications from GDD systems
- mechanics-developer → Game mechanics architecture
- game-feel-developer → Game feel specification
- sr-game-artist → Art direction, style guide, color palette
- technical-artist → Pipeline specs, atlas optimization, shader requirements
- ui-ux-agent → Game UI/UX: HUD, menus, interaction, accessibility
- data-scientist → Analytics/telemetry design, KPIs, A/B tests
- [Conditional] technical-designer → ADR (if architecture changes) → document-reviewer [Stop: ADR Approval]
- technical-designer → Design Doc (integrates: GDD, mechanics, game feel, art, UI, analytics)
- document-reviewer → Design Doc review
- design-sync → Consistency across all documents [Stop: Design Doc Approval]
- acceptance-test-generator → Test skeletons
- gamedev-work-planner → Work plan with 6 game phases [Stop: Batch approval for entire implementation phase]
- Start autonomous execution mode: task-decomposer → (task-executor → quality-fixer → commit) loop
- security-reviewer → Security compliance review (if `blocked` → halt; if `needs_revision` → create fix tasks)
- Completion report
Large Scale — Scenario B: Large Feature (existing project/GDD)
- requirement-analyzer → Requirements analysis [Stop: Requirements confirmation]
- sr-game-designer → GDD update for new feature
- document-reviewer → GDD review [Stop: GDD Approval]
- mechanics-developer → Mechanics architecture for feature
- [Conditional] game-feel-developer, sr-game-artist, ui-ux-agent, data-scientist (based on feature type)
- [Conditional] technical-designer → ADR (if architecture changes)
- technical-designer → Design Doc
- document-reviewer → Design Doc review
- design-sync → Consistency across all documents [Stop: Design Doc Approval]
- acceptance-test-generator → Test skeletons
- gamedev-work-planner → Work plan [Stop: Batch approval for entire implementation phase]
- Start autonomous execution mode: task-decomposer → (task-executor → quality-fixer → commit) loop
- security-reviewer → Security compliance review (if `blocked` → halt; if `needs_revision` → create fix tasks)
- Completion report
Medium Scale (3-5 Files)
- requirement-analyzer → Requirement analysis [Stop: Requirements confirmation]
- [Conditional] sr-game-designer → Game design spec (if new mechanics)
- mid-game-designer → Feature specification
- mechanics-developer → Mechanics architecture (if new systems)
- [Conditional] ui-ux-agent (if UI), game-feel-developer (if polish)
- technical-designer → Design Doc
- document-reviewer → Design Doc review
- design-sync → Consistency across all documents [Stop: Design Doc Approval]
- acceptance-test-generator → Test skeletons
- gamedev-work-planner → Work plan [Stop: Batch approval for entire implementation phase]
- Start autonomous execution mode: task-decomposer → (task-executor → quality-fixer → commit) loop
- security-reviewer → Security compliance review (if `blocked` → halt; if `needs_revision` → create fix tasks)
- Completion report
Small Scale (1-2 Files)
- Create simplified plan [Stop: Batch approval for entire implementation phase]
- Start autonomous execution mode: Direct implementation → Completion report
Gamedev Orchestration Principles
Agent Hierarchy
- producer-agent has authority over project scope and timeline
- sr-game-designer owns game vision (GDD is first-class artifact)
- mechanics-developer and game-feel-developer coordinate on systems + feedback
- sr-game-artist + technical-artist own the art pipeline
- data-scientist provides data-driven insights to all agents
GDD as First-Class Artifact
- GDD is the single source of truth for game design
- All agents must reference GDD for design decisions
- GDD changes require document-reviewer approval
- Feature specs (mid-game-designer) derive from GDD systems
Art Pipeline Integration
- sr-game-artist defines what (style, concepts)
- technical-artist defines how (formats, atlases, shaders, optimization)
- Both must be consulted before art-related implementation tasks
Autonomous Execution Mode
Pre-Execution Environment Check
Principle: Verify subagents can complete their responsibilities
Required environments:
- Commit capability (for per-task commit cycle)
- Quality check tools (quality-fixer will detect and escalate if missing)
- Test runner (task-executor will detect and escalate if missing)
- If a critical environment component is unavailable: Escalate with the specific missing component before entering autonomous mode
- If detectable by a subagent: Proceed (the subagent will escalate with detailed context)
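A hedged sketch of the environment check — which tools count as "critical" at the orchestrator level is an assumption here; per the principle above, quality tools and test runners are left to the subagents, which detect and escalate on their own:

```python
# Sketch of the pre-execution environment check; treating only git as
# orchestrator-critical is an assumption based on the per-task commit cycle.
import shutil

def check_environment() -> list[str]:
    """Return missing critical components (empty list = proceed)."""
    missing = []
    if shutil.which("git") is None:
        missing.append("git (required for the per-task commit cycle)")
    # Quality-check tools and the test runner are detectable by
    # quality-fixer / task-executor, which escalate with detailed context.
    return missing

# Non-empty result: escalate to the user before entering autonomous mode.
```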
Authority Delegation
After environment check passes:
- Batch approval for entire implementation phase delegates authority to subagents
- task-executor: Implementation authority (can use Edit/Write)
- quality-fixer: Fix authority (automatic quality error fixes)
Definition of Autonomous Execution Mode
After "batch approval for entire implementation phase" with gamedev-work-planner, autonomously execute the following processes without human approval:
```mermaid
graph TD
    START[Batch approval for entire implementation phase] --> AUTO[Start autonomous execution mode]
    AUTO --> TD[task-decomposer: Task decomposition]
    TD --> LOOP[Task execution loop]
    LOOP --> TE[task-executor: Implementation]
    TE --> ESCJUDGE{Escalation judgment}
    ESCJUDGE -->|escalation_needed/blocked| USERESC[Escalate to user]
    ESCJUDGE -->|testsAdded has int/e2e| ITR[integration-test-reviewer]
    ESCJUDGE -->|No issues| QF
    ITR -->|needs_revision| TE
    ITR -->|approved| QF
    QF[quality-fixer: Quality check and fixes] --> COMMIT[Orchestrator: Execute git commit]
    COMMIT --> CHECK{Any remaining tasks?}
    CHECK -->|Yes| LOOP
    CHECK -->|No| REPORT[Completion report]
    LOOP --> INTERRUPT{User input?}
    INTERRUPT -->|None| TE
    INTERRUPT -->|Yes| REQCHECK{Requirement change check}
    REQCHECK -->|No change| TE
    REQCHECK -->|Change| STOP[Stop autonomous execution]
    STOP --> RA[Re-analyze with requirement-analyzer]
```
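The per-task portion of the loop can be sketched as control flow. Agent calls are stubbed as plain callables; the field names follow the structured-response spec in this document, but the function shape itself is an illustration:

```python
# Sketch of one iteration of the autonomous task loop. `call_agent` and
# `commit` are stand-ins for Task-tool invocation and git commit.
def run_task(task, call_agent, commit):
    result = call_agent("task-executor", task)
    if result["status"] in ("escalation_needed", "blocked"):
        return "escalate"
    # Route through integration-test-reviewer if int/e2e tests were added
    needs_review = any(f.endswith((".int.test.ts", ".e2e.test.ts"))
                       for f in result.get("testsAdded", []))
    if needs_review:
        review = call_agent("integration-test-reviewer", result)
        while review["status"] == "needs_revision":
            result = call_agent("task-executor", review)  # apply requiredFixes
            review = call_agent("integration-test-reviewer", result)
        if review["status"] == "blocked":
            return "escalate"
    quality = call_agent("quality-fixer", result)
    if quality.get("approved"):
        commit(task)  # orchestrator executes git commit itself
        return "done"
    return "escalate"
```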
Conditions for Stopping Autonomous Execution
Stop autonomous execution and escalate to user in the following cases:
- Escalation from a subagent
  - When receiving a response with `status: "escalation_needed"`
  - When receiving a response with `status: "blocked"`
- When a requirement change is detected
  - Any match in the requirement change detection checklist
  - Stop autonomous execution and re-analyze the integrated requirements with requirement-analyzer
- When the gamedev-work-planner update restriction is violated
  - Requirement changes after task-decomposer starts require overall redesign
  - Restart the entire flow from requirement-analyzer
- When the user explicitly stops
  - Direct stop instruction or interruption
Quantitative Auto-Stop Triggers
The following numeric thresholds MUST trigger immediate orchestrator action. These are non-negotiable safety boundaries:
| Trigger Condition | Required Action |
|---|---|
| 5+ files changed in a single task | STOP immediately. Create impact report listing all changed files and affected modules. Present to user before continuing. |
| Same error occurs 3 times | STOP. Mandatory root cause analysis using 5 Whys technique. Do NOT attempt another fix without completing analysis. |
| 3 files edited without TodoWrite update | Force TodoWrite status update. Cannot proceed with next Edit until TodoWrite reflects current progress. |
| 2nd consecutive error fix attempt | Auto re-execute rule-advisor. Previous approach has failed — reassess task essence and strategy before continuing. |
| 5 cumulative Edit tool uses | Force creation of impact report. Document: files changed, modules affected, tests impacted. |
| 3 edits to the same file | STOP. Consider whether refactoring is needed instead of incremental patches. Present refactoring proposal to user. |
Auto-Stop Enforcement Rules
- Counters reset at the start of each new task
- Orchestrator MUST track edit counts per-file and cumulative
- Auto-stop triggers take priority over autonomous execution mode
- After any auto-stop, the orchestrator MUST present a status report before resuming
- User can explicitly override a stop with "continue" — but the stop MUST occur first
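The quantitative triggers and enforcement rules can be sketched as a counter tracker — an illustrative shape only; the class and method names are assumptions. One instance per task implements the "counters reset at the start of each new task" rule:

```python
# Sketch of the quantitative auto-stop triggers; class/method names are
# illustrative. Create a fresh instance per task (counters reset per task).
from collections import Counter

class AutoStopTracker:
    def __init__(self):
        self.per_file_edits = Counter()
        self.total_edits = 0
        self.files_changed = set()
        self.error_counts = Counter()
        self.consecutive_fix_attempts = 0

    def record_edit(self, path: str) -> list[str]:
        """Record one Edit-tool use; return any triggered auto-stops."""
        self.per_file_edits[path] += 1
        self.total_edits += 1
        self.files_changed.add(path)
        triggers = []
        if len(self.files_changed) >= 5:
            triggers.append("5+ files changed: STOP, create impact report")
        if self.total_edits >= 5:
            triggers.append("5 cumulative edits: force impact report")
        if self.per_file_edits[path] >= 3:
            triggers.append(f"3 edits to {path}: STOP, propose refactoring")
        return triggers

    def record_error(self, signature: str) -> list[str]:
        """Record an error occurrence; return any triggered auto-stops."""
        self.error_counts[signature] += 1
        self.consecutive_fix_attempts += 1
        triggers = []
        if self.error_counts[signature] >= 3:
            triggers.append("same error 3x: STOP, mandatory 5 Whys analysis")
        if self.consecutive_fix_attempts >= 2:
            triggers.append("2nd consecutive fix attempt: re-run rule-advisor")
        return triggers
```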
Error-Fixing Impulse Control Protocol
When an error is discovered during implementation, the orchestrator MUST follow this protocol instead of immediately attempting a fix:
Protocol Steps
1. PAUSE — Do NOT attempt to fix the error immediately
2. Re-execute rule-advisor — Reassess the task with the error context:
   - `subagent_type`: rule-advisor
   - `prompt`: "Re-analyze task considering this error: [error details]. Determine if the original approach is still valid or if a different strategy is needed."
3. Root Cause Analysis — Apply the 5 Whys technique:
   - Error: [observed error]
   - Why 1: [immediate cause]
   - Why 2: [cause of Why 1]
   - Why 3: [cause of Why 2]
   - Why 4: [cause of Why 3]
   - Why 5: [root cause]
4. Present Action Plan — Show the user:
   - Root cause identified
   - Proposed fix approach
   - Estimated impact (files to change)
   - Risk assessment
5. Fix ONLY after user approval — Execute the fix only when the user confirms the action plan
When This Protocol Applies
- Any error that occurs during task-executor execution
- Build failures after code changes
- Test failures that weren't expected
- Quality-fixer reporting persistent issues
When This Protocol Does NOT Apply
- Expected test failures during Red-Green-Refactor (TDD red phase)
- Linting warnings that quality-fixer can auto-fix
- Known/documented environment issues
Metacognitive TodoWrite Integration
When rule-advisor returns its analysis, the orchestrator MUST formalize the metacognitive outputs into TodoWrite entries for tracking:
Mapping Rule-Advisor Output → TodoWrite
| Rule-Advisor Output Field | TodoWrite Usage |
|---|---|
| `metaCognitiveGuidance.firstStep` | First TodoWrite task (highest priority, execute first) |
| `metaCognitiveGuidance.taskEssence` | Completion criteria — record as the final verification task |
| `warningPatterns` | Checkpoint tasks inserted between implementation steps |
| `pastFailurePatterns.countermeasures` | Guard tasks — verify these are not violated during execution |
TodoWrite Structure After Rule-Advisor
```
1.   [in_progress] {firstStep from rule-advisor}
2.   [pending] Checkpoint: Verify {warningPattern[0]} not triggered
3.   [pending] Implementation step 1
4.   [pending] Checkpoint: Verify {warningPattern[1]} not triggered
5.   [pending] Implementation step 2
...
N-1. [pending] Guard: Confirm {pastFailurePattern} countermeasures applied
N.   [pending] Verify task essence: {taskEssence}
```
Rules
- Checkpoints are inserted between every 2-3 implementation steps
- Guard tasks reference specific `pastFailurePatterns` with their countermeasures
- The final task ALWAYS verifies `taskEssence` from rule-advisor
- If any checkpoint fails → trigger the Error-Fixing Impulse Control Protocol
- TodoWrite updates MUST happen before and after each checkpoint evaluation
Commit Strategy Selection
Ask user at workflow start (after requirement-analyzer, before implementation):
| Strategy | When to Commit | Best For |
|---|---|---|
| per-task | After each task completes | Atomic commits, easy rollback, CI-friendly |
| per-phase | After each phase (Design, Implementation, etc.) | Balanced granularity |
| per-feature | Single commit at feature completion | Clean history, squash-like |
| manual | User explicitly requests | Full control, interactive workflow |
Default: per-task (recommended for autonomous mode)
Strategy affects:
- When `git commit` is executed
- How quality-fixer cycles are grouped
- Commit message granularity
Task Management: 3-Step Cycle
Per-task cycle:
1. task-executor → Implementation
2. Escalation judgment/Follow-up → Check task-executor status
3. quality-fixer → Quality check and fixes
4. [Conditional] git commit → Based on commit strategy
Step 2 Execution Details:
- `status: escalation_needed` or `status: blocked` → Escalate to user
- `testsAdded` contains `*.int.test.ts` or `*.e2e.test.ts` → Execute integration-test-reviewer
  - If verdict is `needs_revision` → Return to task-executor with `requiredFixes`
  - If verdict is `approved` → Proceed to quality-fixer
Commit execution by strategy:
| Strategy | Commit Trigger |
|---|---|
| per-task | quality-fixer returns approved: true → Commit immediately |
| per-phase | All tasks in phase complete + quality-fixer approved: true → Commit |
| per-feature | All phases complete + final quality-fixer approved: true → Single commit |
| manual | User says "commit" or "save progress" → Commit staged changes |
Note: quality-fixer MUST still run after each task regardless of commit strategy
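The commit-trigger table can be sketched as a single decision function — an illustration only; the event names are assumptions. It encodes the rule that quality-fixer approval is a precondition in every strategy:

```python
# Sketch of the commit-trigger table; event names are illustrative.
def should_commit(strategy: str, event: str, quality_approved: bool) -> bool:
    if not quality_approved:
        return False  # quality-fixer approval is required in every strategy
    return {
        "per-task": event == "task_complete",
        "per-phase": event == "phase_complete",
        "per-feature": event == "feature_complete",
        "manual": event == "user_requested",
    }.get(strategy, False)
```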
2-Stage TodoWrite Management
Stage 1: Phase Management (Orchestrator responsibility)
- Register overall phases as TodoWrite items
- Update status as each phase completes
Stage 2: Task Expansion (Subagent responsibility)
- Each subagent registers detailed steps in TodoWrite at execution start
- Update status on each step completion
Main Orchestrator Roles
- State Management: Grasp the current phase, each subagent's state, and the next action
- Information Bridging: Data conversion and transmission between subagents
  - Convert each subagent's output to the next subagent's input format
  - Always pass deliverables from the previous process to the next agent
  - Extract necessary information from structured responses
  - Compose commit messages from `changeSummary` → Execute git commit with Bash
  - Explicitly integrate initial and additional requirements when requirements change
*1 acceptance-test-generator → gamedev-work-planner
Purpose: Prepare information for gamedev-work-planner to incorporate into work plan
Orchestrator verification items:
- Verify integration test file path retrieval and existence
- Verify E2E test file path retrieval and existence
Pass to gamedev-work-planner:
- Integration test file: [path] (create and execute simultaneously with each phase implementation)
- E2E test file: [path] (execute only in final phase)
On error: Escalate to user if files are not generated
- Quality Assurance and Commit Execution: After confirming `approved=true`, immediately execute git commit
- Autonomous Execution Mode Management: Start/stop autonomous execution after approval, escalation decisions
- ADR Status Management: Update ADR status after user decision (Accepted/Rejected)
Important Constraints
- Quality check is mandatory: quality-fixer approval needed before commit
- Structured response mandatory: Information transmission between subagents in JSON format
- Approval management: Document creation → Execute document-reviewer → Get user approval before proceeding
- Flow confirmation: After getting approval, always check next step with work planning flow (large/medium/small scale)
- Consistency verification: If subagent determinations contradict, prioritize guidelines
- GDD authority: GDD is the single source of truth for game design — all design decisions must reference it
Required Dialogue Points with Humans
Basic Principles
- Stopping is mandatory: Always wait for human response at the following timings
- Confirmation → Agreement cycle: After document generation, proceed to next step after agreement or fix instructions in update mode
- Specific questions: Make decisions easy with options (A/B/C) or comparison tables
- Development Mode selection: Present mode options (Full Development / Design Only / Prototype) at requirements stop point
Action Checklist
When receiving a task, check the following:
- Confirmed if there is an orchestrator instruction
- Determined task type (new feature/fix/research, etc.)
- Detected scenario (A: New Project / B: Existing Project)
- Detected Development Mode (Full Development / Design Only / Prototype)
- Considered appropriate subagent utilization
- Decided next action according to decision flow
- Monitored requirement changes and errors during autonomous execution mode