Install: `npx claudepluginhub josstei/maestro-orchestrate --plugin maestro`

This skill uses the workspace's default tool permissions.
**Standard workflow only.** If `task_complexity` is `simple` and workflow mode is Express, do not activate this skill. Simple tasks use the Express workflow, which does not activate design-dialogue. Return to the Express Workflow section.
Turns ideas into approved designs before coding features, components, or changes. Starting from a spec or approved SRS (when no prior design exists), it explores user intent, requirements, the codebase, and alternatives via investigation agents, proposes 2-3 architectural approaches with trade-offs, secures section-by-section approval, then writes design.md. Use for 'design this' or 'brainstorm approaches'.
Activate this skill when beginning Phase 1 of Maestro orchestration. Immediately call EnterPlanMode to enter Plan Mode for the design phase.

If the tool call fails or is unavailable, inform the user that Plan Mode is not enabled and provide activation instructions: "Plan Mode gives you a dedicated review surface for designs and plans. To enable it, run: gemini --settings and set experimental.plan to true, then restart this session." Ask whether the user wants to pause and enable it, or continue without Plan Mode. If continuing without Plan Mode, use AskUserQuestion with type: 'yesno' for design approvals and type: 'choice' for approach selection.

This skill provides the structured methodology for conducting design conversations that converge on approved architectural designs.
User confirmation sequence: Phase 1 entry triggers two user-facing confirmations — first the Skill consent dialog (required for non-builtin skills), then the EnterPlanMode transition. Both are expected; do not treat the second confirmation as redundant or skip it.
Before asking any design questions, present the user with a depth selector to control the level of reasoning rigour applied throughout the design phase. Use AskUserQuestion with type: 'choice' to offer three modes. Lead with Standard as the recommended default.
Modes:
- Quick: accept answers and move on; no enrichment steps or reasoning annotations.
- Standard (recommended default): surface and confirm assumptions after each answer; includes a decision matrix and rationale annotations during convergence.
- Deep: everything in Standard, plus trade-off narration, one probing follow-up per question, per-decision alternatives considered, and requirement traceability.
Depth propagation: Remember the user's chosen depth mode and apply it consistently to all subsequent steps in this skill. The depth mode is not re-prompted — it is set once and carried forward. If the user's answer to the depth prompt is ambiguous, default to Standard.
Depth vs. complexity: Depth and complexity guidance (simple/medium/complex) are orthogonal. Complexity controls which sections appear and word count per section. Depth controls reasoning richness within each section. They compose independently — a user may select Deep depth on a Simple complexity task or Quick depth on a Complex task. Both are valid choices.
Frontmatter: Record the chosen depth in the design document frontmatter as design_depth: quick | standard | deep. Also record task_complexity: simple | medium | complex in the design document frontmatter after design_depth.
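A minimal frontmatter sketch showing only the two fields named here; the full field set comes from templates/design-document.md:

```yaml
---
design_depth: standard    # quick | standard | deep
task_complexity: medium   # simple | medium | complex
---
```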
First-Turn Contract: On the first turn, Maestro presents the complexity classification result (classified per the complexity classification section in the orchestrator) and the depth selector with a complexity-informed recommendation. For simple tasks, auto-select Quick and inform the user: "This looks straightforward — using Quick depth. Say 'deeper' if you want more analysis." For medium tasks, recommend Standard. For complex tasks, recommend Standard or Deep. The first actual design question moves to the second turn.
Before you start narrowing the architecture for work that touches an existing codebase, decide whether the task is already grounded.
Use the built-in Agent (Explore) / Grep / Glob when any of the following are true:
Ask the investigator for:
Skip Agent (Explore) / Grep / Glob for greenfield tasks, documentation-only work, or scopes that are already well understood from direct file reads in the current turn.
Use the investigator's output to:
Ask questions in this order to progressively narrow the design space:
Problem Scope & Boundaries
Technical Constraints & Limitations
Technology Preferences
Quality Requirements
Deployment Context
Scale question coverage based on task_complexity:
Use AskUserQuestion with type: 'choice' for structured selections.
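The AskUserQuestion payload schema is tool-defined and not specified in this document; the shape below is a hypothetical illustration of a structured selection. Only the `type` value comes from the text above; the `question` and `options` field names are assumptions:

```json
{
  "type": "choice",
  "question": "Which data store should the design target?",
  "options": ["PostgreSQL", "SQLite", "No preference, recommend one"]
}
```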
After the user answers each question, apply depth-gated enrichment steps before advancing to the next topic:
| Step | Quick | Standard | Deep |
|---|---|---|---|
| Accept answer and move on | Yes | Yes | Yes |
| Surface assumptions made from the answer | No | Yes | Yes |
| Ask user to confirm/correct assumptions | No | Yes | Yes |
| Probe implications with a follow-up question | No | No | Yes |
| Narrate trade-offs of the choice before moving on | No | No | Yes |
Quick mode: No enrichment steps. Accept the answer and proceed to the next question. Current behavior preserved.
Standard mode: After each user answer, state the assumptions you are making based on their response in 1-2 sentences, then ask the user to confirm or correct before proceeding. Example flow: question → answer → "Based on your answer, I'm assuming X and Y — correct?" → confirmation → next question.
Deep mode: After each user answer: (a) state and confirm assumptions as in Standard mode, (b) narrate the trade-offs of the choice in 1-2 sentences ("That choice means we gain A but give up B"), (c) if the answer has non-obvious implications (e.g., a technology choice that constrains future scaling options or creates a vendor lock-in dependency), ask one follow-up probing question before moving to the next topic. Cap at one follow-up per question.
Adaptive elision: If the user's answer is concrete, specific, and requires no inference (e.g., "What language?" → "TypeScript, same as the rest of the repo"), the assumption surfacing and trade-off narration steps may be skipped even in Deep mode. Only apply enrichment when there are genuine assumptions to surface or trade-offs to narrate. Do not elide when the answer implies unstated architectural trade-offs even if the answer itself is short (e.g., "REST" implies choices about state management, versioning, and contract evolution that are worth surfacing).
Present 2-3 architectural approaches after gathering sufficient requirements (typically after covering scope, constraints, and technology preferences).
If Agent (Explore) / Grep / Glob was used, present approaches only after incorporating its findings into the trade-off analysis. Do not treat the existing codebase structure as optional context.
For each approach, provide:
### Approach [N]: [Descriptive Name]
**Summary**: [2-3 sentence overview]
**Architecture**:
[Component diagram or description showing key components and their relationships]
**Pros**:
- [Concrete advantage with context]
- [Another advantage]
**Cons**:
- [Concrete disadvantage with context]
- [Another disadvantage]
**Best When**: [Specific conditions where this approach excels]
**Risk Level**: Low | Medium | High
In Standard and Deep modes, after presenting the 2-3 approaches with narrative pros/cons, also present a decision matrix that scores each approach against the gathered requirements. In Quick mode, skip the matrix.
Criteria derivation: Derive 3-6 scoring criteria from the requirements and constraints gathered during the question phase. Use the user's stated priorities to assign weights (sum to 100%). If the user has not explicitly stated priorities, infer relative weights from the emphasis given during the question phase; equal weighting is acceptable as a last resort. If fewer than 3 meaningful criteria emerge, skip the matrix and use narrative-only recommendation.
Scoring scale: Score each approach on each criterion using a 1-5 scale: 1=poor fit, 3=adequate, 5=strong fit. Include a brief justification (1 sentence) in each cell.
Matrix format:
| Criterion | Weight | Approach A | Approach B | Approach C (if applicable) |
|---|---|---|---|---|
| [Criterion from requirements] | [%] | [1-5]: [justification] | [1-5]: [justification] | [1-5]: [justification] |
| Weighted Total | 100% | [score] | [score] | [score] |
Tie-breaking: If approaches score within 1 point of each other in weighted totals, present the near-tie explicitly and use narrative judgment to break the tie, citing the single most decisive factor. Do not present a matrix-driven recommendation as definitive when the scores don't clearly differentiate.
Non-differentiating criteria: Criteria that score identically across all approaches may be noted but should be excluded from the matrix to keep it focused on differentiating factors. If removing non-differentiating criteria leaves fewer than 2 rows, skip the matrix and use narrative-only recommendation.
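The weighted-total arithmetic and the near-tie check can be sketched as follows. This is an illustrative sketch, not part of the skill: the criteria names, weights, and 1-5 scores are made up, and the 1-point tie threshold is taken from the tie-breaking rule above.

```python
def weighted_totals(weights, scores):
    """weights: {criterion: fraction}, summing to 1.0 (i.e., 100%).
    scores: {approach: {criterion: 1-5 score}}.
    Returns {approach: weighted total on the 1-5 scale}."""
    return {
        approach: sum(weights[c] * s[c] for c in weights)
        for approach, s in scores.items()
    }

# Hypothetical criteria derived from a question phase (weights sum to 100%).
weights = {"Maintainability": 0.40, "Performance": 0.35, "Delivery speed": 0.25}
scores = {
    "Approach A": {"Maintainability": 4, "Performance": 3, "Delivery speed": 5},
    "Approach B": {"Maintainability": 5, "Performance": 4, "Delivery speed": 2},
}

totals = weighted_totals(weights, scores)
# Tie-breaking rule: totals within 1 point of each other are a near-tie,
# so the recommendation must be broken narratively, not by the matrix.
a, b = totals["Approach A"], totals["Approach B"]
near_tie = abs(a - b) < 1.0
```

In this example both approaches land at 3.9, so the matrix alone cannot differentiate them and the narrative tie-break applies.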
Present the design document in sections, validating each before proceeding. Scale the number of sections to the task's complexity, but always present at least the minimum set.
Minimum sections (always required, regardless of task complexity):
Full presentation order (use for medium-to-complex tasks; matches templates/design-document.md structure):
Complexity guidance:
Never skip Problem Statement, Approach, or Risk Assessment. If you believe other sections add no value for the task, omit them — but state which sections you are skipping and why before presenting the first section.
After each section, use AskUserQuestion with type: 'yesno' for approval. Do not rely on a separate assistant message for the section content. The question body itself must include the section title and the full section summary so the user can review the material directly in the approval prompt.
Apply depth-gated reasoning enrichment to design section content during the convergence phase:
| Element | Quick | Standard | Deep |
|---|---|---|---|
| Pros/cons on approaches | Yes | Yes | Yes |
| Recommendation narrative | Yes | Yes | Yes |
| Decision matrix scoring approaches | No | Yes | Yes |
| Rationale annotations on section decisions | No | Yes | Yes |
| Per-decision alternatives considered | No | No | Yes |
| Requirement traceability (Traces To) | No | No | Yes |
Quick mode: No reasoning annotations. Present sections as-is — current behavior preserved.
Rationale annotations (Standard + Deep): For each key design decision within a section, include an inline explanation of why it was chosen, tied to specific project context from the question phase. A key decision is one that, if changed, would require reworking other parts of the design — routine or cosmetic choices (naming, formatting) are not key. Format: [decision] — *[rationale referencing specific requirements, constraints, or user-stated preferences]*
Per-decision alternatives (Deep only): For key sub-decisions (choices within a section that affect the design's shape), briefly note what was considered and rejected. Format: [decision] *(considered: [alternative A] — rejected because [reason]; [alternative B] — rejected because [reason])*
Requirement traceability (Deep only): Tag each key decision with Traces To: REQ-N referencing the numbered requirement it satisfies from the design document's Requirements section. Every requirement (functional and non-functional) should be traceable to at least one design decision. If the Requirements section was omitted due to complexity guidance (simple tasks), skip requirement traceability markers — rationale annotations and per-decision alternatives still apply.
Uniform application: Apply the chosen depth mode's reasoning rules uniformly to every section in the convergence phase. Do not selectively skip reasoning on some sections unless the adaptive elision rule applies (the decision is self-evident and requires no justification).
The write path depends on whether Plan Mode is active:
- Plan Mode active: `docs/maestro/plans/YYYY-MM-DD-<topic-slug>-design.md` (the only writable location during Plan Mode). After ExitPlanMode approval in Phase 2, the orchestrator copies it to the permanent location.
- Plan Mode inactive: `docs/maestro/plans/YYYY-MM-DD-<topic-slug>-design.md` (`docs/maestro` resolves from `MAESTRO_STATE_DIR`).

Where:
- `YYYY-MM-DD` is the current date
- `<topic-slug>` is a lowercase, hyphenated summary of the task (e.g., `user-auth-system`, `data-pipeline-refactor`)
- `<project>` is the CLI's internal project hash (resolved automatically by Write)

Use the design document template from templates/design-document.md. Include the `design_depth` field in the frontmatter, set to the depth mode chosen during the Design Depth Gate.
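The path construction can be sketched as below. The slugification rule (collapse non-alphanumeric runs to hyphens) is an assumption consistent with the "lowercase, hyphenated" description, not a rule stated by this skill:

```python
import datetime
import re

def design_doc_path(topic: str, date=None) -> str:
    """Build the Plan Mode write path: docs/maestro/plans/YYYY-MM-DD-<topic-slug>-design.md.
    Slug rule (illustrative): lowercase, non-alphanumeric runs become single hyphens."""
    date = date or datetime.date.today()
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f"docs/maestro/plans/{date.isoformat()}-{slug}-design.md"

design_doc_path("User Auth System", datetime.date(2025, 1, 15))
# → "docs/maestro/plans/2025-01-15-user-auth-system-design.md"
```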
The design document is complete when:
After writing the design document: