Execute all plan-reviewed phases autonomously with review loops, decision logging, and final implementation review
Read these files using the Read tool:
- .bee/STATE.md — if not found: NOT_INITIALIZED
- .bee/config.json — if not found: use {}
- .bee/PROJECT.md — if not found: skip (project index not available)

Use Glob to find .bee/specs/*/spec.md, .bee/specs/*/requirements.md, and .bee/specs/*/phases.md, then Read each:
You are running /bee:ship -- the autonomous orchestrator that executes all plan-reviewed phases, reviews each phase's implementation, logs every decision, runs a final implementation review, and presents results at completion. This command is fully autonomous during its pipeline (no AskUserQuestion during execution/review). Follow these steps in order.
Check these guards in order. Stop immediately if any fails:
NOT_INITIALIZED guard: If the dynamic context above contains "NOT_INITIALIZED" (meaning .bee/STATE.md does not exist), tell the user:
"BeeDev is not initialized. Run /bee:init first."
Do NOT proceed.
NO_SPEC guard: If the dynamic context above contains "NO_SPEC" (meaning no spec.md exists), tell the user:
"No spec found. Run /bee:new-spec first to create a specification."
Do NOT proceed.
NO_PHASES guard: If the dynamic context above contains "NO_PHASES" (meaning no phases.md exists), tell the user:
"No phases found. Run /bee:new-spec first to create a spec with phases."
Do NOT proceed.
Phases needing work guard: Read the Phases table from STATE.md. At least one phase must need work. A phase needs work if its Status is one of:
- PLAN_REVIEWED -- ready for execution
- EXECUTING -- execution in progress (resume)
- EXECUTED -- executed but not yet reviewed (skip to review)
- REVIEWING -- review in progress (resume review)

If NO phases match any of these statuses, tell the user:
"No phases need shipping. All phases are either not yet planned (run /bee:plan-all first) or already reviewed/tested/committed."
Do NOT proceed.
Read the Phases table from STATE.md. Extract all phase rows: phase number, phase name, Status, Plan column, Plan Review column, Executed column, Reviewed column.
Read phases.md from the Spec Context above to get full phase names and descriptions.
Read config.ship.max_review_iterations from config.json (default: 3). Store as $MAX_REVIEW_ITERATIONS.
Read config.ship.final_review from config.json (default: true). Store as $FINAL_REVIEW_ENABLED.
Read config.implementation_mode from config.json (defaults to "quality" if absent). Store as $IMPLEMENTATION_MODE.
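A hypothetical `.bee/config.json` exercising these keys might look like this (the stack names, paths, and test runners are illustrative, not from this document):

```json
{
  "implementation_mode": "quality",
  "ship": {
    "max_review_iterations": 3,
    "final_review": true
  },
  "stacks": [
    { "name": "nextjs", "path": "apps/web", "testRunner": "vitest" },
    { "name": "laravel", "path": "api", "testRunner": "pest" }
  ]
}
```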
Build a work list of phases in phase order (ascending by phase number). For each phase, classify its state:
- REVIEWED, TESTED, or COMMITTED -- fully skip this phase
- PLAN_REVIEWED -- needs the full execution-then-review pipeline
- EXECUTING -- resume execution from pending wave, then review
- EXECUTED -- skip execution, go directly to review
- REVIEWING -- skip execution, resume review

Display the discovery summary:
Ship: {total} phases discovered.
{For each phase:}
- Phase {N}: {name} -- {needs_execution | resume_execution | needs_review | resume_review | skip}
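The status-to-action classification above can be sketched as follows (treating unknown statuses as skip is an assumption, not stated in this document):

```python
def classify(status: str) -> str:
    """Map a STATE.md phase status to its ship-time action."""
    actions = {
        "PLAN_REVIEWED": "needs_execution",
        "EXECUTING": "resume_execution",
        "EXECUTED": "needs_review",
        "REVIEWING": "resume_review",
        "REVIEWED": "skip",
        "TESTED": "skip",
        "COMMITTED": "skip",
    }
    # Assumption: anything unrecognized is safest to skip
    return actions.get(status, "skip")
```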
Process each phase in phase order (Phase 1 first, then Phase 2, etc.). For each phase that is NOT classified as "skip":
3a. Phase Execution (for needs_execution and resume_execution phases)
Skip this step if the phase is classified as "needs_review" or "resume_review" (execution already complete).
Execute the phase using the execute-phase pipeline (Steps 2-5 from execute-phase.md). The key difference from the interactive execute-phase command: ship does NOT use AskUserQuestion at any point during execution. No wave completion menus, no failure interaction menus.
3a.1: Load TASKS.md
Read {spec-path}/phases/{NN}-*/TASKS.md, where NN is the zero-padded phase number.

3a.2: Parse Wave Structure and Detect Resume Point
Parse the wave structure (## Wave N headers). Task status markers:

- [x] = completed (skip this task)
- [ ] = pending (needs execution)
- [FAILED] = previously failed (will get one retry)

Find the first wave containing [ ] or [FAILED] tasks -- this is the resume point. If every task is [x] (all complete): update STATE.md Status to EXECUTED and skip to Step 3b.

Display resume status:
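The wave parsing and resume-point detection can be sketched as follows (the exact checkbox line format in TASKS.md is an assumption):

```python
import re

def find_resume_wave(tasks_md: str):
    """Return (wave_number, pending_task_lines) for the first wave that
    still contains [ ] or [FAILED] tasks, or None when every task is [x]."""
    waves = {}          # wave number -> list of task lines
    current = None
    for line in tasks_md.splitlines():
        header = re.match(r"^## Wave (\d+)", line)
        if header:
            current = int(header.group(1))
            waves[current] = []
        elif current is not None and re.match(r"^- \[(x| |FAILED)\]", line):
            waves[current].append(line)
    for wave in sorted(waves):
        pending = [t for t in waves[wave] if not t.startswith("- [x]")]
        if pending:
            return wave, pending      # resume point
    return None                       # all complete: mark phase EXECUTED
```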
3a.3: Update STATE.md to EXECUTING
- Re-read .bee/STATE.md from disk (fresh read, not cached).
- Set the phase Status to EXECUTING.
- If the overall status is SPEC_CREATED, set it to IN_PROGRESS.
- Set progress to Wave 0/{total_waves}.
- Record the last command as /bee:ship.

3a.4: Execute Waves
For each wave starting from the resume point, repeat the following:
Build context packets for pending tasks in this wave:
For each pending [ ] or [FAILED] task in the current wave, assemble a context packet. The packet is the sole input the implementer agent receives -- it must be self-contained.
Include in each context packet:
Task identity: Task ID (e.g., T1.3) and full description line from TASKS.md
Acceptance criteria: The task's acceptance: field verbatim
Research notes: The task's research: field
Context file paths: The task's context: field -- list of file paths for the agent to read at runtime
Dependency notes (Wave 2+ only): Read the task's needs: field to find dependency task IDs. Look up each dependency task in TASKS.md and include its notes: section content.
For [FAILED] tasks: Include the previous failure reason from the task's notes: section: "Previous attempt failed. Reason: {failure_reason}. Address this issue before proceeding."
Stack skill instruction: Resolve the correct stack(s) for each task using the following logic:
Read config: Check .bee/config.json. If config.stacks exists, use it. If config.stacks is absent (v2 config), treat config.stack as a single-entry stacks array: [{ "name": config.stack, "path": "." }].
Single-stack fast path: If the stacks array has exactly one entry, skip path-overlap logic entirely. Use the original instruction: "Read .bee/config.json to find your stack, then read the matching stack skill at skills/stacks/{stack}/SKILL.md for framework conventions."
Multi-stack path overlap: When the stacks array has more than one entry, compare each stack's path value against the file paths listed in the task's context: and research: fields. A file matches a stack if the file path starts with (or is within) the stack's path value. A stack with path set to "." matches everything. Collect all stacks that have at least one matching file.
Build the instruction:
"Read .bee/config.json for the stacks array. Read the stack skill at skills/stacks/{stack}/SKILL.md for each of these stacks: [{matched_stack1}, {matched_stack2}]."

If no stacks matched (the task lists no context: / research: files), include all stacks as a fallback.

TDD instruction: "Follow TDD cycle: RED (write failing tests first), GREEN (minimal implementation to pass), REFACTOR (clean up with tests as safety net). Write structured Task Notes in your final message under a ## Task Notes heading."
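The stack-resolution logic from the steps above (v2 normalization, single-stack fast path, path overlap, all-stacks fallback) can be sketched like this; the function names are illustrative, not part of the plugin:

```python
import os

def normalize_stacks(config: dict) -> list[dict]:
    """v3 config carries a stacks array; v2 has a single config.stack string."""
    if "stacks" in config:
        return config["stacks"]
    return [{"name": config.get("stack", "unknown"), "path": "."}]

def match_stacks(stacks: list[dict], task_files: list[str]) -> list[dict]:
    """Collect stacks whose path prefixes at least one task file.
    A path of "." matches everything. Falls back to all stacks when
    nothing matches (e.g. the task lists no context/research files)."""
    if len(stacks) == 1:
        return stacks                      # single-stack fast path
    matched = []
    for stack in stacks:
        root = stack["path"]
        if root == "." or any(
            os.path.normpath(f).startswith(os.path.normpath(root) + os.sep)
            for f in task_files
        ):
            matched.append(stack)
    return matched or stacks               # all-stacks fallback
```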
Model tier resolution: Use $IMPLEMENTATION_MODE:
- Economy mode: pass model: "sonnet" to implementer agents.
- Quality or premium mode: omit the model parameter (agents inherit the parent model).

Live progress -- TaskCreate: After assembling context packets, call TaskCreate for each pending task in the wave. Use the task ID as the title and the full task description line as the body.
Spawn parallel implementer agents:
Agent resolution (stack-specific fallback): Before spawning each implementer, resolve whether a stack-specific implementer exists. Check if plugins/bee/agents/stacks/{stack.name}/implementer.md exists. If yes, use {stack.name}-implementer as the agent name. If no, fallback to the generic implementer agent.
Live progress -- TaskUpdate in-progress: Before spawning agents, call TaskUpdate to set ALL pending tasks in the wave to in-progress status.
Spawn ALL pending tasks in the current wave simultaneously using the Task tool. Each task becomes one parallel agent invocation.
CRITICAL: Spawn all agents in the wave at the same time using simultaneous Task tool calls. Do NOT wait for one agent to finish before spawning the next.
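As an analogy only -- the real mechanism is simultaneous Task tool calls, not threads -- the wave-at-a-time fan-out pattern looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def run_wave(tasks: list[dict], run_agent) -> dict:
    """Fan out every pending task in the wave at once, then gather all
    results before the conductor writes anything back to TASKS.md."""
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {t["id"]: pool.submit(run_agent, t) for t in tasks}
        return {tid: f.result() for tid, f in futures.items()}
```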
Collect results and handle outcomes per agent:
As each implementer agent completes, process its result:
On success (agent completed with task notes):
- Extract the agent's structured notes (the ## Task Notes section)
- Mark the task [x] in TASKS.md (match either [ ] for pending tasks or [FAILED] for retried tasks)
- Write the extracted notes into the task's notes: section in TASKS.md

On failure (agent did not complete successfully):
- Mark the task [FAILED] in TASKS.md
- Record the failure reason in the task's notes: section

IMPORTANT: The conductor is the SOLE writer to TASKS.md. Agents report notes in their final message; the conductor extracts and writes them. This prevents parallel write conflicts.
IMPORTANT: Always re-read TASKS.md from disk before each write (Read-Modify-Write pattern).
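The Read-Modify-Write discipline above can be sketched as follows (the file layout and checkbox marker format are assumptions):

```python
from pathlib import Path
import re

def mark_task(tasks_path: Path, task_id: str, done: bool) -> None:
    """Conductor-only Read-Modify-Write: re-read TASKS.md from disk,
    flip one task's marker, write the whole file back. Agents never
    write this file; they only report notes in their final message."""
    text = tasks_path.read_text()          # fresh read, never cached
    marker = "[x]" if done else "[FAILED]"
    # Match "[ ]" or "[FAILED]" immediately before this exact task ID
    pattern = re.compile(r"\[( |FAILED)\](?= %s\b)" % re.escape(task_id))
    tasks_path.write_text(pattern.sub(marker, text, count=1))
```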
After all agents in the wave complete:
- Re-read .bee/STATE.md from disk.
- Update progress to Wave {M}/{total_waves}.
- Record the last command as /bee:ship.

If any task in the wave was marked [FAILED]:
Proceed to next wave. Repeat until all waves are processed.
3a.5: Mark Phase as EXECUTED
After all waves complete:
- Re-read .bee/STATE.md from disk.
- Set the phase Status to EXECUTED.
- Set the Executed column to Yes.
- Record the last command as /bee:ship.

Display: "Phase {N} executed. {completed} tasks complete, {failed} failed. Starting review..."
3b. Phase Review Loop (for all qualifying phases)
Run the autonomous review pipeline for this phase. Ship auto-fixes ALL finding categories (including STYLISTIC) without user interaction. This uses the review pipeline from review.md (Steps 3.5-8) but operates fully autonomously.
Initialize: $REVIEW_ITERATION = 1.
3b.1: Build & Test Gate (non-interactive)
Run the Build & Test Gate from review.md Step 3.5, but WITHOUT any AskUserQuestion:
Build check (automatic, per-stack):
For each stack in config.stacks, scoped to its path:
Check package.json for a build script within {stack.path} and run it if present.

Test check (automatic, per-stack -- no user prompt):
For each stack in config.stacks, resolve its test runner: read stacks[i].testRunner first, fall back to root config.testRunner if absent, then "none".
For each stack:
- If the runner is "none", display "Tests: {stack.name}: skipped (no test runner configured)" and continue.
- vitest: cd {stack.path} && npx vitest run
- jest: cd {stack.path} && npx jest --maxWorkers=auto
- pest: cd {stack.path} && ./vendor/bin/pest --parallel

3b.2: Context Cache (read once, pass to all review agents)
Before spawning any review agents, read these files once and include their content in every agent's context packet:
- plugins/bee/skills/stacks/{stack}/SKILL.md
- .bee/CONTEXT.md
- .bee/false-positives.md
- .bee/user.md

Pass these as part of the agent's prompt context -- agents should NOT re-read these files themselves.
3b.3: Extract False Positives
Read .bee/false-positives.md using the Read tool. If it exists, include this block in every review agent packet:

EXCLUDE these documented false positives from your findings:
- FP-001: {summary} ({file}, {reason})
...
If the file does not exist, include instead: "No documented false positives."

3b.4: Dependency Scan
Before spawning review agents, expand the file scope:
- For each modified file, scan its import/require/use statements to find its dependencies (files it imports).
- Search for files that import/require any modified file to find its consumers (files that import it).
- Locate associated test files: {name}.test.{ext}, {name}.spec.{ext}, tests/{name}.{ext}, __tests__/{name}.{ext}.

3b.5: Update STATE.md to REVIEWING
- Re-read .bee/STATE.md from disk (fresh read).
- Set the phase Status to REVIEWING.
- Record the last command as /bee:ship.

Display: "Phase {N}: Starting autonomous review (iteration {$REVIEW_ITERATION}/{$MAX_REVIEW_ITERATIONS})..."
3b.6: Spawn 4-Agent Review Pipeline
Build context packets for four review agents using the same multi-stack logic as review.md Step 4:
Agent resolution (stack-specific fallback): For each per-stack agent, check if a stack-specific variant exists at plugins/bee/agents/stacks/{stack.name}/{role}.md. If yes, use {stack.name}-{role} as the agent name. If no, fallback to generic bee:{role}.
Per-stack Agent: Bug Detector (one per stack)
You are reviewing Phase {N} implementation for bugs and security issues.
Spec: {spec.md path}
TASKS.md: {TASKS.md path}
Phase directory: {phase_directory}
Phase number: {N}
Stack: {stack.name}
{Context Cache content: stack skill, CONTEXT.md, user.md}
{false-positives list}
Read TASKS.md to find the files created/modified by this phase. Scope your file search to files within the `{stack.path}` directory. Review those files for bugs, logic errors, null handling issues, race conditions, edge cases, and security vulnerabilities (OWASP). If a project-level CLAUDE.md exists at the project root, read it for project-specific overrides (CLAUDE.md takes precedence over stack skill for project-specific conventions).
Apply the Review Quality Rules from the review skill: same-class completeness (scan ALL similar constructs when finding one bug), edge case enumeration (verify loop bounds, all checkbox states, null paths), and crash-path tracing (for each state write, trace what happens if the session crashes here).
Report only HIGH confidence findings in your standard output format.
{Dependency scan instruction}
Per-stack Agent: Pattern Reviewer (one per stack)
You are reviewing Phase {N} implementation for pattern deviations.
Spec: {spec.md path}
TASKS.md: {TASKS.md path}
Phase directory: {phase_directory}
Phase number: {N}
Stack: {stack.name}
{Context Cache content: stack skill, CONTEXT.md, user.md}
{false-positives list}
Read TASKS.md to find the files created/modified by this phase. Scope your file search to files within the `{stack.path}` directory. For each file, find 2-3 similar existing files in the codebase, extract their patterns, and compare. If a project-level CLAUDE.md exists at the project root, read it for project-specific overrides.
Apply same-class completeness: when you find a pattern deviation in one location, scan ALL similar constructs across the codebase for the same deviation. Report ALL instances, not just the first.
Report only HIGH confidence deviations in your standard output format.
{Dependency scan instruction}
Per-stack Agent: Stack Reviewer (one per stack)
You are reviewing Phase {N} implementation for stack best practice violations.
Spec: {spec.md path}
TASKS.md: {TASKS.md path}
Phase directory: {phase_directory}
Phase number: {N}
{Context Cache content: stack skill, CONTEXT.md, user.md}
{false-positives list}
The stack for this review pass is `{stack.name}`. Load the stack skill at `skills/stacks/{stack.name}/SKILL.md` and check all code within the `{stack.path}` directory against that stack's conventions. If a project-level CLAUDE.md exists at the project root, read it for project-specific overrides (CLAUDE.md takes precedence over stack skill). Use Context7 to verify framework best practices. Report only HIGH confidence violations in your standard output format.
{Dependency scan instruction}
Global Agent: Plan Compliance Reviewer (spawned ONCE globally)
Before building the packet, check if {spec-path}/requirements.md exists on disk. Set the requirements line:
- If it exists: Requirements: {spec-path}/requirements.md
- If not: Requirements: (not found -- skip requirement tracking)

You are reviewing Phase {N} implementation in CODE REVIEW MODE (not plan review mode).
Spec: {spec.md path}
TASKS.md: {TASKS.md path}
Requirements: {spec-path}/requirements.md OR (not found -- skip requirement tracking)
Phase directory: {phase_directory}
Phase number: {N}
{Context Cache content: stack skill, CONTEXT.md, user.md}
{false-positives list}
Review mode: code review. Check implemented code against spec requirements and acceptance criteria. Verify every acceptance criterion in TASKS.md has corresponding implementation. Check for missing features, incorrect behavior, and over-scope additions. If phase > 1, also check cross-phase integration (imports, data contracts, workflow connections, shared state). If a project-level CLAUDE.md exists at the project root, read it for project-specific overrides. Report findings in your standard code review mode output format.
Spawn agents:
Economy mode ($IMPLEMENTATION_MODE: "economy"): Pass model: "sonnet" for all agents. Spawn agents sequentially, one stack at a time.
Quality or Premium mode: Spawn ALL agents via Task tool calls in a SINGLE message (parallel execution). Omit the model parameter for all agents (they inherit the parent model).
Wait for all agents to complete.
3b.7: Parse, Deduplicate, Write REVIEW.md
After all agents complete, consolidate findings using the same logic as review.md Steps 4.3-4.5:
Parse findings from each agent's final message:
Deduplicate and merge: For each pair of findings from different agents, check if they reference the same file AND their line ranges overlap (within 5 lines). If so, merge (keep higher severity, combine categories/descriptions, use broader line range).
Assign IDs and write REVIEW.md: Write {phase_directory}/REVIEW.md using the review-report template. Set iteration to {$REVIEW_ITERATION}, status to PENDING.
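The overlap-merge rule from the deduplication step above can be sketched as follows (the finding shape is an assumption; only the file/line-range/severity rule comes from this document):

```python
SEVERITY = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

def should_merge(a: dict, b: dict, tolerance: int = 5) -> bool:
    """Same file AND line ranges overlap or come within `tolerance` lines."""
    if a["file"] != b["file"]:
        return False
    return a["start"] <= b["end"] + tolerance and b["start"] <= a["end"] + tolerance

def merge(a: dict, b: dict) -> dict:
    """Keep the higher severity, combine categories, use the broader range."""
    hi = a if SEVERITY[a["severity"]] >= SEVERITY[b["severity"]] else b
    return {
        "file": a["file"],
        "start": min(a["start"], b["start"]),
        "end": max(a["end"], b["end"]),
        "severity": hi["severity"],
        "categories": sorted(set(a["categories"]) | set(b["categories"])),
    }
```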
Count total findings. If there are 0 findings, the review is clean: skip Steps 3b.8-3b.9 and exit the review loop (proceed to Step 3c).
3b.8: Validate Findings
For each finding in REVIEW.md:
- Note the finding's source_agent, then spawn the finding-validator agent via Task tool. Model selection: economy passes model: "sonnet", quality/premium omits model.

Escalate MEDIUM confidence classifications:
- Spawn another finding-validator agent for a second opinion (NOT the source specialist). Provide the original finding, the validator's uncertain classification, and request a second opinion.

Handle FALSE POSITIVE findings:
- If .bee/false-positives.md does not exist, create it with a # False Positives header.
- Append the finding to .bee/false-positives.md.

Handle STYLISTIC findings (autonomous -- no user interaction): Ship auto-fixes ALL STYLISTIC findings. Add every STYLISTIC finding to the confirmed fix list. Log the decision:
Build confirmed fix list: all REAL BUG findings + all STYLISTIC findings. Exclude FALSE POSITIVE findings.
Update REVIEW.md with all classifications.
3b.9: Fix Confirmed Issues
Fixer Parallelization Strategy:
For each file group:
Spawn a fixer agent via Task tool. Use the parent model (omit the model parameter) -- fixers write production code.

Display fix summary: "{fixed} fixed, {skipped} skipped, {failed} failed out of {total} confirmed findings."
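One plausible reading of the file-group strategy -- grouping confirmed findings by file so that no two fixers ever write the same file concurrently -- is the following sketch (not the plugin's actual code):

```python
from collections import defaultdict

def group_by_file(findings: list[dict]) -> list[list[dict]]:
    """One fixer agent per file group: findings touching the same file are
    handled sequentially by one agent; distinct files can run in parallel."""
    groups = defaultdict(list)
    for finding in findings:
        groups[finding["file"]].append(finding)
    return list(groups.values())
```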
3b.10: Re-Review Check
After fixing, check whether to re-review:
- If $REVIEW_ITERATION >= $MAX_REVIEW_ITERATIONS: stop the loop (max iterations reached) and proceed to Step 3c.
- Otherwise: increment $REVIEW_ITERATION, archive the current review as {phase_directory}/REVIEW-{previous_iteration}.md, and repeat from Step 3b.1 (.bee/false-positives.md may have new entries from this iteration).

3c. Update STATE.md as REVIEWED
After the review loop completes for this phase:
- Re-read .bee/STATE.md from disk (fresh read).
- Set the Reviewed column to Yes ({$REVIEW_ITERATION}) (the iteration that produced the clean or final review).
- Set the phase Status to REVIEWED.
- Record the last command as /bee:ship.

3d. Inter-Phase Progress Summary
After each phase completes (execution + review), display a combined progress summary before moving to the next phase:
Phase {N} complete: {phase_name}
Execution: {completed_tasks}/{total_tasks} tasks ({failed_tasks} failed)
Review: {$REVIEW_ITERATION} iteration(s), {total_findings} findings ({fixed} fixed, {false_positives} FP, {unresolved} unresolved)
Overall progress: {phases_done}/{total_phases} phases shipped
3e. Proceed to Next Phase
Move to the next phase in phase order. Repeat from Step 3a.
After ALL qualifying phases have been individually executed and reviewed, run the final implementation review if enabled.
4a. Check final_review config
If $FINAL_REVIEW_ENABLED is false: skip the final review and go directly to Step 5.
If $FINAL_REVIEW_ENABLED is true: continue with Step 4b.
4b. Run review-implementation in Full Spec Mode
Run the review-implementation pipeline (Steps 2-7 from review-implementation.md) autonomously. This is a single-pass review covering all executed phases together.
Context Detection: Full spec mode applies (spec exists and phases have been executed).
Context Cache (read once, pass to all agents):
- plugins/bee/skills/stacks/{stack}/SKILL.md
- .bee/CONTEXT.md
- .bee/false-positives.md
- .bee/user.md

Extract False Positives: Re-extract from .bee/false-positives.md (includes all FPs documented during per-phase reviews).
Dependency Scan: Expand file scope using the same logic as Step 3b.4, but across ALL executed phases.
Spawn review agents in Full Spec Mode:
Collect all executed phase directory paths (phases with status EXECUTED, REVIEWED, TESTED, or COMMITTED).
Build agent context packets following review-implementation.md Step 4.1:
Per-stack Agent: Bug Detector (full spec mode context)
You are reviewing the FULL PROJECT implementation for bugs and security issues. This is a project-scope review across all executed phases, not a single-phase review.
Spec: {spec.md path}
Executed phases:
- Phase {N}: {phase_directory_path}
...
Stack: {stack.name}
{Context Cache content}
{false-positives list}
For EACH executed phase, read its TASKS.md to find the files created/modified. Scope your file search to files within the `{stack.path}` directory. Review those files for bugs, logic errors, null handling issues, race conditions, edge cases, and security vulnerabilities (OWASP). If a project-level CLAUDE.md exists at the project root, read it for project-specific overrides.
Apply the Review Quality Rules from the review skill: same-class completeness (scan ALL similar constructs when finding one bug), edge case enumeration (verify loop bounds, all checkbox states, null paths), and crash-path tracing (for each state write, trace what happens if the session crashes here).
Report only HIGH confidence findings in your standard output format.
Per-stack Agent: Pattern Reviewer (full spec mode context)
You are reviewing the FULL PROJECT implementation for pattern deviations. This is a project-scope review across all executed phases, not a single-phase review.
Spec: {spec.md path}
Executed phases:
- Phase {N}: {phase_directory_path}
...
Stack: {stack.name}
{Context Cache content}
{false-positives list}
For EACH executed phase, read its TASKS.md to find the files created/modified. Scope your file search to files within the `{stack.path}` directory. For each file, find 2-3 similar existing files in the codebase, extract their patterns, and compare. If a project-level CLAUDE.md exists at the project root, read it for project-specific overrides.
Apply same-class completeness: when you find a pattern deviation in one location, scan ALL similar constructs across the codebase for the same deviation. Report ALL instances, not just the first.
Report only HIGH confidence deviations in your standard output format.
Per-stack Agent: Stack Reviewer (full spec mode context)
You are reviewing the FULL PROJECT implementation for stack best practice violations. This is a project-scope review across all executed phases, not a single-phase review.
Spec: {spec.md path}
Executed phases:
- Phase {N}: {phase_directory_path}
...
{Context Cache content}
{false-positives list}
The stack for this review pass is `{stack.name}`. For EACH executed phase, read its TASKS.md to find the files created/modified. Load the stack skill at `skills/stacks/{stack.name}/SKILL.md` and check all code within the `{stack.path}` directory against that stack's conventions. If a project-level CLAUDE.md exists at the project root, read it for project-specific overrides. Use Context7 to verify framework best practices. Report only HIGH confidence violations in your standard output format.
Global Agent: Plan Compliance Reviewer (full spec mode context)
You are reviewing the FULL PROJECT implementation in CODE REVIEW MODE (not plan review mode). This is a project-scope review across ALL executed phases.
Spec: {spec.md path}
Requirements: {spec-path}/requirements.md OR (not found -- skip requirement tracking)
Executed phases:
- Phase {N}: {phase_directory_path}
...
{Context Cache content}
{false-positives list}
Review mode: code review. Check implemented code against spec requirements and acceptance criteria across ALL executed phases. For EACH phase, read its TASKS.md and verify every acceptance criterion has corresponding implementation. Check for missing features, incorrect behavior, and over-scope additions. CRITICAL: Check cross-phase integration across ALL executed phases (not just adjacent phases) -- verify imports, data contracts, workflow connections, and shared state consistency between every pair of phases. If a project-level CLAUDE.md exists at the project root, read it for project-specific overrides. Report findings in your standard code review mode output format.
Global Agent: Audit Bug Detector (bee:audit-bug-detector) -- full spec mode only, spawned ONCE globally
You are tracing end-to-end feature flows across ALL executed phases to find bugs that category-specific reviewers miss.
Spec: {spec.md path}
Executed phases:
- Phase {N}: {phase_directory_path}
...
{Context Cache content}
{false-positives list}
Trace complete user flows from entry point to completion. For each flow:
1. Follow data from frontend to backend to database and back
2. Check that types, field names, and contracts match at every boundary
3. Verify error handling exists at every async boundary
4. Check that state transitions are complete (no missing status values)
5. Verify resume/crash recovery paths work end-to-end
Report bugs that span multiple files or phases -- the kind that single-file reviewers miss. Report only HIGH confidence findings in your standard output format.
Spawn agents: Spawn all agents (per-stack + plan-compliance-reviewer + audit-bug-detector) using the same economy/quality/premium mode logic as Step 3b.6. Total agents in full spec mode: (3 x N) + 2 where N is number of stacks. Wait for all agents to complete.
4c. Process Final Review Results
Parse, deduplicate, and write findings to {spec-path}/REVIEW-IMPLEMENTATION.md using the same consolidation logic as Step 3b.7.

4d. Update STATE.md
- Re-read .bee/STATE.md from disk (fresh read).
- Record the last command as /bee:ship.

Read the final state from disk. Build and display the completion summary.
5a. Per-Phase Stats Table
For each phase that was processed, display:
Ship complete!
Per-phase summary:
- Phase {N}: {name}
Tasks: {completed}/{total} completed ({failed} failed)
Review: {review_iterations} iteration(s), {findings} findings ({fixed} fixed, {fp} FP, {unresolved} unresolved)
Status: {final_status}
{Repeat for each phase}
Final implementation review: {clean | {X} findings -- {Y} fixed, {Z} unresolved | skipped}
5b. Decision Log Presentation
Read the Decisions Log section from STATE.md. Present all decisions logged during this ship run:
Decisions made during ship:
{For each decision entry logged during this run:}
- [{type}]: {what}
Why: {why}
Alternative: {alternative}
If no decisions were logged: "No autonomous decisions were needed -- clean run."
5c. Final Review Result
If $FINAL_REVIEW_ENABLED was true, display the final implementation review result (clean, or the findings summary from REVIEW-IMPLEMENTATION.md).
5d. Exit Menu
Present the completion menu using AskUserQuestion:
AskUserQuestion(
question: "Ship complete. {X} phases shipped, {Y} tasks completed, {Z} decisions logged.",
options: ["Commit", "Re-review phase", "Custom"]
)
- Commit: /bee:commit -- suggest the user commit the shipped changes
- Re-review phase: /bee:review --phase {N} for a manual interactive re-review

Design Notes (do not display to user):
ship.max_review_iterations (NOT review.max_loop_iterations). These are deliberately separate settings: review.max_loop_iterations controls the interactive review command's loop behavior; ship.max_review_iterations controls the autonomous ship review loop. This separation allows users to configure different thresholds for interactive vs. autonomous review.