Generates structured implementation plans from feature descriptions or requirements, grounded in repo patterns and research. Deepens existing plans via interactive sub-agent review.
**Note: The current year is 2026.** Use this when dating plans and searching for recent documentation.
ce-brainstorm defines WHAT to build. ce-plan defines HOW to build it. ce-work executes the plan.
This workflow produces a durable implementation plan. It does not implement code, run tests, or learn from execution-time results. If the answer depends on changing code and seeing what happens, that belongs in ce-work, not here.
Use the platform's question tool when available. When asking the user a question, prefer the platform's blocking question tool if one exists (ask_user in Copilot CLI). Otherwise, present numbered options in chat and wait for the user's reply before proceeding.
Ask one question at a time. Prefer a concise single-select choice when natural options exist.
<feature_description> #$ARGUMENTS </feature_description>
If the feature description above is empty, ask the user: "What would you like to plan? Please describe the feature, bug fix, or improvement you have in mind."
Do not proceed until you have a clear planning input.
IMPORTANT: All file references in the plan document must use repo-relative paths (e.g., src/models/user.rb), never absolute paths (e.g., /Users/name/Code/project/src/models/user.rb). This applies everywhere — implementation unit file lists, pattern references, origin document links, and prose mentions. Absolute paths break portability across machines, worktrees, and teammates.
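As a sketch of the normalization this implies (the repo root here is hard-coded so the example is self-contained; in a real checkout it would come from `git rev-parse --show-toplevel`):

```shell
# Convert an absolute path to a repo-relative one by stripping the repo root.
root="/Users/name/Code/project"
abs="/Users/name/Code/project/src/models/user.rb"
rel=${abs#"$root"/}
echo "$rel"   # src/models/user.rb
```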
If ce-brainstorm produced a requirements document, planning should build from it rather than re-inventing behavior.

Every plan should contain:
A plan is ready when an implementer can start confidently without needing the plan to write the code for them.
If the user references an existing plan file or there is an obvious recent matching plan in docs/plans/:
Deepen intent: The word "deepen" (or "deepening") in reference to a plan is the primary trigger for the deepening fast path. When the user says "deepen the plan", "deepen my plan", "run a deepening pass", or similar, the target document is a plan in docs/plans/, not a requirements document. Use any path, keyword, or context the user provides to identify the right plan. If a path is provided, verify it is actually a plan document. If the match is not obvious, confirm with the user before proceeding.
Words like "strengthen", "confidence", "gaps", and "rigor" are NOT sufficient on their own to trigger deepening. These words appear in normal editing requests ("strengthen that section about the diagram", "there are gaps in the test scenarios") and should not cause a holistic deepening pass. Only treat them as deepening intent when the request clearly targets the plan as a whole and does not name a specific section or content area to change — and even then, prefer to confirm with the user before entering the deepening flow.
Once the plan is identified and appears complete (all major sections present, implementation units defined, status: active), short-circuit to Phase 5.3 (Confidence Check and Deepening) in interactive mode. This avoids re-running the full planning workflow and gives the user control over which findings are integrated.
Normal editing requests (e.g., "update the test scenarios", "add a new implementation unit", "strengthen the risk section") should NOT trigger the fast path — they follow the standard resume flow.
If the plan already has a deepened: YYYY-MM-DD frontmatter field and there is no explicit user request to re-deepen, the fast path still applies the same confidence-gap evaluation — it does not force deepening.
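One way to detect an existing `deepened:` field, assuming the frontmatter is the leading `---` block as in this skill's plan template (a temp file stands in for the plan path):

```shell
# Extract the deepened: date from the plan's YAML frontmatter, if present.
plan=$(mktemp)
printf -- '---\ntitle: Example Plan\nstatus: active\ndeepened: 2026-01-20\n---\n# Example Plan\n' > "$plan"

# Only look inside the leading --- ... --- block.
deepened=$(sed -n '/^---$/,/^---$/s/^deepened:[[:space:]]*//p' "$plan")
[ -n "$deepened" ] && echo "already deepened on $deepened"
```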
Before asking planning questions, search docs/brainstorms/ for files matching *-requirements.md.
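A minimal search sketch (a temp directory stands in for `docs/brainstorms/` so the example is self-contained):

```shell
# List candidate requirements documents, newest first by dated filename.
dir=$(mktemp -d)   # stand-in for docs/brainstorms/
touch "$dir/2026-01-10-auth-requirements.md" \
      "$dir/2026-02-01-checkout-requirements.md" \
      "$dir/notes.md"

matches=$(find "$dir" -maxdepth 1 -name '*-requirements.md' | sort -r)
printf '%s\n' "$matches"
```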
Relevance criteria: A requirements document is relevant if:
If multiple source documents match, ask which one to use using the platform's blocking question tool when available (see Interaction Method). Otherwise, present numbered options in chat and wait for the user's reply before proceeding.
If a relevant requirements document exists:
`(see origin: <source-path>)`

If no relevant requirements document exists, planning may proceed from the user's request directly.
If no relevant requirements document exists:
Suggest running `ce-brainstorm` first.

The planning bootstrap should establish:
Keep this bootstrap brief. It exists to preserve direct-entry convenience, not to replace a full brainstorm.
If the bootstrap uncovers major unresolved product questions:
Suggest `ce-brainstorm` again.

If the origin document contains Resolve Before Planning or similar blocking questions:
If true product blockers remain:
Route them to `ce-brainstorm` to resolve them.

Classify the work into one of these plan depths:
If depth is unclear, ask one targeted question and then continue.
Prepare a concise planning context summary (a paragraph or two) to pass as input to the research agents:
Run these agents in parallel:
Collect:
- Relevant `docs/solutions/` learnings

Decide whether the plan should carry a lightweight execution posture signal.
Look for signals such as:
Apply `Execution target: external-delegate` to implementation units that are pure code writing.

When the signal is clear, carry it forward silently in the relevant implementation units.
Ask the user only if the posture would materially change sequencing or risk and cannot be responsibly inferred.
Based on the origin document, user signals, and local findings, decide whether external research adds value.
Read between the lines. Pay attention to signals from the conversation so far:
Leverage repo-research-analyst's technology context:
The repo-research-analyst output includes a structured Technology & Infrastructure summary. Use it to make sharper external research decisions:
Always lean toward external research when:
Skip external research when:
Announce the decision briefly before continuing. Examples:
If Step 1.2 indicates external research is useful, run these agents in parallel:
Summarize:
If the current classification is Lightweight and Phase 1 research found that the work touches any of these external contract surfaces, reclassify to Standard:
- CI/CD configuration (`.github/workflows/`, `Dockerfile`, deployment scripts)

This ensures flow analysis (Phase 1.5) runs and the confidence check (Phase 5.3) applies critical-section bonuses. Announce the reclassification briefly: "Reclassifying to Standard — this change touches [environment variables / exported APIs / CI config] with external consumers."
For Standard or Deep plans, or when user flow completeness is still unclear, run:
Use the output to:
Build a planning question list from:
For each question, decide whether it should be:
Ask the user only when the answer materially affects architecture, scope, sequencing, or risk and cannot be responsibly inferred. Use the platform's blocking question tool when available (see Interaction Method).
Do not run tests, build the app, or probe runtime behavior in this phase. The goal is a strong plan, not partial execution.
- Title: `feat: Add user authentication` or `fix: Prevent checkout double-submit`
- Type: `feat`, `fix`, or `refactor`
- Path: `docs/plans/YYYY-MM-DD-NNN-<type>-<descriptive-name>-plan.md`
- Create `docs/plans/` if it does not exist
- Examples: `2026-01-15-001-feat-user-authentication-flow-plan.md`, `2026-02-03-002-fix-checkout-race-condition-plan.md`

For Standard or Deep plans, briefly consider who is affected by this change — end users, developers, operations, other teams — and how that should shape the plan. For cross-cutting work, note affected parties in the System-Wide Impact section.
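A sketch of how the `NNN` sequence number can be derived (a temp directory and seeded files stand in for `docs/plans/`; numbering continuing across all dates is an assumption):

```shell
# Build the next plan filename: YYYY-MM-DD-NNN-<type>-<name>-plan.md.
plans_dir=$(mktemp -d)   # stand-in for docs/plans/
touch "$plans_dir/2026-01-15-001-feat-user-authentication-flow-plan.md" \
      "$plans_dir/2026-02-03-002-fix-checkout-race-condition-plan.md"

today="2026-02-10"       # in practice: today=$(date +%F)
type="feat"
name="example-feature"

# Highest existing NNN across all plans, defaulting to 000 for an empty dir.
last=$(ls "$plans_dir" \
  | sed -n 's/^[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}-\([0-9]\{3\}\)-.*/\1/p' \
  | sort -n | tail -1)
next=$(printf '%03d' $(( 10#${last:-0} + 1 )))

filename="$today-$next-$type-$name-plan.md"
echo "$filename"   # 2026-02-10-003-feat-example-feature-plan.md
```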
Break the work into logical implementation units. Each unit should represent one meaningful change that an implementer could typically land as an atomic commit.
Good units are:
Avoid:
Before detailing implementation units, decide whether an overview would help a reviewer validate the intended approach. This section communicates the shape of the solution — how pieces fit together — without dictating implementation.
When to include it:
| Work involves... | Best overview form |
|---|---|
| DSL or API surface design | Pseudo-code grammar or contract sketch |
| Multi-component integration | Mermaid sequence or component diagram |
| Data pipeline or transformation | Data flow sketch |
| State-heavy lifecycle | State diagram |
| Complex branching logic | Flowchart |
| Mode/flag combinations or multi-input behavior | Decision matrix (inputs -> outcomes) |
| Single-component with non-obvious shape | Pseudo-code sketch |
When to skip it:
Choose the medium that fits the work. Do not default to pseudo-code when a diagram communicates better, and vice versa.
Frame every sketch with: "This illustrates the intended approach and is directional guidance for review, not implementation specification. The implementing agent should treat it as context, not code to reproduce."
Keep sketches concise — enough to validate direction, not enough to copy-paste into production.
For each unit, include:
Use `Test expectation: none -- [reason]` instead of leaving the field blank.
Every feature-bearing unit should include the test file path in **Files:**.
Use Execution note sparingly. Good uses include:
- `Execution note: Start with a failing integration test for the request/response contract.`
- `Execution note: Add characterization coverage before modifying this legacy parser.`
- `Execution note: Implement new domain behavior test-first.`
- `Execution note: Execution target: external-delegate`

Do not expand units into literal RED/GREEN/REFACTOR substeps.
If something is important but not knowable yet, record it explicitly under deferred implementation notes rather than pretending to resolve it in the plan.
Examples:
Use one planning philosophy across all depths. Change the amount of detail, not the boundary between planning and execution.
Lightweight
Standard
Deep
For sufficiently large, risky, or cross-cutting work, add the sections that genuinely help:
Do not add these as boilerplate. Include them only when they improve execution quality or stakeholder alignment.
Omit clearly inapplicable optional sections, especially for Lightweight plans.
---
title: [Plan Title]
type: [feat|fix|refactor]
status: active
date: YYYY-MM-DD
origin: docs/brainstorms/YYYY-MM-DD-<topic>-requirements.md # include when planning from a requirements doc
deepened: YYYY-MM-DD # optional, set when the confidence check substantively strengthens the plan
---
# [Plan Title]
## Overview
[What is changing and why]
## Problem Frame
[Summarize the user/business problem and context. Reference the origin doc when present.]
## Requirements Trace
- R1. [Requirement or success criterion this plan must satisfy]
- R2. [Requirement or success criterion this plan must satisfy]
## Scope Boundaries
- [Explicit non-goal or exclusion]
## Context & Research
### Relevant Code and Patterns
- [Existing file, class, component, or pattern to follow]
### Institutional Learnings
- [Relevant `docs/solutions/` insight]
### External References
- [Relevant external docs or best-practice source, if used]
## Key Technical Decisions
- [Decision]: [Rationale]
## Open Questions
### Resolved During Planning
- [Question]: [Resolution]
### Deferred to Implementation
- [Question or unknown]: [Why it is intentionally deferred]
<!-- Optional: Include this section only when the work involves DSL design, multi-component
integration, complex data flow, state-heavy lifecycle, or other cases where prose alone
would leave the approach shape ambiguous. Omit it entirely for well-patterned or
straightforward work. -->
## High-Level Technical Design
> *This illustrates the intended approach and is directional guidance for review, not implementation specification. The implementing agent should treat it as context, not code to reproduce.*
[Pseudo-code grammar, mermaid diagram, data flow sketch, or state diagram — choose the medium that best communicates the solution shape for this work.]
## Implementation Units
- [ ] **Unit 1: [Name]**
**Goal:** [What this unit accomplishes]
**Requirements:** [R1, R2]
**Dependencies:** [None / Unit 1 / external prerequisite]
**Files:**
- Create: `path/to/new_file`
- Modify: `path/to/existing_file`
- Test: `path/to/test_file`
**Approach:**
- [Key design or sequencing decision]
**Execution note:** [Optional test-first, characterization-first, external-delegate, or other execution posture signal]
**Technical design:** *(optional -- pseudo-code or diagram when the unit's approach is non-obvious. Directional guidance, not implementation specification.)*
**Patterns to follow:**
- [Existing file, class, or pattern]
**Test scenarios:**
<!-- Include only categories that apply to this unit. Omit categories that don't. For units with no behavioral change, use "Test expectation: none -- [reason]" instead of leaving this section blank. -->
- [Scenario: specific input/action -> expected outcome. Prefix with category — Happy path, Edge case, Error path, or Integration — to signal intent]
**Verification:**
- [Outcome that should hold when this unit is complete]
## System-Wide Impact
- **Interaction graph:** [What callbacks, middleware, observers, or entry points may be affected]
- **Error propagation:** [How failures should travel across layers]
- **State lifecycle risks:** [Partial-write, cache, duplicate, or cleanup concerns]
- **API surface parity:** [Other interfaces that may require the same change]
- **Integration coverage:** [Cross-layer scenarios unit tests alone will not prove]
- **Unchanged invariants:** [Existing APIs, interfaces, or behaviors that this plan explicitly does not change — and how the new work relates to them. Include when the change touches shared surfaces and reviewers need blast-radius assurance]
## Risks & Dependencies
| Risk | Mitigation |
|------|------------|
| [Meaningful risk] | [How it is addressed or accepted] |
## Documentation / Operational Notes
- [Docs, rollout, monitoring, or support impacts when relevant]
## Sources & References
- **Origin document:** [docs/brainstorms/YYYY-MM-DD-<topic>-requirements.md](path)
- Related code: [path or symbol]
- Related PRs/issues: #[number]
- External docs: [url]
For larger Deep plans, extend the core template only when useful with sections such as:
## Alternative Approaches Considered
- [Approach]: [Why rejected or not chosen]
## Success Metrics
- [How we will know this solved the intended problem]
## Dependencies / Prerequisites
- [Technical, organizational, or rollout dependency]
## Risk Analysis & Mitigation
| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|------------|
| [Risk] | [Low/Med/High] | [Low/Med/High] | [How addressed] |
## Phased Delivery
### Phase 1
- [What lands first and why]
### Phase 2
- [What follows and why]
## Documentation Plan
- [Docs or runbooks to update]
## Operational / Rollout Notes
- [Monitoring, migration, feature flag, or rollout considerations]
Never write absolute paths like `/Users/name/Code/project/src/file.ts`. Use `src/file.ts` instead. Absolute paths make plans non-portable across machines, worktrees, and teammates. When a plan targets a different repo than the document's home, state the target repo once at the top of the plan (e.g., `**Target repo:** my-other-project`) and use repo-relative paths throughout.

- Use `- [ ]` syntax for progress tracking
- Do not include RED/GREEN/REFACTOR instructions

Section 3.4 covers diagrams about the solution being planned (pseudo-code, mermaid sequences, state diagrams). The existing Section 4.3 mermaid rule encourages those solution-design diagrams within Technical Design and per-unit fields. This guidance covers a different concern: visual aids that help readers navigate and comprehend the plan document itself -- dependency graphs, interaction diagrams, and comparison tables that make plan structure scannable.
Visual aids are conditional on content patterns, not on plan depth classification -- a Lightweight plan about a complex multi-unit workflow may warrant a dependency graph; a Deep plan about a straightforward feature may not.
When to include:
| Plan describes... | Visual aid | Placement |
|---|---|---|
| 4+ implementation units with non-linear dependencies (parallelism, diamonds, fan-in/fan-out) | Mermaid dependency graph | Before or after the Implementation Units heading |
| System-Wide Impact naming 3+ interacting surfaces or cross-layer effects | Mermaid interaction or component diagram | Within the System-Wide Impact section |
| Problem/Overview involving 3+ behavioral modes, states, or variants | Markdown comparison table | Within Overview or Problem Frame |
| Key Technical Decisions with 3+ interacting decisions, or Alternative Approaches with 3+ alternatives | Markdown comparison table | Within the relevant section |
When to skip:
Format selection:
- Use `TB` (top-to-bottom) direction so diagrams stay narrow in both rendered and source form. Source should be readable as a fallback in diff views and terminals.

After generating a visual aid, verify it accurately represents the plan sections it illustrates -- correct dependency edges, no missing surfaces, no merged units.
Before finalizing, check:
- Product blockers are routed to `ce-brainstorm`
- `Execution note` is used sparingly
- The `Test expectation: none -- [reason]` annotation is only valid for non-feature-bearing units (pure config, scaffolding, styling)

If the plan originated from a requirements document, re-read that document and verify:
- Unresolved product-defining gaps are routed back to `ce-brainstorm`

REQUIRED: Write the plan file to disk before presenting any options.
Use the Write tool to save the complete plan to:
docs/plans/YYYY-MM-DD-NNN-<type>-<descriptive-name>-plan.md
Confirm:
Plan written to docs/plans/[filename]
Pipeline mode: If invoked from an automated workflow such as LFG, SLFG, or any disable-model-invocation context, skip interactive questions. Make the needed choices automatically and proceed to writing the plan.
After writing the plan file, automatically evaluate whether the plan needs strengthening.
Two deepening modes:
Interactive mode exists because on-demand deepening is a different user posture — the user already has a plan they are invested in and wants to be surgical about what changes. This applies whether the plan was generated by this skill, written by hand, or produced by another tool.
document-review and this confidence check are different:
Use the document-review skill when the document needs clarity, simplification, completeness, or scope control.

Pipeline mode: This phase always runs in auto mode in pipeline/disable-model-invocation contexts. No user interaction needed.
Determine the plan depth from the document:
Build a risk profile. Treat these as high-risk signals:
If the plan already appears sufficiently grounded and the thin-grounding override does not apply, report "Confidence check passed — no sections need strengthening" and skip to Phase 5.3.8 (Document Review). Document-review always runs regardless of whether deepening was needed — the two tools catch different classes of issues.
Use a checklist-first, risk-weighted scoring pass.
For each section, compute:
- Critical-section bonus: applies to Key Technical Decisions, Implementation Units, System-Wide Impact, Risks & Dependencies, or Open Questions in Standard or Deep plans

Treat a section as a candidate if:
Choose only the top 2-5 sections by score. If deepening a lightweight plan (high-risk exception), cap at 1-2 sections.
If the plan already has a deepened: date:
Section Checklists:
Requirements Trace
Context & Research / Sources & References
Key Technical Decisions
Open Questions
High-Level Technical Design (when present)
High-Level Technical Design (when absent) (Standard or Deep plans only)
Implementation Units
(the `Test expectation: none` annotation is only valid for non-feature-bearing units)

System-Wide Impact
Risks & Dependencies / Documentation / Operational Notes
Use the plan's own Context & Research and Sources & References as evidence. If those sections cite a pattern, learning, or risk that never affects decisions, implementation units, or verification, treat that as a confidence gap.
Before dispatching agents, report what sections are being strengthened and why:
Strengthening [section names] — [brief reason for each, e.g., "decision rationale is thin", "cross-boundary effects aren't mapped"]
For each selected section, choose the smallest useful agent set. Do not run every agent. Use at most 1-3 agents per section and usually no more than 8 agents total.
Use fully-qualified agent names inside Task calls.
Deterministic Section-to-Agent Mapping:
Requirements Trace / Open Questions classification
- `compound-engineering:workflow:spec-flow-analyzer` for missing user flows, edge cases, and handoff gaps
- `compound-engineering:research:repo-research-analyst` (Scope: architecture, patterns) for repo-grounded patterns, conventions, and implementation reality checks

Context & Research / Sources & References gaps

- `compound-engineering:research:learnings-researcher` for institutional knowledge and past solved problems
- `compound-engineering:research:framework-docs-researcher` for official framework or library behavior
- `compound-engineering:research:best-practices-researcher` for current external patterns and industry guidance
- `compound-engineering:research:git-history-analyzer` only when historical rationale or prior art is materially missing

Key Technical Decisions

- `compound-engineering:review:architecture-strategist` for design integrity, boundaries, and architectural tradeoffs
- `compound-engineering:research:framework-docs-researcher` or `compound-engineering:research:best-practices-researcher` when the decision needs external grounding beyond repo evidence

High-Level Technical Design

- `compound-engineering:review:architecture-strategist` for validating that the technical design accurately represents the intended approach and identifying gaps
- `compound-engineering:research:repo-research-analyst` (Scope: architecture, patterns) for grounding the technical design in existing repo patterns and conventions
- `compound-engineering:research:best-practices-researcher` when the technical design involves a DSL, API surface, or pattern that benefits from external validation

Implementation Units / Verification

- `compound-engineering:research:repo-research-analyst` (Scope: patterns) for concrete file targets, patterns to follow, and repo-specific sequencing clues
- `compound-engineering:review:pattern-recognition-specialist` for consistency, duplication risks, and alignment with existing patterns
- `compound-engineering:workflow:spec-flow-analyzer` when sequencing depends on user flow or handoff completeness

System-Wide Impact

- `compound-engineering:review:architecture-strategist` for cross-boundary effects, interface surfaces, and architectural knock-on impact
- `compound-engineering:review:performance-oracle` for scalability, latency, throughput, and resource-risk analysis
- `compound-engineering:review:security-sentinel` for auth, validation, exploit surfaces, and security boundary review
- `compound-engineering:review:data-integrity-guardian` for migrations, persistent state safety, consistency, and data lifecycle risks

Risks & Dependencies / Operational Notes

- `compound-engineering:review:security-sentinel` for security, auth, privacy, and exploit risk
- `compound-engineering:review:data-integrity-guardian` for persistent data safety, constraints, and transaction boundaries
- `compound-engineering:review:data-migration-expert` for migration realism, backfills, and production data transformation risk
- `compound-engineering:review:deployment-verification-agent` for rollout checklists, rollback planning, and launch verification
- `compound-engineering:review:performance-oracle` for capacity, latency, and scaling concerns

Agent Prompt Shape:
For each selected section, pass:
Instruct the agent to return:
Use the lightest mode that will work:
Signals that justify artifact-backed mode:
If artifact-backed mode is not clearly warranted, stay in direct mode.
Artifact-backed mode uses a per-run scratch directory under .context/compound-engineering/ce-plan/deepen/.
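For example (the run-id naming is an assumption; the skill only fixes the base path, and a temp root stands in for the repo root):

```shell
# Create a per-run scratch directory for artifact-backed deepening.
repo_root=$(mktemp -d)   # stand-in for the actual repo root
base="$repo_root/.context/compound-engineering/ce-plan/deepen"
run_id=$(date +%Y%m%d-%H%M%S)
scratch="$base/$run_id"
mkdir -p "$scratch"
echo "$scratch"
```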
Launch the selected agents in parallel using the execution mode chosen above. If the current platform does not support parallel dispatch, run them sequentially instead.
Prefer local repo and institutional evidence first. Use external research only when the gap cannot be closed responsibly from repo context or already-cited sources.
If a selected section can be improved by reading the origin document more carefully, do that before dispatching external agents.
Direct mode: Have each selected agent return its findings directly to the parent. Keep the return payload focused: strongest findings only, the evidence or sources that matter, the concrete planning improvement implied by the finding.
Artifact-backed mode: For each selected agent, instruct it to write one compact artifact file in the scratch directory and return only a short completion summary. Each artifact should contain: target section, why selected, 3-7 findings, source-backed rationale, the specific plan change implied by each finding. No implementation code, no shell commands.
If an artifact is missing or clearly malformed, re-run that agent or fall back to direct-mode reasoning for that section.
If agent outputs conflict:
Skip this step in auto mode — proceed directly to 5.3.7.
In interactive mode, present each agent's findings to the user before integration. For each agent that returned findings:
If the user chooses "Discuss", engage in brief dialogue about the findings and then re-ask with only accept/reject (no discuss option on the second ask). The user makes a deliberate choice either way.
When presenting findings from multiple agents targeting the same section, present them one agent at a time so the user can make independent decisions. Do not merge findings from different agents before showing them.
After all agents have been reviewed, carry only the accepted findings forward to 5.3.7.
If the user accepted no findings, report "No findings accepted — plan unchanged." If artifact-backed mode was used, clean up the scratch directory before continuing. Then proceed directly to Phase 5.4 (skip document-review and synthesis — the plan was not modified). This interactive-mode-only skip does not apply in auto mode; auto mode always proceeds through 5.3.7 and 5.3.8.
If findings were accepted and the plan was modified, proceed through 5.3.7 and 5.3.8 as normal — document-review acts as a quality gate on the changes.
Strengthen only the selected sections. Keep the plan coherent and preserve its overall structure.
In interactive mode: Only integrate findings the user accepted in 5.3.6b. If some findings from different agents touch the same section, reconcile them coherently but do not reintroduce rejected findings.
Allowed changes:
- Move items between Resolved During Planning and Deferred to Implementation when evidence supports the change
- Set `deepened: YYYY-MM-DD` in frontmatter when the plan was substantively improved

Do not:

- Add `Research Insights` subsections everywhere

If research reveals a product-level ambiguity that should change behavior or scope:

- Record it under Open Questions
- Route back to `ce-brainstorm` if the gap is truly product-defining

After the confidence check (and any deepening), run the document-review skill on the plan file. Pass the plan path as the argument. When this step is reached, it is mandatory — do not skip it because the confidence check already ran. The two tools catch different classes of issues.
The confidence check and document-review are complementary:
If document-review returns findings that were auto-applied, note them briefly when presenting handoff options. If residual P0/P1 findings were surfaced, mention them so the user can decide whether to address them before proceeding.
When document-review returns "Review complete", proceed to Final Checks.
Pipeline mode: If invoked from an automated workflow such as LFG, SLFG, or any disable-model-invocation context, run document-review with mode:headless and the plan path. Headless mode applies auto-fixes silently and returns structured findings without interactive prompts. Address any P0/P1 findings before returning control to the caller.
Before proceeding to post-generation options:
If artifact-backed mode was used:
Pipeline mode: If invoked from an automated workflow such as LFG, SLFG, or any disable-model-invocation context, skip the interactive menu below and return control to the caller immediately. The plan file has already been written, the confidence check has already run, and document-review has already run — the caller (e.g., lfg, slfg) determines the next step.
After document-review completes, present the options using the platform's blocking question tool when available (see Interaction Method). Otherwise present numbered options in chat and wait for the user's reply before proceeding.
Question: "Plan ready at docs/plans/YYYY-MM-DD-NNN-<type>-<name>-plan.md. What would you like to do next?"
Options:
- `/ce-work` - Begin implementing this plan in the current environment (recommended)
- `/ce-work in another session` - Begin implementing in a separate agent session when the current platform supports it

Based on selection:
- Open `docs/plans/<plan_filename>.md` using the current platform's file-open or editor mechanism (e.g., `open` on macOS, `xdg-open` on Linux, or the IDE's file-open API)
- Run the `document-review` skill with the plan path for another pass
- Share to Proof:

```shell
CONTENT=$(cat docs/plans/<plan_filename>.md)
TITLE="Plan: <plan title from frontmatter>"
RESPONSE=$(curl -s -X POST https://www.proofeditor.ai/share/markdown \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg title "$TITLE" --arg markdown "$CONTENT" --arg by "ai:compound" '{title: $title, markdown: $markdown, by: $by}')")
PROOF_URL=$(echo "$RESPONSE" | jq -r '.tokenUrl')
```

Display `View & collaborate in Proof: <PROOF_URL>` if successful, then return to the options.

- `/ce-work` → Call `/ce-work` with the plan path
- `/ce-work in another session` → If the current platform supports launching a separate agent session, start `/ce-work` with the plan path there. Otherwise, explain the limitation briefly and offer to run `/ce-work` in the current session instead.

When the user selects "Create Issue", detect their project tracker from AGENTS.md:
Look for project_tracker: github or project_tracker: linear
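A detection sketch, assuming a plain `project_tracker: <name>` line (a temp file stands in for `AGENTS.md`):

```shell
# Read the project_tracker key and pick the matching issue command.
agents_file=$(mktemp)    # stand-in for ./AGENTS.md
printf 'project_tracker: github\n' > "$agents_file"

tracker=$(sed -n 's/^project_tracker:[[:space:]]*//p' "$agents_file" | head -1)
case "$tracker" in
  github) cmd="gh issue create" ;;
  linear) cmd="linear issue create" ;;
  *)      cmd="" ;;  # no tracker configured: ask the user
esac
echo "${tracker:-none} -> ${cmd:-ask user}"   # github -> gh issue create
```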
If GitHub:
gh issue create --title "<type>: <title>" --body-file <plan_path>
If Linear:
linear issue create --title "<title>" --description "$(cat <plan_path>)"
If no tracker is configured:
- Ask the user, and offer to record the choice in `AGENTS.md` for future runs

After issue creation:

- Suggest `/ce-work` to begin implementation

NEVER CODE! Research, decide, and write the plan.