Use when decomposing a ticket epic into prioritized user stories with measurable done definitions, or when auditing and reconciling existing epic children before implementation
```shell
npx claudepluginhub navapbc/digital-service-orchestra --plugin dso-dev
```

This skill is limited to using the following tools:
<SUB-AGENT-GUARD>
Supporting files:

- docs/review-criteria.md
- docs/reviewers/accessibility.md
- docs/reviewers/maintainability.md
- docs/reviewers/performance.md
- docs/reviewers/reliability.md
- docs/reviewers/security.md
- docs/reviewers/testing.md
- prompts/blue-team-review.md
- prompts/red-team-review.md
- prompts/ui-designer-dispatch-protocol.md
- tests/test_adversarial_review_prompts.sh
"ERROR: /dso:preplanning cannot run in sub-agent context — it requires the Agent tool to dispatch its own sub-agents. Invoke this skill directly from the orchestrator instead."
Do NOT proceed with any skill logic if the Agent tool is unavailable.
At the very start of execution (immediately after passing the SUB-AGENT-GUARD check), emit the SKILL_ENTER breadcrumb:
```bash
_DSO_TRACE_SESSION_ID="${DSO_TRACE_SESSION_ID:-$(date +%s%N 2>/dev/null || date +%s)}"
_DSO_TRACE_SKILL_FILE="${CLAUDE_PLUGIN_ROOT}/skills/preplanning/SKILL.md"
_DSO_TRACE_FILE_SIZE=$(wc -c < "${_DSO_TRACE_SKILL_FILE}" 2>/dev/null || echo "null")
_DSO_TRACE_DEPTH="${DSO_TRACE_NESTING_DEPTH:-1}"
_DSO_TRACE_START_MS=$(date +%s%3N 2>/dev/null || echo "null")
_DSO_TRACE_SESSION_ORDINAL="${DSO_TRACE_SESSION_ORDINAL:-1}"
_DSO_TRACE_CUMULATIVE_BYTES="${DSO_TRACE_CUMULATIVE_BYTES:-null}"
echo "{\"type\":\"SKILL_ENTER\",\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null)\",\"skill_name\":\"preplanning\",\"nesting_depth\":${_DSO_TRACE_DEPTH},\"skill_file_size\":${_DSO_TRACE_FILE_SIZE},\"tool_call_count\":null,\"elapsed_ms\":null,\"session_ordinal\":${_DSO_TRACE_SESSION_ORDINAL},\"cumulative_bytes\":${_DSO_TRACE_CUMULATIVE_BYTES},\"termination_directive\":null,\"user_interaction_count\":0}" >> "/tmp/dso-skill-trace-${_DSO_TRACE_SESSION_ID}.log" || true
```
Immediately after the SKILL_ENTER breadcrumb, read the interactive mode flag:
```bash
PLUGIN_SCRIPTS="${CLAUDE_PLUGIN_ROOT}/scripts"
PREPLANNING_INTERACTIVE=$(bash "$PLUGIN_SCRIPTS/read-config.sh" preplanning.interactive 2>/dev/null || echo 'true') # shim-exempt: internal orchestration script
PREPLANNING_INTERACTIVE=$(echo "$PREPLANNING_INTERACTIVE" | tr '[:upper:]' '[:lower:]')
# Default: true (interactive) when the key is absent or empty
if [[ -z "$PREPLANNING_INTERACTIVE" ]]; then
  PREPLANNING_INTERACTIVE="true"
fi
```
Default is true (interactive) when the key is absent — new projects without the key should default to interactive mode. Only set preplanning.interactive=false in .claude/dso-config.conf for automated pipelines.
Act as a Senior Technical Product Manager (Google-style) to audit, reconcile, and decompose a ticket Epic into prioritized User Stories with measurable Done Definitions that bridge the epic's vision to task-level acceptance criteria.
Supports dryrun mode. Use /dso:dryrun /dso:preplanning to preview without changes.
```shell
/dso:preplanning                          # Interactive epic selection
/dso:preplanning <epic-id>                # Pre-plan specific epic
/dso:preplanning <epic-id> --lightweight  # Enrich epic without creating stories (used by /dso:sprint for MODERATE epics)
```
- `<epic-id>` (optional): The ticket epic to decompose. If omitted, presents an interactive list of open epics.
- `--lightweight` (optional): Enrich the epic with done definitions and considerations without creating child stories. Returns ENRICHED or ESCALATED. Used by /dso:sprint for MODERATE-complexity epics. If the scope scan discovers COMPLEX qualitative overrides, returns ESCALATED so the orchestrator can re-invoke in full mode.

This skill implements a five-phase process to transform epics into implementable stories.

Lightweight mode (--lightweight): Runs an abbreviated subset — Phase 1 Step 1 and an abbreviated Phase 2 — then writes done definitions directly to the epic, skipping Phases 2.5 and 3-4. Returns ENRICHED or ESCALATED.
Before proceeding, check if the epic has a scrutiny:pending tag:
Run .claude/scripts/dso ticket show <epic-id> and check the tags field.

- If scrutiny:pending is present in the tags array: HALT immediately. Output:

"This epic has not been through scrutiny review. Run /dso:brainstorm <epic-id> first to complete the scrutiny pipeline, then retry /dso:preplanning."

Do NOT produce any planning output.

- If scrutiny:pending is NOT present (or the tags field is empty/absent): proceed normally.

This is a presence-based check — only block when the tag IS present. Existing epics without the tags field are NOT blocked.
Before proceeding, check if the epic has an interaction:deferred tag:
Run .claude/scripts/dso ticket show <epic-id> and check the tags field.

- If interaction:deferred is present in the tags array: HALT immediately. Output:

"This epic has unresolved cross-epic interaction conflicts. Resolve or override them in /dso:brainstorm <epic-id> before proceeding to /dso:preplanning."

Do NOT produce any planning output.

- If interaction:deferred is NOT present (or the tags field is empty/absent): proceed normally.

This is a presence-based check — only block when the tag IS present. Existing epics without the tags field are NOT blocked. If ticket show fails, treat the tag as absent and proceed (fail-open).
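Both tag gates follow the same presence-based, fail-open pattern. A minimal sketch in Python (the `tag_blocks` helper is illustrative, not part of the dso CLI; it assumes `ticket show` emits JSON):

```python
import json

def tag_blocks(ticket_show_output: str, tag: str) -> bool:
    """Return True only when the tag is demonstrably present.

    Fail-open: a failed ticket show, non-JSON output, or a
    missing/empty tags field all mean "not blocked".
    """
    try:
        ticket = json.loads(ticket_show_output)
    except (json.JSONDecodeError, TypeError):
        return False  # ticket show failed or returned non-JSON: proceed
    if not isinstance(ticket, dict):
        return False
    return tag in (ticket.get("tags") or [])
```

Only a demonstrably present tag halts the skill; every failure mode degrades to "proceed", matching the fail-open rule above.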
If <epic-id> was not provided:

- Non-interactive gate (CP1): If PREPLANNING_INTERACTIVE=false and no <epic-id> argument was provided, output:

INTERACTIVITY_DEFERRED: preplanning.interactive=false — no epic-id provided. Invoke with /dso:preplanning <epic-id> to run non-interactively.

- Otherwise, run .claude/scripts/dso ticket list and filter the results to epics only (filter JSON output where ticket_type == 'epic').

Load the epic:

.claude/scripts/dso ticket show <epic-id>
Non-interactive default (CP2): If PREPLANNING_INTERACTIVE=false, skip the question (no AskUserQuestion) and default to {escalation_policy_label} = "Escalate when blocked" and {escalation_policy_text} = the full text for that label (see table in Phase 4 Step 2).

Otherwise, use AskUserQuestion to ask the user which escalation policy should apply to all stories in this epic. Skip this step in --lightweight mode.

Store the selected policy label and its full text as {escalation_policy_label} and {escalation_policy_text} for use in Phase 4 Step 2.
If --lightweight was passed: run Phase 1 Step 1 only, skip Step 1b, run abbreviated Phase 2, skip Phases 2.5 and 3-4, write done definitions to epic, return ENRICHED or ESCALATED per the Lightweight Mode Appendix below.
If --lightweight was NOT passed, continue to Phase 1 Step 2 as normal.
Gather all existing child items:
.claude/scripts/dso ticket deps <epic-id>
For each child, run .claude/scripts/dso ticket show <child-id> to read full details.
For each existing child:
completed → Keep as-is
in_progress → Review for reuse
pending → Fits new vision? Yes: Keep | No: Modify | Conflict: Delete
For each existing child, classify it:
Important: If boundaries are unclear or if existing tasks conflict with the new vision, pause and ask:
Non-interactive exit (CP3): If PREPLANNING_INTERACTIVE=false and scope clarification is required (boundaries are unclear or existing tasks conflict), output:

INTERACTIVITY_DEFERRED: preplanning.interactive=false — scope clarification required. Re-run /dso:preplanning <epic-id> interactively to resolve.

Before creating new stories, present a reconciliation summary:
| Child ID | Title | Status | Recommendation | Rationale |
|---|---|---|---|---|
| xxx-123 | ... | pending | Reuse | Aligns with Epic criterion 1 |
| xxx-124 | ... | in_progress | Modify | Needs updated success criteria |
| xxx-125 | ... | pending | Delete | Redundant with new story approach |
Non-interactive auto-apply (CP4): If PREPLANNING_INTERACTIVE=false, apply the reconciliation plan automatically (no AskUserQuestion confirmation), except for Delete recommendations on stories that are in_progress. Skip those deletions and log a warning for each: "Skipping Delete for in_progress story <id> — manual review required."

Otherwise, use AskUserQuestion to get user approval before proceeding:
If the user requests changes, iterate on the reconciliation plan and re-present.
Scan all drafted stories (new and modified) as a batch to flag cross-cutting concerns that individual tasks would be too granular to catch. This is a lightweight analysis — no sub-agent dispatch, no scored review, no revision cycles.
| Area | Reviewer File | What to flag |
|---|---|---|
| Security | docs/reviewers/security.md | New endpoints, data exposure, auth boundaries |
| Performance | docs/reviewers/performance.md | Large data processing, new queries, batch operations |
| Accessibility | docs/reviewers/accessibility.md | New interactive pages, UI flows, form elements |
| Testing | docs/reviewers/testing.md | New LLM interactions, external integrations, complex state |
| Reliability | docs/reviewers/reliability.md | New failure points, external dependencies, data integrity |
| Maintainability | docs/reviewers/maintainability.md | Cross-cutting patterns, shared abstractions, documentation gaps |
Evaluate the full set of stories against all six areas. Examples of flags to raise:
Produce a Risk Register — a flat list of one-line flags, each referencing the affected story IDs:
| # | Area | Stories | Concern |
|---|------|---------|---------|
| 1 | Testing | X, Y | New LLM interaction — ensure mock-compatible interface |
| 2 | Performance | Y | Large file processing — consider timeout behavior |
| 3 | Accessibility | Z | New interactive page — WCAG 2.1 AA compliance |
Flags are added to the affected stories' descriptions as Considerations — context for /dso:implementation-plan to incorporate into task-level acceptance criteria. They are not hard requirements at the story level.
While scanning, flag stories where scope risk is high — stories where the minimum functional goal (walking skeleton) and the ideal implementation diverge significantly. Common indicators:
Mark these stories as split candidates. Phase 3 evaluates whether a Foundation/Enhancement split actually makes sense (see "Foundation/Enhancement Splitting" below).
After story decomposition and risk scanning, research integration capabilities for stories that involve external tools or services. This step surfaces verified constraints while the user is engaged and can redirect.
A story qualifies for integration research if it references any of:
For each qualifying story:
- [Integration] Verified: <tool> supports <capability> (source: <URL>)
- [Integration] NOT verified: <tool> does not appear to support <capability>
Emit REPLAN_ESCALATE: brainstorm with an explanation of the unresolved gap. Sprint's replan machinery routes this signal. Track the current iteration in feasibility_cycle_count (a state variable exposed for planning-intelligence log consumption).

If no stories in the plan qualify for integration research, log: "No stories with external integration signals — skipping integration research." and proceed to Phase 2.5.
Skip this phase if fewer than 3 stories exist after Phase 2 completes. Adversarial review adds value only when there are enough stories for cross-story interactions to matter. If skipped, log: "Adversarial review skipped: fewer than 3 stories (<N> stories)." and proceed directly to Phase 3.
Read agents/red-team-reviewer.md inline and dispatch as subagent_type: "general-purpose" with model: "opus". (dso:red-team-reviewer is an agent file identifier, NOT a valid subagent_type value — the Agent tool only accepts built-in types.) The agent definition contains the full review prompt including the 6-category taxonomy and Consumer Enumeration directive. Pass the following as task arguments:
- {epic-title}: Epic title from Phase 1
- {epic-description}: Epic description from Phase 1
- {story-map}: All stories with their done definitions, considerations, and dependencies (formatted from Phase 2 output)
- {risk-register}: Risk Register table from Phase 2
- {dependency-graph}: Dependency graph from .claude/scripts/dso ticket deps <epic-id>

The red team sub-agent returns a JSON findings array. Parse the response and validate that it contains well-formed JSON with the expected schema (an array of objects with type, target_story_id, title, description, rationale, taxonomy_category fields).
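The schema validation can be sketched as follows (the `parse_red_team_findings` helper is illustrative; returning `None` signals a malformed response):

```python
import json

# Fields every red team finding object must carry.
REQUIRED_FIELDS = {"type", "target_story_id", "title",
                   "description", "rationale", "taxonomy_category"}

def parse_red_team_findings(raw: str):
    """Parse the red team response; return the findings list or None.

    None means the response was malformed: not JSON, not an array,
    or containing an object missing a required field.
    """
    try:
        findings = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(findings, list):
        return None
    for finding in findings:
        if not isinstance(finding, dict) or not REQUIRED_FIELDS <= finding.keys():
            return None
    return findings
```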
Fallback — two-path protocol:

1. If the response is malformed, re-read agents/red-team-reviewer.md inline and re-dispatch as a general-purpose agent using that content as the prompt. Do NOT perform the review inline — the agent must do it.
2. If the retry also fails, log: "Red team review failed: <reason>. Skipping adversarial review, proceeding to Phase 3." and skip directly to Phase 3.

If the red team returns a non-empty findings array, read agents/blue-team-filter.md inline and dispatch as subagent_type: "general-purpose" with model: "sonnet". (dso:blue-team-filter is an agent file identifier, NOT a valid subagent_type value — the Agent tool only accepts built-in types.) Pass the following as task arguments:
- {epic-title}: Same as red team
- {epic-description}: Same as red team
- {story-map}: Same as red team
- {red-team-findings}: The raw JSON findings array from the red team sub-agent

The blue team sub-agent returns a filtered JSON object with findings (accepted) and rejected arrays.
If red team returned zero findings: Skip the blue team dispatch entirely. Log: "Red team found no cross-story gaps. Skipping blue team filter." and proceed to Phase 3.
Partial failure — two-path protocol:

1. If the response is malformed, re-read agents/blue-team-filter.md inline and re-dispatch as a general-purpose agent using that content as the prompt. Do NOT perform the filtering inline — the agent must do it; inline filtering by the orchestrator defeats the purpose of the impartial blue team.
2. If the retry also fails, log: "Blue team filter failed: <reason>. Discarding unfiltered red team findings, proceeding to Phase 3."

Parse the blue team's accepted findings and apply each one based on its type:
| Finding Type | Action |
|---|---|
| new_story | Create a new story with description: .claude/scripts/dso ticket create story "<title>" --parent=<epic-id> -d "<body with description, done definitions, and considerations>". |
| modify_done_definition | Use .claude/scripts/dso ticket comment <target_story_id> "Done definition update: <description>" to record the modified done definition. |
| add_dependency | Add the dependency: .claude/scripts/dso ticket link <target_story_id> <dependency_id> depends_on (extract dependency ID from the finding's description). |
| add_consideration | Use .claude/scripts/dso ticket comment <target_story_id> "Consideration: <text>" to append the consideration. |
| escalate_to_epic | The finding signals that a cross-story concern belongs at the epic level. Read the current epic description via ticket show, then use .claude/scripts/dso ticket edit <epic-id> --description="<current-description>\n\nSC: <title> — <description>" to append the new Success Criterion. Before emitting the escalation signal, check sprint.max_replan_cycles from config (default 2): if the current replan_cycle_count has already reached the limit, log "escalate_to_epic: max_replan_cycles reached — recording SC but skipping REPLAN_ESCALATE" and continue without escalating. Otherwise emit REPLAN_ESCALATE: brainstorm EXPLANATION:<title> to trigger brainstorm re-review of the updated epic scope before continuing. |
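A sketch of the mapping from accepted finding types to dso commands. This is illustrative: it assumes the dependency ID has already been extracted from the finding's description into a hypothetical `dependency_id` key, and it leaves escalate_to_epic to the fuller procedure in the table (which needs the current epic description and replan-cycle bookkeeping, not a single command).

```python
def finding_to_command(finding: dict, epic_id: str):
    """Map an accepted blue-team finding to a dso CLI argv list.

    Returns None for types that need a multi-step procedure
    (escalate_to_epic) or that are unrecognized.
    """
    dso = ".claude/scripts/dso"
    kind = finding["type"]
    if kind == "new_story":
        return [dso, "ticket", "create", "story", finding["title"],
                f"--parent={epic_id}", "-d", finding["description"]]
    if kind == "modify_done_definition":
        return [dso, "ticket", "comment", finding["target_story_id"],
                f"Done definition update: {finding['description']}"]
    if kind == "add_dependency":
        # dependency_id: hypothetical key, extracted from the description
        return [dso, "ticket", "link", finding["target_story_id"],
                finding["dependency_id"], "depends_on"]
    if kind == "add_consideration":
        return [dso, "ticket", "comment", finding["target_story_id"],
                f"Consideration: {finding['description']}"]
    return None
```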
Log a summary after applying findings:
Adversarial review complete:
- Red team findings: <N> total
- Blue team filtered: <M> rejected, <K> accepted
- Applied: <A> new stories, <B> modified done definitions, <C> new dependencies, <D> new considerations
After processing blue team findings, persist the full exchange for post-mortem analysis:
- Check the blue team response for an artifact_path field. If present, it points to the persisted JSON file at $ARTIFACTS_DIR/adversarial-review-<epic-id>.json.
- If artifact_path is present, add a one-line ticket comment referencing the artifact:

.claude/scripts/dso ticket comment <epic-id> "Adversarial review: <N> findings, <M> accepted. Full exchange: <artifact_path>"

- If artifact_path is absent (the agent failed to persist, or returned malformed output): log a warning "Adversarial review artifact not persisted — blue team agent did not return artifact_path" and continue. Artifact persistence failure is non-blocking.

Proceed to Phase 3 (Walking Skeleton & Vertical Slicing) with the updated story map. New stories from adversarial review are included in the walking skeleton analysis.
The Walking Skeleton is the absolute minimum end-to-end path required to prove the technical concept.
Ask: "What is the simplest possible flow that demonstrates this feature works?"
Prioritize these stories first - they unblock all downstream work.
Ensure each story follows INVEST principles:
| Principle | Question | Fix if No |
|---|---|---|
| Independent | Can this be built without waiting on other stories? | Add dependencies or split |
| Negotiable | Is the "how" flexible, not dictated? | Remove implementation details |
| Valuable | Does this deliver user/business value? | Combine with other stories |
| Estimable | Can an agent estimate effort? | Add more context |
| Small | Can this be completed in one sub-agent session? | Split into smaller stories |
| Testable | Are success criteria measurable? | Add specific acceptance criteria |
Focus on functional "slices" of value, not horizontal technical layers.
Good (vertical slice):
Bad (horizontal layer):
The vertical slice includes all layers necessary to deliver value.
For each story flagged as a split candidate in Phase 2, evaluate whether splitting delivers better outcomes than keeping it as a single story.
The question: "Does the minimum that delivers the functional goal differ significantly from the ideal experience or architecture?"
Split if:
Don't split if:
Examples:
| Story | Foundation | Enhancement |
|---|---|---|
| "User can review extracted rules" | Review page with approve/reject using existing table component | Custom review interface with inline editing, bulk actions, and keyboard shortcuts |
| "System stores extraction results" | Persist results in existing job table with JSON column | Dedicated results table with normalization, indexing, and query optimization |
| "User can export reviewed rules as Rego" | Download button that generates Rego file | Export wizard with format options, preview, and validation |
For each split:
.claude/scripts/dso ticket link <enhancement-id> <foundation-id> depends_on

Note: dso:ui-designer has its own Pragmatic Scope Splitter (Phase 3 Step 10) that may trigger UI-specific splits during design. If preplanning already split a story, the design agent works within the Foundation story's scope. Enforcement: the splitRole guard in ui-designer-dispatch-protocol.md Section 5 enforces this precedence rule — agent scope_split_proposals are skipped entirely when a splitRole: Foundation or splitRole: Enhancement marker is detected on the story.
After Phase 3 completes story slicing and splitting, perform targeted research for stories where decomposition has revealed knowledge gaps. This phase fires per-story and is distinct from Phase 2.25 (Integration Research): Phase 2.25 fires for stories with external integration signals (third-party tools, APIs); Phase 3.5 fires for any decomposition gap regardless of whether an external integration is involved.
A story qualifies for story-level research if any of the following apply:
When a story qualifies, follow the Research Process defined in Phase 2.25. Record findings in the story spec under a Research Notes section, noting the trigger condition, query summary, source URLs, and key insight for each gap. If research resolves the gap, update the story's done definition or considerations. If research surfaces new risks, flag the story as high-risk for Phase 4 review.
If WebSearch or WebFetch fails or is unavailable, continue without research rather than blocking the workflow. Log: "Story-level research skipped for <story-id>: WebSearch/WebFetch unavailable." and proceed to Phase 4.
If no stories qualify under the trigger conditions above, log: "No stories with decomposition gaps — skipping story-level research." and proceed to Phase 4.
For new stories, create the ticket then immediately write the full story body into the ticket file:
```bash
# Assemble the story body from earlier phases and create the ticket in one command:
# - Description: What/Why/Scope from Phase 2 analysis
# - Done Definitions: assembled during Phase 3
# - Considerations: flags from Phase 2 Risk & Scope Scan
# - Escalation Policy: selected in Phase 1 Step 1b (omit if Autonomous)
STORY_ID=$(.claude/scripts/dso ticket create story "As a [persona], [goal]" --parent=<epic-id> --priority=<priority> -d "$(cat <<'DESCRIPTION'
## Description
**What**: <what the feature or change is>
**Why**: <how this advances the epic's vision>
**Scope**:
- IN: <items explicitly in scope>
- OUT: <items explicitly out of scope>

## Done Definitions
- When this story is complete, <observable outcome 1>
  ← Satisfies: "<quoted epic criterion>"
- When this story is complete, <observable outcome 2>
  ← Satisfies: "<quoted epic criterion>"

## Considerations
- [<Area>] <concern from Risk & Scope Scan>

## Escalation Policy
**Escalation policy**: <verbatim escalation policy text from Phase 1 Step 1b>
DESCRIPTION
)")
```
Omit the ## Escalation Policy section if the user selected Autonomous in Phase 1 Step 1b. The ticket must never be left as a bare title — always include the structured body at creation time.
For modified stories, use .claude/scripts/dso ticket comment <existing-id> "<updated content>" to record changes.
For stories to delete:
.claude/scripts/dso ticket transition <id> open closed
Each story must contain:
Format: As a [User/Developer/PO], [goal]
Example: "As a compliance officer, I can see which policies apply to a document"
Include:
Do NOT include: specific file paths, technical implementation details, error codes, or testing requirements. Those belong in /dso:implementation-plan.
Observable outcomes that bridge the epic's vision to task-level acceptance criteria. Each definition must be:
- Specific enough that /dso:implementation-plan can decompose it into tasks with specific Verify: commands

Format:
```
Done Definitions:
- When this story is complete, [observable outcome 1]
  ← Satisfies: "[quoted epic criterion]"
- When this story is complete, [observable outcome 2]
  ← Satisfies: "[quoted epic criterion]"
```
Example:
```
Done Definitions:
- When this story is complete, a user can view all extracted rules
  for a document, mark individual rules as approved or rejected,
  and see a summary count of pending reviews
  ← Satisfies: "Users can review extracted rules before export"
- When this story is complete, reviewed rules persist across sessions
  and are visible when the user returns to the same document
  ← Satisfies: "Review state is preserved"
```
Good done definitions (observable outcomes):
Bad done definitions (implementation details):
After drafting all done definitions, cross-check each DD against the epic SC it claims to satisfy (← Satisfies:). A done definition contradicts its SC when the DD's observable outcome, if fully achieved, would leave the SC unsatisfied. Common contradiction patterns:
If a contradiction is found:
"SC '<criterion>' may be too strict for this decomposition — <reason>. Revise the SC, or confirm the current SC should be met as written?"Code-change stories (stories that produce or modify source code) must include 'unit tests written and passing for all new or modified logic' as a Done Definition. This is a unit test DoD requirement applied at the story level.
Documentation, research, and other non-code stories are exempt from this requirement — their Done Definitions focus on observable outcomes rather than test coverage.
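A sketch of how this requirement could be audited across a drafted story map (the `is_code_change` flag is a hypothetical classification field, not part of the ticket schema):

```python
# The required DoD phrase for code-change stories.
UNIT_TEST_DOD = "unit tests written and passing for all new or modified logic"

def stories_missing_unit_test_dod(stories):
    """Return IDs of code-change stories whose Done Definitions
    lack the unit-test DoD. Non-code stories are exempt."""
    missing = []
    for story in stories:
        if not story.get("is_code_change", False):
            continue  # documentation/research stories are exempt
        dds = story.get("doneDefinitions", [])
        if not any(UNIT_TEST_DOD in dd for dd in dds):
            missing.append(story["id"])
    return missing
```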
Notes from the Risk & Scope Scan (Phase 2). These provide context for /dso:implementation-plan to incorporate into task-level acceptance criteria:
Considerations:
- [Performance] Large file processing — consider timeout behavior
- [Testing] New LLM interaction — ensure mock-compatible interface
- [Accessibility] New interactive page — WCAG 2.1 AA compliance required
Include the policy selected in Phase 1 Step 1b. Use the exact text for each label:
| Label | Text to include verbatim |
|---|---|
| Autonomous | Escalation policy: Proceed with best judgment. Make and document reasonable assumptions. Do not escalate for uncertainty — use your best assessment of the intent and move forward. |
| Escalate when blocked | Escalation policy: Proceed unless a significant assumption is required to continue — one that could send the implementation in the wrong direction. Escalate only when genuinely blocked without a reasonable inference. Document all assumptions made without escalating. |
| Escalate unless confident | Escalation policy: Escalate to the user whenever you do not have high confidence in your understanding of the work, approach, or intent. "High confidence" means clear evidence from the codebase or ticket context — not inference or reasonable assumption. When in doubt, stop and ask rather than guess. |
Omit this section entirely if the user selected Autonomous — the absence of a policy section signals unrestricted autonomy.
Add blocking relationships:
.claude/scripts/dso ticket link <story-id> <blocking-story-id> depends_on
After all implementation stories are drafted, create one final story to update project documentation. This story:
- Targets CLAUDE.md (architecture section, quick reference), .claude/design-notes.md, ADRs, KNOWN-ISSUES.md, or other docs that already exist and would become stale after the epic is complete.
- Follows .claude/docs/DOCUMENTATION-GUIDE.md for formatting, structure, and conventions when writing documentation updates.

When creating the documentation update story via .claude/scripts/dso ticket create, add a note with the guide reference so sub-agents find it in their ticket payload:
.claude/scripts/dso ticket comment <story-id> "Follow .claude/docs/DOCUMENTATION-GUIDE.md for documentation formatting, structure, and conventions."
After all implementation stories are drafted and the documentation update story is planned, evaluate whether the epic requires dedicated TDD test stories. A TDD test story is a story whose sole purpose is to write failing tests (RED) that implementation stories must make pass (GREEN).
Infer the epic type from its context and title:
| Epic Type | TDD Story Required | Story Title Format |
|---|---|---|
| User-facing epic (LLM-inferred: epic adds or changes user-visible features, pages, flows, or interactions) | Yes — create an E2E test story | Write failing E2E tests for [feature] |
| External-API epic (LLM-inferred: epic integrates with an external service or third-party API) | Yes — create an integration test story | Write failing integration tests for [feature] |
| Internal tooling epic (LLM-inferred: epic modifies internal skills, hooks, scripts, or infrastructure) | No — unit testing is handled within each implementation story's /dso:implementation-plan; this is the internal epic exemption | — |
For epics that span multiple types (e.g., both user-facing and external-API), create one TDD story per applicable type.
TDD test stories have a specific dependency structure that differs from other stories:
- The test story's depends_on list must contain no implementation story IDs from the same epic — the test story has no blockers and must be created first.
- Run .claude/scripts/dso ticket link <impl-story-id> <test-story-id> depends_on for each implementation story so that implementation cannot begin until tests exist.

Every TDD test story must include the following acceptance criterion:
Tests must be run and confirmed failing (RED) before any implementation story begins.
The failing run result must be recorded in a story note:
.claude/scripts/dso ticket comment <test-story-id> "RED confirmed: <test output summary>"
This RED acceptance criterion ensures that the TDD test story's tests are observed to fail before implementation begins, rather than being written alongside or after implementation.
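The inverted dependency direction can be sketched as follows (the helper and story IDs are illustrative):

```python
def tdd_link_commands(test_story_id, impl_story_ids):
    """Build the link commands that make every implementation story
    depend on the TDD test story — never the other way around."""
    dso = ".claude/scripts/dso"
    return [
        [dso, "ticket", "link", impl_id, test_story_id, "depends_on"]
        for impl_id in impl_story_ids
    ]
```

Each generated command blocks one implementation story on the test story, so no implementation can start until the failing tests exist.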
Non-interactive suppress (CP5): If PREPLANNING_INTERACTIVE=false:
Display the epic ID prominently at the top so it can be referenced in follow-up commands:
Story dashboard for Epic [epic-id]: [Title]
Display a summary table:
| ID | Title | Priority | Status | Blocks | Split | Satisfies Criterion |
|---|---|---|---|---|---|---|
| xxx-126 | As a user... | P1 | pending | xxx-127 | Foundation | Epic criterion 1 |
| xxx-127 | As a user... | P2 | pending | - | Enhancement of xxx-126 | Epic criterion 1 |
| xxx-128 | As a dev... | P1 | pending | - | - | Epic criterion 2 |
Then, below the table, display each story's full description so the user can review scope, done definitions, and considerations before approving:
### xxx-126: As a user, I can upload a document and see its classification
**What**: [description]
**Why**: [rationale]
**Scope**: IN: [...] | OUT: [...]
**Done Definitions**:
- When this story is complete, [outcome 1]
← Satisfies: "[epic criterion]"
**Considerations**:
- [Area] concern
---
[repeat for each story]
After creating all stories and dependencies:
.claude/scripts/dso validate-issues.sh
If score < 5, fix issues before presenting to user.
Non-interactive skip (CP6): If PREPLANNING_INTERACTIVE=false, skip the approval prompt (no AskUserQuestion).

Otherwise, present the plan to the user with:
I've created a story map for Epic [ID]: [Title]
Summary:
- [N] new stories created
- [M] existing stories modified
- [K] stories removed
- Walking Skeleton: [list of IDs in critical path]
Next Steps:
1. Review the story dashboard above
2. Confirm priorities and dependencies make sense
Use AskUserQuestion to get user approval:
If the user requests changes, iterate on the plan and re-present. Once the user selects "Approve — finalize and proceed", immediately continue to Step 5a, Step 6, and Step 7 without pausing for additional input — approval is the signal to proceed, not a stopping point.
Write the accumulated context as a structured comment on the epic ticket so that /dso:implementation-plan can load richer context when planning individual stories from this epic, regardless of which session or environment runs next.
Command (use Python subprocess to avoid shell ARG_MAX limits for large payloads). This write is an optional cache — if the ticket CLI call fails, log a warning and continue; do not abort the phase:
```python
import json, subprocess

payload = json.dumps(<context-dict>, separators=(",", ":"))
body = "PREPLANNING_CONTEXT: " + payload
result = subprocess.run(
    [".claude/scripts/dso", "ticket", "comment", "<epic-id>", body],
    check=False,
)
if result.returncode != 0:
    print("WARNING: Failed to write PREPLANNING_CONTEXT comment to epic ticket — continuing without cache write")
```
Known limitation: For extremely large epic contexts (unlikely in practice), the actual ARG_MAX constraint boundary is ticket-comment.sh, which passes the comment body as a shell argument to its internal python3 -c invocation. The Python subprocess call in this skill avoids ARG_MAX at the outer shell level, but a body >~500KB could still hit the kernel limit inside ticket-comment.sh. A proper fix would write the payload to a temp file and pass the path instead of the body directly. Typical epic contexts are 10–50KB and well within limits.
Serialize the JSON payload to a single minified line (no whitespace between keys/values) and write it as a ticket comment. If /dso:preplanning runs again on the same epic, write a new comment — /dso:implementation-plan will use the last PREPLANNING_CONTEXT: comment in the array.
Schema (version 1):
{
  "version": 1,
  "epicId": "<epic-id>",
  "generatedAt": "<ISO-8601 timestamp>",
  "generatedBy": "preplanning",
  "epic": {
    "title": "...",
    "description": "...",
    "successCriteria": ["..."]
  },
  "stories": [
    {
      "id": "<story-id>",
      "title": "...",
      "description": "...",
      "priority": 2,
      "classification": "new|reuse|modify",
      "walkingSkeleton": true,
      "hasWireframe": false,
      "doneDefinitions": ["When this story is complete, ..."],
      "considerations": ["[Performance] Large file processing — consider timeout behavior"],
      "scopeSplitCandidate": false,
      "splitRole": "foundation|enhancement|null",
      "splitPairId": "<paired-story-id or null>",
      "blockedBy": ["<blocking-id>"],
      "satisfiesCriterion": "quoted epic criterion"
    }
  ],
  "storyDashboard": {
    "totalStories": 5,
    "uiStories": 2,
    "criticalPath": ["<id-a>", "<id-b>", "<id-c>"]
  }
}
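A minimal structural check of this payload might look like the following sketch. The required-key sets are taken from the schema above; this is not an exhaustive validator.

```python
import json

REQUIRED_TOP = {"version", "epicId", "generatedAt", "generatedBy", "epic", "stories", "storyDashboard"}
REQUIRED_STORY = {"id", "title", "priority", "classification", "doneDefinitions"}

def validate_context(payload: str) -> list[str]:
    """Return a list of structural problems; an empty list means the payload looks sane."""
    errors = []
    data = json.loads(payload)
    errors += [f"missing top-level key: {k}" for k in sorted(REQUIRED_TOP - data.keys())]
    if data.get("version") != 1:
        errors.append("unsupported version")
    for i, story in enumerate(data.get("stories", [])):
        errors += [f"story {i} missing key: {k}" for k in sorted(REQUIRED_STORY - story.keys())]
        if story.get("classification") not in ("new", "reuse", "modify"):
            errors.append(f"story {i} has invalid classification")
    return errors
```

A consumer such as /dso:implementation-plan could run a check like this before trusting a cached context.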
Content to include:
- `generatedAt`: Current ISO-8601 timestamp for staleness detection

Write the context as a ticket comment using `.claude/scripts/dso ticket comment`. If /dso:preplanning runs again on the same epic, write a new comment — /dso:implementation-plan uses the last PREPLANNING_CONTEXT: comment in the array.
TTL note for consumers: The `generatedAt` timestamp enables staleness detection. Consumers should treat PREPLANNING_CONTEXT comments older than 7 days as potentially stale — epic scope, story priorities, or dependency structures may have changed since generation. When consuming a stale context, re-invoke /dso:preplanning to refresh it rather than relying on outdated data.
Log: "Planning context written to epic ticket <epic-id> as PREPLANNING_CONTEXT comment"
After the user approves the story map, dispatch dso:ui-designer for any
story that involves UI changes. The agent determines whether new components,
layouts, or wireframes are actually needed — your job is only to identify
candidates and dispatch them.
A story is a candidate if it involves user-facing UI work — new or changed components, layouts, or screens. Stories that are purely backend, infrastructure, testing-only, or documentation do NOT qualify.
Skip if: No stories in the plan involve UI changes. Document this: "No UI stories identified — skipping wireframe phase."
Before the loop: Read the inline dispatch protocol once using the Read tool:
skills/preplanning/prompts/ui-designer-dispatch-protocol.md
For each qualifying story, follow the six protocol steps in order:
- Dispatch via the Agent tool (read `agents/ui-designer.md` inline, use `subagent_type: "general-purpose"` with `model: "sonnet"` — `dso:ui-designer` is an agent file identifier, NOT a valid `subagent_type` value)
- Review (run /dso:review-protocol on design artifacts; max 3 cycles; REVIEW_PASS → tag `design:approved` and proceed; REVIEW_FAIL → re-dispatch ui-designer with feedback; at max cycles: interactive → ask user; non-interactive → emit INTERACTIVITY_DEFERRED, tag `design:pending_review`, and proceed)
- Update the session file (`processedStories` and `siblingDesigns`)

NESTING PROHIBITION: Dispatch dso:ui-designer via the Agent tool only.
Do NOT use the Skill tool — that would create illegal Skill-tool nesting
(preplanning → Skill → ui-designer), which causes
`[Tool result missing due to internal error]` failures.
Parse the agent return value for the UI_DESIGNER_PAYLOAD: prefix and extract
the JSON object that follows. Route all subsequent decisions (tagging, scope
splits, session file updates) based on that object's fields.
Order: Process stories in dependency order (stories with no blockers first, then stories that depend on them). This ensures base wireframes exist before dependent designs reference them.
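Dependency order is a plain topological sort over each story's `blockedBy` list; a sketch using Kahn's algorithm:

```python
from collections import deque

def dependency_order(stories: list[dict]) -> list[str]:
    """Order story ids so every blocker comes before the stories it blocks."""
    ids = {s["id"] for s in stories}
    # Ignore blockers outside this epic; they cannot be scheduled here.
    blockers = {s["id"]: [b for b in s.get("blockedBy", []) if b in ids] for s in stories}
    dependents = {sid: [] for sid in ids}
    indegree = {sid: len(blockers[sid]) for sid in ids}
    for sid, blocks in blockers.items():
        for b in blocks:
            dependents[b].append(sid)
    queue = deque(sorted(sid for sid in ids if indegree[sid] == 0))
    ordered = []
    while queue:
        sid = queue.popleft()
        ordered.append(sid)
        for dep in dependents[sid]:
            indegree[dep] -= 1
            if indegree[dep] == 0:
                queue.append(dep)
    if len(ordered) != len(ids):
        raise ValueError("dependency cycle among stories")
    return ordered
```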
After wireframe phase completes (or is skipped), confirm all ticket state is up to date and report completion.
When --lightweight is passed:
{
  "result": "ESCALATED",
  "reason": "<override name>: <explanation>",
  "recommendation": "full_preplanning",
  "epicId": "<epic-id>"
}
Write the lightweight context (including the `stories` array) using a Python subprocess to avoid ARG_MAX shell argument limits. This write is an optional cache — if it fails, log a warning and continue; do not abort the phase:
import json, subprocess
payload = json.dumps(<context-dict>, separators=(",",":"))
body = "PREPLANNING_CONTEXT_LIGHTWEIGHT: " + payload
result = subprocess.run(
    [".claude/scripts/dso", "ticket", "comment", "<epic-id>", body],
    check=False,
)
if result.returncode != 0:
    print("WARNING: Failed to write PREPLANNING_CONTEXT_LIGHTWEIGHT comment to epic ticket — continuing without cache write")
Use the PREPLANNING_CONTEXT_LIGHTWEIGHT: key to avoid overwriting a full PREPLANNING_CONTEXT: comment. Consumers (e.g., /dso:implementation-plan) read PREPLANNING_CONTEXT: by default and only fall back to PREPLANNING_CONTEXT_LIGHTWEIGHT: if no full context exists.
{
  "result": "ENRICHED",
  "epicId": "<epic-id>",
  "doneDefinitions": ["<list of done definitions written>"],
  "considerations": ["<list of considerations>"]
}
Never run .claude/scripts/dso ticket link <epic-id> <story-id> depends_on — this adds the story as a dependency of the epic, causing the epic to self-block in sprint-list-epics.sh (bug w21-3w8y).
- `.claude/scripts/dso ticket link <story-id> <blocking-story-id> depends_on` — correct: story depends on another story.
- `.claude/scripts/dso ticket link <epic-id> <child-story-id> depends_on` — WRONG: child added as epic blocker.

Epic children are linked via `--parent=<epic-id>` at creation time. That parent field is how the epic knows what work to do. Adding a child as a dep means the epic will show as BLOCKED until the child is closed — which is backwards. Only add external dependencies (tickets from other epics/projects) to an epic's deps.
Focus on requirements, constraints, and outcomes. Avoid dictating specific implementation code or library choices unless mandated by the Architecture Board.
Good: "System must validate email format before storing"
Bad: "Use the email-validator library with pattern ^[\w.-]+@[\w.-]+\.\w+$"
Check for existing items before creating new ones to prevent backlog pollution. Always run Phase 1 reconciliation before creating stories.
Stories should be detailed enough that /dso:implementation-plan can decompose them without further human clarification. Include:
Do NOT include: file paths, code snippets, database schemas, API response formats, or testing strategies. Those are /dso:implementation-plan concerns.
After writing the Scope section for each story, verify every "OUT" assertion that claims something already exists or is handled elsewhere:
- Verify: run a command that confirms the assertion
- Record: `OUT: [item] — Verified: [command] returned exit 0`

Why this matters: False preconditions encoded as scoping decisions are invisible to downstream validation. A story that says "OUT: Creating X — X already exists" will pass all structural checks even when X does not exist, because no task was created to build it and no AC was written to verify it.
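The verification step can be sketched as follows. The confirming command for each assertion is supplied by the story author; exit 0 records it as verified.

```python
import subprocess

def verify_out_assertion(item: str, command: list[str]) -> str:
    """Run the confirming command and record the result in the scope-section format."""
    result = subprocess.run(command, capture_output=True, check=False)
    status = f"returned exit {result.returncode}"
    if result.returncode != 0:
        # A failed check means the OUT claim is a false precondition: fix the scope.
        return f"OUT: {item} — FAILED verification: {' '.join(command)} {status}"
    return f"OUT: {item} — Verified: {' '.join(command)} {status}"
```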
| Phase | Key Actions | Tools |
|---|---|---|
| 1: Reconciliation | Audit children, clarify scope | .claude/scripts/dso ticket show, .claude/scripts/dso ticket deps |
| 2: Risk & Scope Scan | Flag cross-cutting concerns, identify split candidates | Lightweight analysis (no sub-agents) |
| 2.5: Adversarial Review | Red team attack on story map, blue team filter findings (skip if < 3 stories) | Task (opus red team, sonnet blue team) |
| 3: Walking Skeleton | Prioritize critical path, apply INVEST, Foundation/Enhancement splits | Priority analysis, .claude/scripts/dso ticket link |
| 4: Verification | Create stories, link criteria, validate, wireframe UI stories | .claude/scripts/dso ticket create, .claude/scripts/dso ticket link, .claude/scripts/dso ticket comment, validate-issues.sh, dso:ui-designer (via Agent tool), .claude/scripts/dso ticket edit --tags (design:approved on REVIEW_PASS; design:pending_review on deferred/failed review) |
Epic: "Implement document classification pipeline"
Epic Criterion: "Users can upload a document and see its classification"
Existing Child: "Add database schema for documents" (status: pending)
Reconciliation:
Risk & Scope Scan:
New Stories (vertical slices):
Story 1 (Foundation): "As a user, I can upload a document and see its classification"
Story 2 (Enhancement of Story 1): "As a user, I can see detailed classification confidence and sub-categories"
Before emitting final output or STATUS, emit the SKILL_EXIT breadcrumb:
_DSO_TRACE_END_MS=$(date +%s%3N 2>/dev/null || echo "null")
_DSO_TRACE_ELAPSED="null"
if [ "${_DSO_TRACE_START_MS}" != "null" ] && [ "${_DSO_TRACE_END_MS}" != "null" ]; then
_DSO_TRACE_ELAPSED=$(( _DSO_TRACE_END_MS - _DSO_TRACE_START_MS ))
fi
echo "{\"type\":\"SKILL_EXIT\",\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null)\",\"skill_name\":\"preplanning\",\"nesting_depth\":${_DSO_TRACE_DEPTH:-1},\"skill_file_size\":${_DSO_TRACE_FILE_SIZE:-null},\"tool_call_count\":null,\"elapsed_ms\":${_DSO_TRACE_ELAPSED},\"session_ordinal\":${_DSO_TRACE_SESSION_ORDINAL:-1},\"cumulative_bytes\":${_DSO_TRACE_CUMULATIVE_BYTES:-null},\"termination_directive\":false,\"user_interaction_count\":0}" >> "/tmp/dso-skill-trace-${_DSO_TRACE_SESSION_ID}.log" || true