Use when structuring discovery findings into an opportunity tree that connects outcomes to opportunities to candidate solutions. Also triggers on 'OST', 'opportunity solution tree', 'opportunity tree', 'where should we focus', 'which opportunities', or 'structure the discovery'.
From the `pm` plugin (`npx claudepluginhub etusdigital/etus-plugins --plugin pm`). This skill uses the workspace's default tool permissions.
knowledge/template.md
Structure discovery findings into a clear opportunity tree: Outcome (measurable result) → Opportunities (real problems/desires supported by evidence) → Candidate Solutions (high-level directions, not specs) → Assumptions and optional Experiments. The OST reduces the risk of "jumping to features" by forcing structured reasoning: first choose WHERE to attack, then choose HOW. It bridges Discovery and Planning — opportunities selected here become features in the PRD.
Reference: .claude/skills/orchestrator/dependency-graph.yaml
BLOCKS (must exist — auto-invoke if missing):
- docs/ets/projects/{project-slug}/discovery/product-vision.md — Outcome comes from BO-# business objectives.
- docs/ets/projects/{project-slug}/discovery/baseline.md — Evidence for opportunities comes from baseline metrics.

ENRICHES (improves output — warn if missing):
- docs/ets/projects/{project-slug}/discovery/project-context.md — Qualitative context improves opportunity framing.

Resolution protocol:
1. Read dependency-graph.yaml → ost.requires: [product-vision, baseline].
2. Check: do docs/ets/projects/{project-slug}/discovery/product-vision.md and docs/ets/projects/{project-slug}/discovery/baseline.md exist, are they non-empty, and are they not marked <!-- STATUS: DRAFT -->?
3. If product-vision is missing → auto-invoke the product-vision skill → wait → continue.
4. If baseline is missing → auto-invoke the baseline skill → wait → continue.

MANDATORY: This skill MUST write its artifact to disk before declaring complete.
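The usability check in the protocol above can be sketched as a small predicate. This is an illustration, not the orchestrator's actual implementation; the DRAFT marker string is the one documented in this spec.

```python
# Sketch of the dependency check: a doc satisfies a BLOCKS dependency
# only if it exists, is non-empty, and is not marked as a draft.
from pathlib import Path

DRAFT_MARKER = "<!-- STATUS: DRAFT -->"

def is_usable(path: Path) -> bool:
    """Return True if the upstream document can satisfy a BLOCKS dependency."""
    if not path.is_file():
        return False
    text = path.read_text(encoding="utf-8")
    return bool(text.strip()) and DRAFT_MARKER not in text
```

If `is_usable` returns False for product-vision.md or baseline.md, the corresponding skill is auto-invoked before this one continues.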
- Create the target directory with mkdir -p if needed.
- If the Write fails: report the error to the user. Do NOT proceed to the next skill.
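A minimal sketch of the mandatory save step, assuming a plain filesystem write stands in for the Write tool; the slug and content are placeholders:

```python
# Sketch of the mandatory save step (slug and content are placeholders)
from pathlib import Path

slug = "example-project"  # placeholder project slug
target = Path(f"docs/ets/projects/{slug}/planning/ost.md")
target.parent.mkdir(parents=True, exist_ok=True)  # mkdir -p if needed
try:
    target.write_text("# Opportunity Solution Tree\n", encoding="utf-8")
except OSError as err:
    # Report the error and stop; do NOT proceed to the next skill
    raise SystemExit(f"Write failed: {err}")
```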
This skill follows the ETUS interaction standard. Your role is a thinking partner, not an interviewer — challenge opportunity framing, push for evidence, flag when solutions are disguised as opportunities, and help the user distinguish outcome from output. OST work is about structured reasoning under uncertainty — these patterns ensure the user builds an honest, evidence-backed tree, not a wish list.
One question per message — Ask one question, wait for the answer, then ask the next. OST questions require the user to synthesize discovery findings, so give them space. Use the AskUserQuestion tool when available for structured choices.
3-4 suggestions for choices — When the user needs to choose a direction (e.g., outcome framing, opportunity prioritization, experiment design), present 3-4 concrete options with a brief description of each. Highlight your recommendation. Let the user pick before proceeding.
Propose approaches before generating — Before generating any content section, propose 2-3 approaches with tradeoffs. Example: "I see three ways to frame this outcome: (A) conversion-focused — increase funnel completion rate, (B) efficiency-focused — reduce time/cost per conversion, (C) quality-focused — improve lead quality while maintaining volume. I recommend A because it directly ties to the baseline's biggest drop-off."
Present output section-by-section — Don't generate the full document at once. Present each major section (Outcome, then Opportunity list, then each Opportunity detail one by one), ask "Does this capture it well? Anything to adjust?" and only proceed after approval.
Track outstanding questions — If something can't be answered now, classify it:
Multiple handoff options — At completion, present 3-4 next steps as options instead of a single fixed path.
Resume existing work — Before starting, check if the target artifact already exists at the expected path. If it does, ask the user: "I found an existing ost.md at [path]. Should I continue from where it left off, or start fresh?" If resuming, read the document, summarize the current state, and continue from outstanding gaps.
Assess if full process is needed (right-size check) — If the user's input already has a clear outcome, well-evidenced opportunities, and candidate solutions, don't force the full interview. Confirm understanding briefly and offer to skip directly to document generation. Only run the full interactive process when there's genuine ambiguity to resolve.
This skill reads and writes persistent memory to maintain context across sessions.
On start (before any interaction):
- docs/ets/.memory/project-state.md — know where the project is
- docs/ets/.memory/decisions.md — don't re-question closed decisions
- docs/ets/.memory/preferences.md — apply user/team preferences silently
- docs/ets/.memory/patterns.md — apply discovered patterns

On finish (after saving artifact, before CLOSING SUMMARY):
- project-state.md is updated automatically by the PostToolUse hook — do NOT edit it manually.
- `python3 .claude/hooks/memory-write.py decision "<decision>" "<rationale>" "<this-skill-name>" "<phase>" "<tag1,tag2>"`
- `python3 .claude/hooks/memory-write.py preference "<preference>" "<this-skill-name>" "<category>"`
- `python3 .claude/hooks/memory-write.py pattern "<pattern>" "<this-skill-name>" "<applies_to>"`

The .memory/*.md files are read-only views generated automatically from memory.db. Never edit them directly.
Before generating content, challenge the framing with these questions (ask the most relevant 1-2, not all):
The goal is to sharpen the tree's quality, not to block progress. If the framing is solid, acknowledge it and move on quickly.
These anti-patterns come from the team's real experience (KB). Flag them actively during the interview and generation:
Backlog disguised as OST — The OST is not a list of tickets or user stories. If opportunities read like "implement feature X" or have acceptance criteria, push back: "This looks like a backlog item. What's the underlying problem it solves?"
Opportunities written as solutions — Opportunities must describe problems or desires, not actions. Bad: "Build a new onboarding flow." Good: "New users abandon before completing first action (68% drop-off at step 3)."
Everything is P0 — If all opportunities are marked critical, the tree is too big or prioritization is missing. Ask: "If you could only pursue ONE of these, which would it be?"
Outcome is output — Outcome must be a measurable business result, not a deliverable. Bad: "Launch media library." Good: "Increase content creation rate from X to Y per week."
Vague terms without metrics — "Improve experience" or "make it better" without specifying how to measure improvement. Push for specificity.
No evidence — Opportunities listed without data, quotes, or incident references. Every opportunity must have at least one evidence point.
Load context in this order of priority:
1. If the user provided [context-path], read that file directly.
2. Check docs/ets/projects/{project-slug}/state/reports/ for upstream discovery artifacts.
3. Check docs/ets/projects/{project-slug}/discovery/ for product-vision.md and baseline.md. Load BO-# objectives and baseline metrics.

This interview follows a one-question-at-a-time rhythm. Ask each question alone in one message, wait for the user's answer, then decide whether to ask a follow-up or move forward.
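The load order above can be sketched as a priority list. The paths are the documented locations; the explicit context path is a hypothetical user-supplied override:

```python
# Sketch of the context-loading priority; paths are the documented ones
from pathlib import Path

def context_sources(project_slug, context_path=None):
    """Return candidate context locations in priority order."""
    base = Path("docs/ets/projects") / project_slug
    sources = []
    if context_path:                            # 1. explicit [context-path]
        sources.append(Path(context_path))
    sources.append(base / "state" / "reports")  # 2. upstream discovery artifacts
    sources.append(base / "discovery")          # 3. product-vision.md + baseline.md
    return sources
```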
Question 1 (ask alone, one message):
"Qual e o outcome que estamos perseguindo? Lembre: outcome e resultado mensuravel, nao entrega. Exemplo: 'Aumentar taxa de conclusao do funil X de A% para B% mantendo CPL <= Y ate MM/AAAA.'"
Wait for the answer. If the stated outcome is an output, challenge it:
"Isso parece mais uma entrega (output) do que um resultado (outcome). Qual metrica de negocio melhora se essa entrega for bem-sucedida?"
Follow-up (ask alone, one message):
"Quais sao as metricas principais desse outcome? Para cada uma, qual e o baseline atual (do documento de baseline), a meta, e o periodo?"
Then ask:
"Quais guardrails nao podem piorar enquanto perseguimos esse outcome? (Ex.: conversao nao pode cair abaixo de X%, p95 nao pode subir acima de Yms)"
Question 2 (ask alone, one message):
"Quais sao as principais oportunidades/problemas que o discovery revelou? Liste 3-7 em uma linha cada. Lembre: oportunidade e problema/dor/desejo real, nao 'fazer X' (isso e solucao)."
Wait for the answer. Review each item against anti-patterns:
For each opportunity from Block 2, ask one at a time:
Question 3 (ask alone, one message, for each O-#):
"Vamos detalhar O-[#] — [titulo]. Descreva o problema/oportunidade em 3-6 linhas. Para quem isso importa (persona/area)?"
Wait for the answer. Then ask:
Question 4 (ask alone, one message):
"Quais evidencias sustentam O-[#]? Precisamos de pelo menos: (1) um dado com fonte, (2) um relato/quote, (3) um incidente/ticket. O que voce tem?"
Then ask:
Question 5 (ask alone, one message):
"Quais solucoes candidatas voce ve para O-[#]? Liste 2-5 direcoes em alto nivel — sem detalhar requisitos, apenas a direcao."
Then ask:
Question 6 (ask alone, one message):
"Quais suposicoes (assumptions) precisam ser verdadeiras para essas solucoes funcionarem? O que pode invalidar a abordagem?"
Optional — ask only if relevant:
"Alguma solucao de O-[#] precisa de experimento antes de investir? Se sim: qual experimento, como medir, qual criterio de sucesso, e qual duracao?"
Repeat Block 3 for each opportunity.
Question 7 (ask alone, one message):
"Olhando todas as oportunidades juntas: quais sao as mais criticas agora e por que? Quais podem esperar? (Isso nao e a priorizacao oficial — e para sinalizar urgencia relativa.)"
Question 8 (ask alone, one message):
"Quais questoes ainda estao em aberto? O que falta validar antes de ir para o PRD?"
The generated docs/ets/projects/{project-slug}/planning/ost.md contains:
SST Rule: Structured opportunities and candidate solutions ONLY in this document. No other document should redefine the opportunity tree.
O-# Pattern: Opportunities. Format: O-1, O-2, O-3. Each O must:
S-#.# Pattern: Solution candidates under an opportunity. Format: S-1.1, S-1.2, S-2.1. Each S must:
Downstream traceability: O-# → selected S-#.# → PRD-F-# features in prd.md
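A minimal illustration of how the IDs compose (a sketch only; the canonical structure lives in knowledge/template.md):

```markdown
## O-1: New users abandon before completing the first action
Evidence: 68% drop-off at step 3 (baseline.md)

### S-1.1: Guided first-run checklist
### S-1.2: Defer account setup until after the first action
```

In prd.md, a selected direction (e.g. S-1.2) then becomes a PRD-F-# feature, preserving traceability back to O-1.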
- knowledge/template.md for the OST document template and structure.

product-vision.md (BLOCKS):
baseline.md (BLOCKS):
Before marking this document as COMPLETE:
If any check fails → mark document as DRAFT with <!-- STATUS: DRAFT --> at top.
After saving and validating, display:
ost.md saved to `docs/ets/projects/{project-slug}/planning/ost.md`
Status: [COMPLETE | DRAFT]
IDs generated: [list O-# and S-#.# IDs]
Opportunities: [count] | Solutions: [count] | Experiments: [count]
Then present these options using AskUserQuestion (or as a numbered list if AskUserQuestion is unavailable):
Wait for the user to choose before taking any action. Do not auto-proceed to the next skill.
product-vision.md (BLOCKS), baseline.md (BLOCKS), optionally project-context.md (ENRICHES)

Input: Interview notes from Step 3
Action: Generate the document one major section at a time, using the template from knowledge/template.md. For each section:
Section order:
Output: Approved sections assembled into complete ost.md
Integration: O-# and S-#.# IDs consumed by prd skill to generate PRD-F-#
1. Ensure docs/ets/projects/{project-slug}/planning/ exists — create if missing.
2. Write docs/ets/projects/{project-slug}/planning/ost.md using the Write tool.
3. Report the artifact path (docs/ets/projects/{project-slug}/planning/ost.md) + paths to upstream documents (BLOCKS: docs/ets/projects/{project-slug}/discovery/product-vision.md, docs/ets/projects/{project-slug}/discovery/baseline.md).
4. Tell the user: "Document saved to
docs/ets/projects/{project-slug}/planning/ost.md. The spec reviewer approved it. Please review and let me know if you want any changes before we proceed." Wait for the user's response. If they request changes, make them and re-run the spec review. Only proceed to validation after user approval.
| Error | Severity | Recovery | Fallback |
|---|---|---|---|
| BLOCKS dep missing (product-vision.md) | Critical | Auto-invoke product-vision skill — outcome needs BO-# objectives | Pause until product-vision is available |
| BLOCKS dep missing (baseline.md) | Critical | Auto-invoke baseline skill — evidence needs baseline metrics | Pause until baseline is available |
| BLOCKS dep is DRAFT | Warning | Proceed with available context, noting evidence may be weaker | Add <!-- ENRICHMENT_MISSING: [doc] is DRAFT --> |
| User lists >7 opportunities | Medium | Help group/prioritize: "Can we cluster related items or defer some?" | Accept up to 10 if user insists, but flag as complex |
| Opportunity has no evidence | High | Push for at least 1 data point: "What from the baseline or discovery supports this?" | Mark opportunity as <!-- EVIDENCE: MISSING --> |
| Outcome is an output | High | Challenge and reframe: "What metric improves if this is delivered?" | Do not proceed until outcome is a measurable result |
| Output validation fails | High | Address gaps, re-present sections to user | Mark as DRAFT |
This skill supports iterative quality improvement when invoked by the orchestrator or user.
| Condition | Action | Document Status |
|---|---|---|
| Completeness >= 90% | Exit loop | COMPLETE |
| Improvement < 5% between iterations | Exit loop (diminishing returns) | DRAFT + notes |
| Max 3 iterations reached | Exit loop | DRAFT + iteration log |
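The exit conditions above can be sketched as a small decision function. The thresholds are the documented ones; the status labels mirror the table, and the non-exit label is an assumption:

```python
# Sketch of the quality-loop exit conditions from the table above
def quality_loop_status(completeness, improvement, iteration, max_iterations=3):
    """Return (exit_loop, document_status) for one quality-loop iteration."""
    if completeness >= 90:
        return True, "COMPLETE"
    if improvement < 5:                      # diminishing returns
        return True, "DRAFT + notes"
    if iteration >= max_iterations:
        return True, "DRAFT + iteration log"
    return False, "IN PROGRESS"              # assumed label; keep iterating
```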
- --quality-loop on any skill invocation
- --no-quality-loop to disable (generates once, validates once)