Use when consolidating discovery findings into an evidence-based report with methodology, key evidence, insights by theme, personas/JTBD, testable hypotheses, and remaining uncertainties. Also triggers on 'discovery report', 'research findings', 'what did we learn', 'evidence summary', 'discovery synthesis', or 'sumário de evidências'.
From pm. Install via `npx claudepluginhub etusdigital/etus-plugins --plugin pm`. This skill uses the workspace's default tool permissions.
Consolidate what was learned during discovery (qualitative and quantitative) and transform scattered information into actionable insights. This document exists so that product decisions do not depend on memory, opinion, or meetings — and to sustain the OST and prioritization with facts.
This is DIFFERENT from project-context. Project-context captures the 5W2H (what are we building). Discovery Report captures the EVIDENCE gathered during research (what did we learn). It does not close scope (that is the PRD) and does not prioritize (that is the prioritization document).
Principle: Evidence before decision.
Reference: .claude/skills/orchestrator/dependency-graph.yaml
BLOCKS (must exist — auto-invoke if missing):
- docs/ets/projects/{project-slug}/discovery/baseline.md — Evidence references baseline metrics; context for the "before" picture.
- docs/ets/projects/{project-slug}/discovery/project-context.md — Scope context needed to know what initiative the discovery covers.

ENRICHES (improves output — warn if missing):
- docs/ets/projects/{project-slug}/discovery/product-vision.md — Vision helps focus the research; BO-# objectives inform which evidence matters most.

Resolution protocol:
1. Read dependency-graph.yaml → discovery-report.requires: [baseline, project-context]
2. Check: do docs/ets/projects/{project-slug}/discovery/baseline.md and docs/ets/projects/{project-slug}/discovery/project-context.md exist, are they non-empty, and are they free of <!-- STATUS: DRAFT -->?
3. If baseline.md fails the check → invoke the baseline skill → wait → continue
4. If project-context.md fails the check → invoke the project-context skill → wait → continue

Position in workflow: After baseline, before product-vision OR in parallel with product-vision. The discovery report and product-vision inform each other — evidence shapes vision, vision focuses research.
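The resolution protocol can be sketched as a shell check. The project-slug value and the sample baseline file below are illustrative, not part of the skill:

```shell
# Check each BLOCKS dependency: it must exist, be non-empty,
# and not carry the <!-- STATUS: DRAFT --> marker at the top.
slug="demo-project"                      # illustrative project-slug
dir="docs/ets/projects/$slug/discovery"
mkdir -p "$dir"
printf '# Baseline\nActivation rate: 40%%\n' > "$dir/baseline.md"  # sample file

for dep in baseline.md project-context.md; do
  f="$dir/$dep"
  if [ ! -s "$f" ]; then
    echo "BLOCKS missing: $dep -> auto-invoke its skill, wait, continue"
  elif head -n 1 "$f" | grep -q 'STATUS: DRAFT'; then
    echo "DRAFT: $dep -> proceed with warning"
  else
    echo "OK: $dep"
  fi
done
```

With only the sample baseline.md present, this prints `OK: baseline.md` and flags project-context.md as missing.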
MANDATORY: This skill MUST write its artifact to disk before declaring complete.
Create the target directory (mkdir -p) if needed.

If the Write fails: Report the error to the user. Do NOT proceed to the next skill.
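The write step might look like this in shell; the paths follow the skill's layout, while the slug and file content are placeholders:

```shell
# Create the directory if needed, then write the artifact and verify it landed.
slug="demo-project"                      # illustrative project-slug
dir="docs/ets/projects/$slug/discovery"
mkdir -p "$dir"
cat > "$dir/discovery-report.md" <<'EOF'
<!-- STATUS: DRAFT -->
# Discovery Report
EOF
# If the write failed, report and stop; do not proceed to the next skill.
[ -s "$dir/discovery-report.md" ] || { echo "Write failed: report to user"; exit 1; }
echo "Saved: $dir/discovery-report.md"
```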
This skill follows the ETUS interaction standard. Your role is a thinking partner, not an interviewer — challenge weak evidence, push for specificity, help distinguish facts from hypotheses from interpretations, and flag when insights lack supporting data. Discovery report work is about synthesis and honesty — these patterns ensure the user builds an evidence-based narrative, not a dump of interviews or a wishful interpretation.
One question per message — Ask one question, wait for the answer, then ask the next. Discovery questions often require the user to revisit notes and data, so give them space. Use the AskUserQuestion tool when available for structured choices.
3-4 suggestions for choices — When the user needs to choose a direction (e.g., how to group themes, which personas to highlight, how to frame hypotheses), present 3-4 concrete options with a brief description of each. Highlight your recommendation. Let the user pick before proceeding.
Propose approaches before generating — Before generating any content section, propose 2-3 approaches with tradeoffs. Example: "I see three ways to organize the insights: (A) by user journey stage — maps findings to funnel steps, (B) by persona — groups findings by who is affected, (C) by theme — clusters related findings regardless of stage or persona. I recommend C because it best surfaces cross-cutting patterns."
Present output section-by-section — Don't generate the full document at once. Present each major section (e.g., Objective, then Method, then Evidence, then Insights by Theme, etc.), ask "Does this capture it well? Anything to adjust?" and only proceed after approval.
Track outstanding questions — If something can't be answered now, classify it as Tier 1 (blocks a decision; needs an owner and a date) or Tier 2 (doesn't block but improves quality).
Multiple handoff options — At completion, present 3-4 next steps as options instead of a single fixed path.
Resume existing work — Before starting, check if the target artifact already exists at the expected path. If it does, ask the user: "I found an existing discovery-report.md at [path]. Should I continue from where it left off, or start fresh?" If resuming, read the document, summarize the current state, and continue from outstanding gaps.
Assess if full process is needed (right-size check) — If the user's input already has well-synthesized evidence with sources, clear insights by theme, and testable hypotheses, don't force the full interview. Confirm understanding briefly and offer to skip directly to document generation. Only run the full interactive process when there's genuine ambiguity or missing synthesis.
Thinking partner behaviors:
This skill reads and writes persistent memory to maintain context across sessions.
On start (before any interaction):
- docs/ets/.memory/project-state.md — know where the project is
- docs/ets/.memory/decisions.md — don't re-question closed decisions
- docs/ets/.memory/preferences.md — apply user/team preferences silently
- docs/ets/.memory/patterns.md — apply discovered patterns

On finish (after saving artifact, before CLOSING SUMMARY):
- project-state.md is updated automatically by the PostToolUse hook — do NOT edit it manually.
- `python3 .claude/hooks/memory-write.py decision "<decision>" "<rationale>" "<this-skill-name>" "<phase>" "<tag1,tag2>"`
- `python3 .claude/hooks/memory-write.py preference "<preference>" "<this-skill-name>" "<category>"`
- `python3 .claude/hooks/memory-write.py pattern "<pattern>" "<this-skill-name>" "<applies_to>"`

The .memory/*.md files are read-only views generated automatically from memory.db. Never edit them directly.
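The argument order can be illustrated with a throwaway stub; the real hook lives at .claude/hooks/memory-write.py, and every value below is hypothetical:

```shell
# Throwaway stub standing in for the real hook, only to show argument order.
printf '%s\n' 'import sys; print("recorded:", sys.argv[1])' > memory-write.py

# decision: <decision> <rationale> <skill-name> <phase> <tags>
# (real calls target .claude/hooks/memory-write.py)
python3 memory-write.py decision \
  "Group insights by theme" "Surfaces cross-cutting patterns" \
  "discovery-report" "discovery" "synthesis,themes"
# preference: <preference> <skill-name> <category>
python3 memory-write.py preference \
  "Present sections one at a time" "discovery-report" "interaction"
```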
Before generating content, challenge the evidence quality with these questions (ask the most relevant 1-2, not all):
The goal is to sharpen the report's quality, not to block progress. If the evidence is solid, acknowledge it and move on quickly.
These anti-patterns come from the team's real experience (KB). Flag them actively during the interview and generation:
Interview dump without synthesis — The report is not a transcript. If sections read like raw interview notes without pattern extraction, push back: "Can we identify the pattern across these responses?"
Writing like a PRD — Discovery reports do not close scope or define requirements. If sections describe "what we will build," redirect: "This is a decision, not a finding. Let's capture what we learned, and decisions go to the PRD."
Premature prioritization — The discovery report does not rank or sequence items. If the user tries to assign P0/P1/P2 here, remind them: "Prioritization is the next document. Here we document all findings without ranking."
Vague terms without measurement — "Significant improvement," "many users complained," "better experience" — always push for specifics: how many, what metric, compared to what baseline.
Confusing fact, hypothesis, and interpretation — Every claim should be labeled: observed fact (data), hypothesis (testable claim), or interpretation (inference from limited data). Mix-ups erode trust in the report.
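A labeled set of claims might look like this in the report (all numbers and sources hypothetical):

```markdown
- Fact: 38% of trial accounts never created a second project (product analytics, Jan–Mar).
- Hypothesis: A guided empty state will raise second-project creation, measured by week-1 project count.
- Interpretation: Users likely stall because the empty state offers no next step (inferred from 4 of 12 interviews).
```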
Load context in this order of priority:
1. If a [context-path] was passed in, read that file directly.
2. Check docs/ets/projects/{project-slug}/state/reports/ for upstream discovery artifacts.
3. Check docs/ets/projects/{project-slug}/discovery/ for existing baseline.md, project-context.md, product-vision.md. Load baseline metrics and scope context.

This interview follows a one-question-at-a-time rhythm. Ask each question alone in one message, wait for the user's answer, then decide whether to ask a follow-up or move forward.
Question 1 (ask alone, one message):
"What is the objective of this discovery? What decision is it meant to enable? What risks do we want to reduce?"
Wait for the answer. Extract: decision to be enabled, risks to be reduced.
Question 2 (ask alone, one message):
"Which research questions did we want to answer? List them as Q1, Q2, Q3..."
Wait for the answer. If fewer than 3 questions, probe: "Is there any other question the team wanted to answer?"
Question 3 (ask alone, one message):
"What method was used? We need to cover: (A) Qualitative sources — interviews (how many, profiles, duration, script), shadowing, support; (B) Quantitative sources — data analyzed, period, segmentations; (C) Other sources — tickets/CS, incidents/logs, technical documentation."
Wait for the answer. For each source, immediately probe:
Question 4 (ask alone, one message):
"What were the main pieces of evidence found? List 5-10 items, each with the concrete evidence (data point, quote, screenshot) and the source/link."
Wait for the answer. For each evidence item, classify: is it a fact, a pattern, or an outlier?
Question 5 (ask alone, one message):
"Group the findings by theme. For each theme: what is the insight, what evidence supports it, what is the likely cause, and what is the implication for the product?"
Wait for the answer. For each theme:
Question 6 (ask alone, one message):
"Did we identify personas or JTBD? If so, for each one: what is the main job, what is the main pain, and what is the success criterion?"
Wait for the answer. This block is optional — if no personas emerged, note it and move on.
Question 7 (ask alone, one message):
"Which hypotheses emerged? Use the format: 'If [action], then [expected result], measured by [metric], from [[baseline]] to [[target]].'"
Wait for the answer. For each hypothesis:
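A hypothesis in this format might read (all values hypothetical):

```markdown
H-1: If we cut checkout from 5 steps to 3, then cart abandonment will drop,
measured by checkout completion rate, from 62% (baseline) to 75% (target).
```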
Question 8 (ask alone, one message):
"What is still uncertain? Classify: Tier 1 = blocks a decision (needs an owner and a date), Tier 2 = doesn't block but improves quality."
Wait for the answer. For each Tier 1 uncertainty, ensure there's a responsible person and date.
The generated docs/ets/projects/{project-slug}/discovery/discovery-report.md contains:
SST Rule: Discovery evidence, method/sample, and insights by theme ONLY in this document. No other document should redefine the evidence base or insight synthesis.
H-# Pattern: Hypotheses. Format: H-1, H-2, H-3. Each H must state an action, an expected result, a metric, and a measurable baseline → target (see the Q7 format).
Downstream traceability: H-# → O-# opportunities in ost.md (insights become opportunities)
See knowledge/template.md for the discovery report document template and structure.

baseline.md (BLOCKS):
project-context.md (BLOCKS):
- ## WHAT, ## WHO, ## WHY sections (or equivalent)

Before marking this document as COMPLETE:
If any check fails → mark document as DRAFT with <!-- STATUS: DRAFT --> at top.
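One way to sketch the completion check in shell; the file content and the two checks are illustrative, while real checks follow the output contract above:

```shell
# Sketch: validate the report, downgrade to DRAFT if a check fails.
f="discovery-report.md"                 # illustrative path
cat > "$f" <<'EOF'
# Discovery Report
H-1: If we simplify onboarding, then activation rises, measured by
activation rate, from 40% (baseline) to 55% (target).
EOF

status="COMPLETE"
grep -q '^H-[0-9]' "$f" || status="DRAFT"       # at least one H-# hypothesis
grep -Eq '[0-9]+%' "$f" || status="DRAFT"       # hypotheses carry measurable values

if [ "$status" = "DRAFT" ]; then
  # Prepend the DRAFT marker at the top of the document.
  printf '<!-- STATUS: DRAFT -->\n' | cat - "$f" > "$f.tmp" && mv "$f.tmp" "$f"
fi
echo "Status: $status"
```

With the sample content above, both checks pass and the script reports COMPLETE without touching the file.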
After saving and validating, display:
discovery-report.md saved to `docs/ets/projects/{project-slug}/discovery/discovery-report.md`
Status: [COMPLETE | DRAFT]
Research questions: [count]
Key evidence items: [count]
Themes with insights: [count]
Hypotheses (H-#): [count]
Tier 1 uncertainties: [count] | Tier 2: [count]
Then present these options using AskUserQuestion (or as a numbered list if AskUserQuestion is unavailable):
Wait for the user to choose before taking any action. Do not auto-proceed to the next skill.
baseline.md (BLOCKS), project-context.md (BLOCKS), optionally product-vision.md (ENRICHES)

Input: Interview notes from Step 3
Action: Generate the document one major section at a time, using the template from knowledge/template.md. For each section:
Section order:
Output: Approved sections assembled into complete discovery-report.md
1. Ensure docs/ets/projects/{project-slug}/discovery/ exists — create if missing.
2. Write docs/ets/projects/{project-slug}/discovery/discovery-report.md using the Write tool.
3. Pass the artifact path (docs/ets/projects/{project-slug}/discovery/discovery-report.md) + paths to upstream documents (BLOCKS: docs/ets/projects/{project-slug}/discovery/baseline.md, docs/ets/projects/{project-slug}/discovery/project-context.md).
4. Tell the user: "Document saved to
docs/ets/projects/{project-slug}/discovery/discovery-report.md. The spec reviewer approved it. Please review and let me know if you want any changes before we proceed." Wait for the user's response. If they request changes, make them and re-run the spec review. Only proceed to validation after user approval.
| Error | Severity | Recovery | Fallback |
|---|---|---|---|
| BLOCKS dep missing (baseline.md) | Critical | Auto-invoke baseline skill — evidence needs baseline metrics | Pause until baseline is available |
| BLOCKS dep missing (project-context.md) | Critical | Auto-invoke project-context skill — scope context is needed | Pause until project-context is available |
| BLOCKS dep is DRAFT | Warning | Proceed with available context, noting evidence may be weaker | Add <!-- ENRICHMENT_MISSING: [doc] is DRAFT --> |
| ENRICHES dep missing (product-vision.md) | Low | Proceed — report can be built without vision targets | Note that report may need revision after vision is defined |
| User cannot provide evidence for a theme | Medium | Mark theme as [[evidence to collect]] with collection plan | Add to Tier 1 gaps if critical, Tier 2 if not |
| All evidence is qualitative only | High | Escalate: "All evidence is qualitative — consider adding quantitative data before proceeding to OST" | Mark as DRAFT, add data collection to recommendations |
| Hypotheses lack measurable targets | High | Push for baseline → target format | Mark hypotheses as <!-- INCOMPLETE: needs baseline/target --> |
| Output validation fails | High | Address gaps, re-present sections to user | Mark as DRAFT |
This skill supports iterative quality improvement when invoked by the orchestrator or user.
| Condition | Action | Document Status |
|---|---|---|
| Completeness >= 90% | Exit loop | COMPLETE |
| Improvement < 5% between iterations | Exit loop (diminishing returns) | DRAFT + notes |
| Max 3 iterations reached | Exit loop | DRAFT + iteration log |
Pass --quality-loop on any skill invocation to enable the loop; pass --no-quality-loop to disable it (generates once, validates once).