Activate Scout genie to explore opportunities and surface assumptions.
From genie: `npx claudepluginhub elmmly/genie-team --plugin genie`

/discover: Runs a full product discovery cycle, from ideation and assumption mapping to experiment design, for a product or feature idea. Includes checkpoints for prioritization.
Arguments:

- `topic`: what to discover (optional; pulls from the topic queue if omitted)
- `--assumptions`: focus on assumption mapping only
- `--evidence`: focus on evidence gathering only
- `--feasibility`: include an Architect feasibility check
- `--fast`: prioritize speed over exhaustive analysis
- `--workshop`: interactive multi-phase discovery workshop with HTML artifacts

When $ARGUMENTS is empty, scan docs/topics/ for files with `status: pending` in frontmatter. If found, pick the highest-priority one (by `priority` field, then oldest `created` date) and process it as a topic file (see below). If no pending topics exist, report: "No pending topics found. Provide a topic: /discover [topic]"
When $ARGUMENTS is a plain-text topic string, use it directly as the discovery topic.
When $ARGUMENTS is a path to a .md file (e.g., docs/topics/20260227-auth-reliability.md):
- `title` frontmatter field → use as the discovery topic
- `context` frontmatter field → include as background evidence

On completion, update the topic file: set `status: done` and add `result_ref: docs/analysis/YYYYMMDD_discover_{topic}.md`.

Topic files are loose sketches: ideas for the genie team to discover from, not specs. The discovery may lead somewhere the topic author didn't anticipate. Scout should follow the evidence, not the topic's premise. The Opportunity Snapshot is the authoritative output; the topic file is just the door that opened the exploration.
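Per the conventions above, a pending topic file might look like this (every field value below is illustrative, not prescribed):

```markdown
---
title: Auth reliability
status: pending
priority: 1
created: 2026-02-27
context: Support tickets show repeated SSO login failures after token expiry.
---

Loose sketch: sign-in feels flaky. Is the problem token lifetime, the SSO
integration, or something else entirely? Worth a discovery pass.
```

With no arguments, `/discover` would pick this file up once it is the highest-priority pending topic.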
Fast mode (`--fast`): When $ARGUMENTS contains --fast, prioritize speed over exhaustive analysis. Use heuristic judgment, skip the thorough counter-evidence search, and produce concise output. Set `reasoning_mode: fast` in the Opportunity Snapshot frontmatter.
When $ARGUMENTS does NOT contain --fast (default): Use deep reasoning mode. Follow all Deep Reasoning directives in the Scout agent definition. Set reasoning_mode: deep in the Opportunity Snapshot frontmatter.
Scout - Discovery specialist combining:
READ (automatic):
RECALL (if topic matches past work):
WRITE:
UPDATE:
Set `status: done`, add `result_ref: {snapshot_path}`, add `completed: {date}`.

Produces an Opportunity Snapshot containing:
| Command | Purpose |
|---|---|
| `/discover:assumptions [topic]` | Assumption mapping only |
| `/discover:evidence [topic]` | Evidence gathering only |
| `/discover:feasibility [topic]` | Include Architect feasibility |
/discover "user authentication improvements"
> [Scout produces Opportunity Snapshot]
> Saved to docs/analysis/20251203_discover_auth.md
>
> Key findings:
> - Users frustrated with SSO login failures
> - Token expiry too aggressive
> - No refresh token mechanism
>
> Next: /handoff discover shape
/discover:feasibility "real-time notifications"
> [Scout + Architect collaboration]
> Opportunity identified + technical feasibility assessed
After discovery:
- `/handoff discover shape`
- `/discover:feasibility`

Workshop mode (`--workshop`)

MANDATORY: When $ARGUMENTS contains --workshop, you MUST follow the interactive workshop phases below instead of producing a batch Opportunity Snapshot. The final output is identical (an Opportunity Snapshot saved to docs/analysis/), but the user participates in key discovery decisions along the way.
When $ARGUMENTS does NOT contain --workshop, ignore this entire section and follow the standard batch flow above.
MANDATORY: You MUST produce a viewable HTML artifact. Do NOT describe the landscape in a text table.
Build deep empathy for the problem space — who are the people, what are they trying to accomplish, what exists today, and what forces shape their world. Go beyond surface-level market data to understand the emotional and functional reality of the people involved.
Read the topic from $ARGUMENTS (strip the --workshop flag). Load all context per the standard Context Loading section above.
Research the topic:
- Use WebSearch and WebFetch to gather real context: competitors, market size, trends, customer reviews, forum discussions, user complaints. Look for how users describe frustrations and workarounds in their own language.
- Use Read/Grep/Glob to scan the project for patterns, pain points, and usage data.

Write an HTML file to the scratchpad directory using the Write tool:
File path: {scratchpad}/workshop/landscape-scan.html
The HTML file MUST be self-contained (inline CSS, no external dependencies) and show a four-section landscape map:
Layout: Clean card-based sections with clear headers. Light background, readable typography. Cards should be at least 250px wide.
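As a sketch of what such a self-contained file could contain (the four section names below are inferred from the empathy goals above — people, jobs, what exists today, shaping forces — and are assumptions, not a prescribed list):

```html
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<style>
  /* Inline CSS only, so the file opens with no external dependencies */
  body  { font-family: sans-serif; background: #f8fafc; margin: 2rem; }
  .card { min-width: 250px; background: #fff; border-radius: 8px;
          padding: 1rem 1.5rem; margin-bottom: 1rem;
          box-shadow: 0 1px 3px rgba(0,0,0,0.1); }
  h2    { margin-top: 0; font-size: 1.1rem; }
</style>
</head>
<body>
  <section class="card"><h2>People and jobs-to-be-done</h2>
    <p>Who is involved and what they are trying to accomplish.</p></section>
  <section class="card"><h2>Existing solutions</h2>
    <p>What exists today, including workarounds.</p></section>
  <section class="card"><h2>Pain points</h2>
    <p>Frustrations captured in users' own language.</p></section>
  <section class="card"><h2>Forces and trends</h2>
    <p>Market, technical, and regulatory forces shaping this space.</p></section>
</body>
</html>
```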
Tell the user to open the file: open {scratchpad}/workshop/landscape-scan.html
Use AskUserQuestion: "How does this landscape look?" with options:
If user requests changes → regenerate the HTML with adjustments and tell user to refresh
Output: Locked landscape understanding for the discovery.
MANDATORY: You MUST produce a viewable HTML artifact. Do NOT describe the opportunity tree in a text table.
Organize the landscape findings into an opportunity tree — desired outcomes at the top, opportunities (problem spaces) branching below. Start broad: generate a wide field of opportunities, then help the user converge on the most promising areas.
Read the landscape scan decisions from Phase 1
Identify the user's desired outcomes from the JTBD in Phase 1. Frame as changes in the user's world, not product capabilities.
Go broad: Map a wide set of opportunities (unmet needs, pain points, friction areas, delight gaps) under each outcome. Include non-obvious and adjacent opportunities alongside the obvious ones. Aim for 3-7 opportunities per outcome.
For each opportunity, note the evidence strength from Phase 1 research.
Do NOT include solutions — opportunities are problem statements. If an opportunity sounds like a feature, reframe it as the underlying user need.
Highlight the 3-5 opportunities with the strongest signal (evidence + impact) to guide narrowing.
Write an HTML file to the scratchpad directory using the Write tool:
File path: {scratchpad}/workshop/opportunity-tree.html
The HTML file MUST be self-contained (inline CSS) and show a hierarchical tree visualization:
Tree rendered with CSS indentation and connector lines (pure HTML/CSS, not Mermaid). Expand/collapse via <details> elements for manageable information density.
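One way the collapsible node markup could look (node text and styling are illustrative):

```html
<details open>
  <summary>Outcome: Sign-in works reliably on the first try</summary>
  <div style="margin-left: 1.5em; border-left: 2px solid #cbd5e1; padding-left: 1em;">
    <details>
      <summary>Opportunity: SSO failures interrupt login</summary>
      <p>Evidence: strong (support tickets, forum complaints)</p>
    </details>
    <details>
      <summary>Opportunity: Re-authenticating mid-task breaks flow</summary>
      <p>Evidence: moderate (usage data)</p>
    </details>
  </div>
</details>
```

Nesting `<details>` this way gives expand/collapse for free in any browser, with no JavaScript.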
Tell the user to open the file: open {scratchpad}/workshop/opportunity-tree.html
Use AskUserQuestion: "How does this opportunity tree look?" with options:
If user requests changes → regenerate the HTML with adjustments and tell user to refresh
Output: Locked opportunity tree for the discovery.
MANDATORY: You MUST produce a viewable HTML artifact. Do NOT describe assumptions in a text table.
For the opportunities in the approved tree, surface the assumptions that underpin them and rank by risk. Every opportunity rests on assumptions — this phase makes them explicit so the team knows what to test first. Organize assumptions into four product risk categories to ensure discovery doesn't fixate on one dimension.
For each opportunity in the approved tree, identify assumptions across all four product risk types:
Assess each assumption's evidence level: Strong, Moderate, Weak, Missing.
Assess each assumption's impact if wrong: High, Medium, Low.
Rank by risk priority: impact_if_wrong * inverse_evidence_level.
If any of the four risk categories is empty, note why (e.g., "No viability assumptions surfaced — is the business model out of scope?").
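The ranking rule in the steps above can be sketched numerically. The label-to-weight mappings below are assumptions for illustration; the command only fixes the labels and the product formula:

```python
# Illustrative scoring for: risk = impact_if_wrong * inverse_evidence_level.
# Less evidence -> higher inverse weight -> higher risk -> test first.
IMPACT = {"High": 3, "Medium": 2, "Low": 1}
INVERSE_EVIDENCE = {"Strong": 1, "Moderate": 2, "Weak": 3, "Missing": 4}

def risk_score(impact_if_wrong: str, evidence_level: str) -> int:
    """Higher score means riskier assumption."""
    return IMPACT[impact_if_wrong] * INVERSE_EVIDENCE[evidence_level]

# Hypothetical assumptions: (statement, impact if wrong, evidence level)
assumptions = [
    ("Users will adopt SSO",        "High",   "Missing"),
    ("Token expiry causes churn",   "Medium", "Weak"),
    ("Refresh tokens are feasible", "Low",    "Strong"),
]
ranked = sorted(assumptions, key=lambda a: risk_score(a[1], a[2]), reverse=True)
```

Here "Users will adopt SSO" ranks first: high impact with no evidence lands it squarely in the "Test First" zone.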
Write an HTML file to the scratchpad directory using the Write tool:
File path: {scratchpad}/workshop/assumption-matrix.html
The HTML file MUST be self-contained (inline CSS) and show a two-axis risk matrix:
Below the matrix: an ordered Priority List of assumptions ranked by risk score, with the top-left-quadrant items first.
Tell the user to open the file: open {scratchpad}/workshop/assumption-matrix.html
Use AskUserQuestion: "How does this assumption matrix look?" with options:
If user requests changes → regenerate the HTML with adjustments and tell user to refresh
Output: Locked assumption matrix for the discovery.
MANDATORY: You MUST produce a viewable HTML artifact. Do NOT describe the evidence plan in a text table.
For the highest-risk assumptions ("Test First" zone), design the cheapest, fastest experiments to learn. Bias toward experiments that produce real evidence quickly — the fastest path to evidence wins. Match discovery techniques to risk type.
Take the top 3-5 assumptions from the "Test First" zone.
For each, define what "validated" looks like and what "invalidated" looks like.
Design 2-3 evidence-gathering approaches per assumption, matched to risk type:
Estimate effort per approach: Quick (hours), Medium (days), Extended (weeks). If an Extended approach exists alongside a Quick one, flag the Quick option as recommended.
For each assumption, define a success metric — what quantitative or qualitative signal would change the team's confidence level.
Write an HTML file to the scratchpad directory using the Write tool:
File path: {scratchpad}/workshop/evidence-plan.html
The HTML file MUST be self-contained (inline CSS) and show a card-per-assumption layout:
Tell the user to open the file: open {scratchpad}/workshop/evidence-plan.html
Use AskUserQuestion: "How does this evidence plan look?" with options:
If user requests changes → regenerate the HTML with adjustments and tell user to refresh
Output: Locked evidence plan for the discovery.
Produce the standard Opportunity Snapshot incorporating all decisions from Phases 1-4. This snapshot is the beginning of an ongoing discovery practice, not a finished research report.
- Save to docs/analysis/YYYYMMDD_discover_{topic}.md using the Opportunity Snapshot template from agents/scout.md.
- Re-run /discover --workshop when the landscape shifts significantly or new opportunities emerge from experiments.

Output: Standard Opportunity Snapshot, identical in format to batch /discover. Downstream /define consumes it the same way.
ARGUMENTS: $ARGUMENTS