Use when conducting research, evaluating feasibility, or exploring options before committing to a direction. Also triggers on 'spike', 'research', 'investigate', 'feasibility', 'POC', 'proof of concept', 'brainstorm', 'explore options', or 'compare alternatives'.
From the pm plugin (`npx claudepluginhub etusdigital/etus-plugins --plugin pm`). This skill uses the workspace's default tool permissions.
Reference: .claude/skills/orchestrator/dependency-graph.yaml
BLOCKS (required — auto-invoke if missing):
- docs/ets/projects/{project-slug}/discovery/opportunity-pack.md — Spikes now start from an explicit question-framing package so the research is grounded in actors, hypotheses, use cases, and open questions.

ENRICHES (improves output — use if available):
- docs/ets/projects/{project-slug}/state/coverage-matrix.yaml — Helps verify that the question was framed with enough coverage before research begins.
- docs/ets/projects/{project-slug}/discovery/project-context.md — Business context may inform which options are viable.
- docs/ets/projects/{project-slug}/discovery/product-vision.md — Vision alignment helps evaluate options against strategic goals.

Resolution protocol:
Use full version when:
Use short version when:
Why this matters: Research findings are valuable beyond the immediate decision. A documented spike saves future teams from repeating the same investigation and captures the reasoning behind strategic choices.
- Write to docs/ets/projects/{project-slug}/spikes/spike-{slug}.md (mkdir -p if needed).
- If the Write fails: report the error to the user and do not proceed.
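The save step amounts to "ensure the directory exists, write the file, stop on failure". A minimal sketch in Python, assuming the skill's output path convention (the function name and signature are illustrative, not part of the plugin):

```python
from pathlib import Path

def save_spike(project_slug: str, spike_slug: str, content: str) -> Path:
    """Write the spike document, creating the spikes/ directory if needed (mkdir -p)."""
    target = Path(f"docs/ets/projects/{project_slug}/spikes/spike-{spike_slug}.md")
    target.parent.mkdir(parents=True, exist_ok=True)  # mkdir -p if needed
    try:
        target.write_text(content, encoding="utf-8")
    except OSError as err:
        # If the write fails: surface the error and do not proceed.
        raise RuntimeError(f"Could not save spike document: {err}") from err
    return target
```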
This skill follows the ETUS interaction standard. Your role is a thinking partner, not an interviewer — suggest alternatives, challenge assumptions, and explore what-ifs instead of only extracting information.
One question per message — Ask one question, wait for the answer, then ask the next. Research questions benefit from deliberate exploration. Use the AskUserQuestion tool when available for structured choices.
3-4 suggestions for choices — When the user needs to choose a research methodology, brainstorm technique, or direction to explore, present 3-4 concrete options with descriptions. Highlight your recommendation.
Propose approaches before generating — Before starting research, propose 2-3 research methodologies with tradeoffs. For example: "desk research" vs. "prototyping" vs. "expert interview" vs. "competitive analysis."
Present output section-by-section — Present findings, options, and recommendation individually. Ask "Does this capture the key finding?" and only proceed after approval.
Track outstanding questions — If research reveals new questions:
Multiple handoff options — At completion, present 3-4 next steps as options (see CLOSING SUMMARY).
Resume existing work — Before starting, check if the target artifact already exists at the expected path. If it does, ask the user: "I found an existing spike document at [path]. Should I continue from where it left off, or start fresh?" If resuming, read the document, summarize the current state, and continue from outstanding gaps.
Assess if full process is needed — If the user's input is already detailed with clear requirements, specific acceptance criteria, and defined scope, don't force the full interview. Confirm understanding briefly and offer to skip directly to document generation. Only run the full interactive process when there's genuine ambiguity to resolve.
This skill reads and writes persistent memory to maintain context across sessions.
On start (before any interaction):
- docs/ets/.memory/project-state.md — know where the project is
- docs/ets/.memory/decisions.md — don't re-question closed decisions
- docs/ets/.memory/preferences.md — apply user/team preferences silently
- docs/ets/.memory/patterns.md — apply discovered patterns

On finish (after saving artifact, before CLOSING SUMMARY):
- project-state.md is updated automatically by the PostToolUse hook — do NOT edit it manually.
- `python3 .claude/hooks/memory-write.py decision "<decision>" "<rationale>" "<this-skill-name>" "<phase>" "<tag1,tag2>"`
- `python3 .claude/hooks/memory-write.py preference "<preference>" "<this-skill-name>" "<category>"`
- `python3 .claude/hooks/memory-write.py pattern "<pattern>" "<this-skill-name>" "<applies_to>"`

The .memory/*.md files are read-only views generated automatically from memory.db. Never edit them directly.
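Conceptually, each memory-write.py call is an append-only insert into memory.db; the markdown views are regenerated from that database. A hypothetical sketch of the decision path, assuming a simple `decisions` table (the table name and columns are illustrative, not the hook's real schema):

```python
import sqlite3

def record_decision(db_path: str, decision: str, rationale: str,
                    skill: str, phase: str, tags: str) -> None:
    """Append one decision row; read-only .memory/*.md views are rebuilt elsewhere."""
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS decisions
           (decision TEXT, rationale TEXT, skill TEXT, phase TEXT, tags TEXT)"""
    )
    con.execute(
        "INSERT INTO decisions VALUES (?, ?, ?, ?, ?)",
        (decision, rationale, skill, phase, tags),
    )
    con.commit()
    con.close()
```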
Document a research investigation, feasibility study, or brainstorm. Spikes still answer the question "Should we?" before committing to "How do we?", but they now start from a question-framing ideation package rather than ad-hoc conversation alone.
Spikes are the entry point for Spike/Research mode. They can feed into any other mode:
Load context in this order of priority:
1. Read docs/ets/projects/{project-slug}/discovery/opportunity-pack.md first.
2. Read docs/ets/projects/{project-slug}/state/coverage-matrix.yaml if it exists.
3. If the user provided a [research question], use it as additional context.

This interview is 4 core questions, asked one at a time. The goal is to scope the research before diving in.
"What question are you trying to answer? Be as specific as possible."
Follow-up probes (ask one at a time only if needed):
"What's the context? Why does this matter now — what triggered this investigation?"
Follow-up probes:
"What would a good answer look like? What level of confidence do you need — a quick directional signal, or a thorough analysis?"
Present research depth options:
"I see three levels of research depth:
- Quick scan (~30 min) — High-level comparison, enough for a directional decision
- Thorough analysis (~2 hours) — Detailed comparison with evidence, sufficient for a confident recommendation
- Deep investigation (~4+ hours) — Exhaustive research with prototyping or experimentation
Based on your timeline, I'd suggest [level]. Which fits your needs?"
"Are there any constraints — budget, technology, team skills, timeline — that would rule out certain options upfront?"
Follow-up probes:
When the research question benefits from creative exploration, offer BMAD CIS techniques. This is the same brainstorm toolkit available in the product-vision skill.
Technique Selection:
Present options tailored to the research context:
"This research question could benefit from a structured brainstorming technique. Here are some options:
SCAMPER (~15 min) — Apply 7 creative prompts (Substitute, Combine, Adapt, Modify, Put to another use, Eliminate, Reverse) to explore alternatives. Best for: finding creative solutions to a known problem.
Reverse Brainstorming (~15 min) — 'How could we make this WORSE?' Then invert each answer. Best for: identifying risks and failure modes in proposed solutions.
Six Thinking Hats (~20 min) — Explore from 6 perspectives (Facts, Emotions, Pessimism, Optimism, Creativity, Process). Best for: evaluating options when stakeholders have different concerns.
5 Whys (~10 min) — Ask 'why' 5 times to drill down to root causes. Best for: validating whether you're researching the right question.
I recommend [technique] because [reason specific to their question]. Want to try one, or jump straight to evaluating options?"
Execution:
Run the selected technique interactively, one prompt at a time. See .claude/skills/discovery/product-vision/knowledge/brainstorm-techniques.md for full catalog and step-by-step execution guides.
Synthesis:
After completing the technique, synthesize outputs into key insights and ask: "What surprised you? What's the most valuable insight?"
Honor the user's choice — brainstorming enriches the spike but is not required.
After scoping, propose 2-3 research methodologies:
"For this question, I'd recommend these research approaches:
A. Desk research — Analyze existing documentation, competitor products, industry benchmarks
B. Prototyping/POC — Build a minimal proof of concept to test feasibility
C. Expert consultation — Leverage domain expertise (yours, team's, or external)
I recommend [A/B/C] because [reason]. Which approach do you want to take?"
For each option discovered during research, document:
Present options in a structured comparison, then propose a recommendation with rationale.
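A structured comparison can be as simple as a weighted scoring matrix. A minimal sketch, where the criteria, weights, and option names are hypothetical examples rather than a prescribed rubric:

```python
def score_options(options: dict[str, dict[str, int]],
                  weights: dict[str, float]) -> list[tuple[str, float]]:
    """Rank options by weighted score (1-5 per criterion), best first."""
    ranked = [
        (name, sum(weights[c] * scores[c] for c in weights))
        for name, scores in options.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Hypothetical comparison from a build-vs-buy spike:
weights = {"feasibility": 0.4, "cost": 0.3, "team_fit": 0.3}
options = {
    "Build in-house": {"feasibility": 3, "cost": 2, "team_fit": 5},
    "Buy SaaS":       {"feasibility": 5, "cost": 3, "team_fit": 3},
}
```

Scoring like this does not make the decision; it surfaces the rationale so the recommendation section can explain why the top option wins.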
The generated docs/ets/projects/{project-slug}/spikes/spike-{slug}.md follows the template in knowledge/template.md.
- knowledge/template.md for the spike document template and standard structure.
- .claude/skills/discovery/product-vision/knowledge/brainstorm-techniques.md for the complete BMAD CIS technique catalog (shared with product-vision skill).

Before marking this document as COMPLETE:
If any check fails → mark document as DRAFT with <!-- STATUS: DRAFT --> at top.
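The status rule is mechanical: pass all checks or carry the DRAFT marker. A minimal sketch (the function is illustrative; the checks themselves come from the validation steps above):

```python
def apply_status(document: str, checks_passed: bool) -> str:
    """Prepend the DRAFT marker when any validation check fails."""
    if checks_passed:
        return document
    return "<!-- STATUS: DRAFT -->\n" + document
```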
After saving and validating, display the summary and offer multiple next steps:
spike-{slug}.md saved to `docs/ets/projects/{project-slug}/spikes/spike-{slug}.md`
Status: [COMPLETE | DRAFT]
Research question: [the question]
Recommendation: [recommended option or "needs more research"]
Decision: [Made / Pending]
What would you like to do next?
1. Create a Feature Brief — Turn this recommendation into a feature spec
2. Start a Product — This is big enough for full Product mode with /orchestrator
3. Share findings — Review the spike with stakeholders
4. Run another spike — Investigate a follow-up question
5. Pause for now — Save and return later
Wait for the user's choice before proceeding. Do not auto-advance.
- Input: $ARGUMENTS or user description + ENRICHES documents (if available)
- Output directory: docs/ets/projects/{project-slug}/spikes/ — create if missing
- Artifact: docs/ets/projects/{project-slug}/spikes/spike-{slug}.md using the Write tool
- Handoff: path to the artifact (docs/ets/projects/{project-slug}/spikes/spike-{slug}.md) + paths to upstream documents (none — spikes have no BLOCKS dependencies)

Then tell the user: "Document saved to docs/ets/projects/{project-slug}/spikes/spike-{slug}.md. The spec reviewer approved it. Please review and let me know if you want any changes before we proceed." Wait for the user's response. If they request changes, make them and re-run the spec review. Only proceed to validation after user approval.
| Error | Severity | Recovery | Fallback |
|---|---|---|---|
| Research question too broad | Medium | Ask user to narrow down — "What specific decision are you trying to make?" | Proceed with broader question, note limitation |
| No viable options found | Medium | Document why all options were rejected, suggest new research direction | Mark as DRAFT with "inconclusive" status |
| User can't decide between options | Low | Suggest criteria-based scoring or time-boxed decision | Document as "decision pending" with what's needed |
| BMAD technique doesn't yield insights | Low | Note "technique completed, no breakthrough insights" and move on | Proceed to option evaluation |
| Output validation fails | High | Mark as DRAFT, flag gaps | Proceed with DRAFT status |
This skill supports iterative quality improvement when invoked by the orchestrator or user.
| Condition | Action | Document Status |
|---|---|---|
| Completeness >= 90% | Exit loop | COMPLETE |
| Improvement < 5% between iterations | Exit loop (diminishing returns) | DRAFT + notes |
| Max 3 iterations reached | Exit loop | DRAFT + iteration log |
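The exit conditions in the table above can be sketched as a small loop. The `improve` and `completeness` callables stand in for the skill's review pass and scoring; they are placeholders, not real APIs:

```python
def quality_loop(document, improve, completeness, max_iters=3):
    """Iterate until COMPLETE (>= 90%), diminishing returns (< 5 pts gained), or 3 passes."""
    score = completeness(document)
    iterations = 0
    while score < 90 and iterations < max_iters:
        document = improve(document)
        iterations += 1
        new_score = completeness(document)
        if new_score - score < 5:
            return document, "DRAFT"  # diminishing returns: exit early
        score = new_score
    # Either the score cleared 90% or the iteration budget ran out.
    return document, ("COMPLETE" if score >= 90 else "DRAFT")
```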