Find what you don't know you don't know — surface hidden assumptions, failure modes, and missing perspectives
From spell: npx claudepluginhub smileynet/sams-genai-spells --plugin spell

Systematic blind spot detection. Takes a topic — a plan, decision, technology, or research question — and probes it from five angles to surface hidden assumptions, unexamined failure modes, missing perspectives, and cross-domain insights. Produces a structured analysis with honest coverage assessment.
Arguments: $ARGUMENTS (required) - Topic, plan, decision, or question to probe for blind spots
Output: Structured blind spot analysis with findings, pattern map, limitations, and specific next steps, output directly to the conversation (Write is available if the user requests the output be saved to a file)
Before proceeding, load the relevant skill documents for reference:
- docs/skills/assumption-surfacing.md — SAST methodology, importance/certainty matrix, pre-mortem protocol, red team thinking, common blind spot checklist
- docs/skills/unknown-unknowns.md — Johari Window, negative space analysis, cross-domain transfer validation, horizon scanning, signal-to-noise triage, Cynefin categorization

Use Read to load these files from the repository root.
If $ARGUMENTS is empty:
Use AskUserQuestion to ask: "What topic, plan, or decision should I probe for blind spots?"
Provide 3-4 contextual suggestions based on the current codebase or recent conversation context.
Otherwise:
Proceed with $ARGUMENTS as the topic.

Establish a baseline of what is already known before probing for what's missing.
Web research (if available): Use WebSearch to survey the current state of knowledge on the topic:
- "<topic> challenges problems"
- "<topic> risks considerations"
- "<topic> lessons learned"
- "<topic> failure" OR "<topic> postmortem"

Use WebFetch on the most authoritative 2-3 results.
Codebase analysis (if relevant):
Graceful degradation: If web search is unavailable, rely on built-in knowledge and codebase analysis. Note this honestly in the Coverage field.
Output the baseline:
KNOWN TERRITORY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Known knowns: <what is well-established about this topic>
Known unknowns: <open questions the community already recognizes>
User checkpoint: Use AskUserQuestion to ask:
"What's your familiarity with <topic>? This helps me calibrate the analysis."
Options:
Apply five probing techniques. Each technique attacks the topic from a different angle. Track which technique surfaces each finding for the pattern map.
Technique A: Assumption Surfacing
Surface the implicit assumptions underlying the topic, plan, or decision.
Guard against THE FLAT LIST — do not surface assumptions without rating them. Twenty unranked items are useless. The importance/certainty matrix is mandatory.
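The rating step can be sketched as a small script. This is illustrative only; the assumption texts and 1-5 scores below are hypothetical, and the real rating is a judgment call, not arithmetic.

```python
# Rate each surfaced assumption on importance (impact if it's wrong) and
# certainty (confidence it holds), both on a 1-5 scale, then rank.
# High-importance, low-certainty assumptions are the priority quadrant.

assumptions = [
    # (assumption, importance, certainty) — example values only
    ("Traffic will stay under 10k requests/min", 5, 2),
    ("The team can learn the new tool in one sprint", 3, 3),
    ("Vendor pricing will not change mid-contract", 4, 4),
]

def risk(importance, certainty):
    # Invert certainty: the less certain we are, the riskier the assumption.
    return importance * (6 - certainty)

ranked = sorted(assumptions, key=lambda a: risk(a[1], a[2]), reverse=True)
for text, imp, cert in ranked:
    print(f"[importance {imp}, certainty {cert}] {text}")
```

The point of the sort is the output shape: a ranked queue, not a flat list, so the first item is always the assumption most worth probing.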
Technique B: Pre-Mortem
Use prospective hindsight to surface failure modes.
Critical: Use "has failed" language, not "might fail." The certainty framing is the entire technique. "What might go wrong?" triggers self-censoring; "What went wrong?" activates different cognitive processes and increases failure identification by 30% (Mitchell et al., 1989).
Guard against THE POLITE BRAINSTORM — if the failure modes all sound generic and inoffensive ("it might be slow," "users might not adopt it"), the framing isn't working. Push for specific, vivid, plausible disaster scenarios.
Technique C: Cross-Domain Transfer
Import patterns, warnings, and solutions from adjacent fields.
- "<structural pattern> in <adjacent field>"
- "<topic> analogy" OR "<topic> parallels"
- "lessons from <adjacent field> for <topic domain>"

Guard against THE SHINY ANALOGY — reject findings based on surface resemblance. "Uber for X" thinking without structural validation produces impressive-sounding but misleading analogies.
Technique D: Negative Space Analysis
Identify what's conspicuously absent from the discussion.
Guard against THE PROJECTION TRAP — anchor the baseline to external sources (similar projects, industry standards, published checklists), not your own expectations. Absences that reflect your biases rather than genuine gaps are worse than useless — they distract from real blind spots.
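The core of negative space analysis is a set difference against an external baseline. A minimal sketch, where the checklist items and discussed topics are hypothetical placeholders:

```python
# Negative space: compare what the plan actually discusses against a
# baseline checklist drawn from external sources (published checklists,
# similar projects), not from personal expectations.

baseline_checklist = {  # sourced externally — example items only
    "backup strategy", "rollback plan", "cost model",
    "access control", "monitoring", "training",
}
discussed = {"cost model", "monitoring", "access control"}

absences = sorted(baseline_checklist - discussed)
print("Conspicuously absent:", absences)
```

The design choice that matters is where `baseline_checklist` comes from: if it is built from your own expectations, the "absences" just mirror your biases, which is exactly the projection trap.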
Technique E: Contrarian Search
Find substantive critiques and dissenting views.
- "<topic> criticism" OR "<topic> problems" OR "<topic> overrated"
- "<topic> failed" OR "<topic> mistake" OR "<topic> regret"
- "why not <topic>" OR "<topic> alternatives"

Guard against THE DESIGNATED CONTRARIAN — token objections without substance are useless. If the contrarian findings all sound like "well, some people say it's bad," the search isn't working. Push for critiques with genuine reasoning and evidence.
Guard against THE SINGLE SOURCE LEAP — do not treat one contrarian opinion as a confirmed blind spot. Require corroboration from independent sources before escalating.
Filter and organize findings from all five techniques.
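The filtering step can be sketched as grouping findings by theme and counting per-technique yield. The finding records and theme labels below are hypothetical; the real pass is editorial judgment, not code.

```python
from collections import Counter, defaultdict

# Each finding records which technique surfaced it and a theme label.
findings = [
    {"title": "No rollback plan", "technique": "pre-mortem", "theme": "reversibility"},
    {"title": "Assumes stable schema", "technique": "assumption-surfacing", "theme": "data-model"},
    {"title": "Ops burden ignored", "technique": "negative-space", "theme": "operations"},
    {"title": "Migration is one-way", "technique": "contrarian-search", "theme": "reversibility"},
]

# Pattern map: findings that converge on the same theme from different
# techniques are the real signal.
clusters = defaultdict(list)
for f in findings:
    clusters[f["theme"]].append(f["title"])

# Technique yield: how many findings each probe produced.
yield_by_technique = Counter(f["technique"] for f in findings)

for theme, titles in sorted(clusters.items(), key=lambda kv: -len(kv[1])):
    print(f"{theme}: {len(titles)} finding(s): {', '.join(titles)}")
```

Sorting clusters by size puts convergent themes first, which is what the pattern map section of the output template asks for.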
Output the structured analysis directly in the conversation:
BLIND SPOT ANALYSIS: <TOPIC>
══════════════════════════════════════════════════════════════
Coverage: <thorough | moderate | surface> — <explanation of what was and wasn't accessible>
Source: <web research | built-in knowledge | codebase analysis | mixed>
KNOWN TERRITORY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Known knowns: <what is well-established>
Known unknowns: <open questions the community already recognizes>
BLIND SPOTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. <TITLE> [category]
What: <one-sentence description>
Evidence: <what supports this — source, reasoning, or cross-domain analogy>
Impact: <what happens if this blind spot materializes>
Probe: <which technique surfaced this>
2. <TITLE> [category]
What: <one-sentence description>
Evidence: <what supports this — source, reasoning, or cross-domain analogy>
Impact: <what happens if this blind spot materializes>
Probe: <which technique surfaced this>
[...up to 7-10 findings]
PATTERN MAP
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
<Cluster analysis — do multiple findings converge on the same theme?
If three findings point at the same underlying issue from different
angles, that convergence is the real signal.>
Technique yield:
- Assumption surfacing: N findings
- Pre-mortem: N findings
- Cross-domain transfer: N findings
- Negative space: N findings
- Contrarian search: N findings
LIMITATIONS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
<What couldn't be checked. Honest assessment of coverage gaps.
Examples: "No access to internal organizational knowledge,"
"Web search limited to English-language sources,"
"Cross-domain transfer limited to fields in training data."
This analysis reduces the space of unknown unknowns — it does
not eliminate it.>
NEXT STEPS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
<Specific action per finding — not "investigate further" but
what to investigate, where to look, and who to ask.
Each next step must be anchored to a specific finding.>
SOURCES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
- <Source with URL>
Quality checks before output:

Examples:
/spell:blind-spot migrating from PostgreSQL to DynamoDB
/spell:blind-spot our plan to adopt microservices
/spell:blind-spot launching a developer API for our product
/spell:blind-spot switching the team from Scrum to Kanban
/spell:blind-spot using LLMs for automated code review