Identifies and stress-tests implicit assumptions in plans, proposals, and decisions. Use when validating roadmaps, reviewing architecture proposals, assessing timelines, or challenging any plan where hidden assumptions could cause failure. Works on any domain — code, business, strategy, operations.
Systematically surfaces and stress-tests assumptions treated as facts but never validated. Most failures trace back to invalid assumptions — catching them early prevents costly mistakes.
### Timeline Assumptions
Assumptions about how long things will take.
- **Red flags**: "This should only take...", "If everything goes well...", "The team can absorb this..."
- **Challenge with**: "Last time you estimated work of this size, how long did it actually take? What changed?"

### Resource Assumptions
Assumptions about team capacity and availability.
- **Red flags**: "We'll hire by Q2...", "The team can support this...", "Sarah can lead this while..."
- **Challenge with**: "What happens if the hire slips a quarter? Who absorbs the work in the meantime?"

### Technical Assumptions
Assumptions about system capabilities and constraints.
- **Red flags**: "The system can handle the load...", "We can integrate easily...", "Our architecture supports..."
- **Challenge with**: "What load have you actually measured? What breaks first when it doubles?"

### Business Assumptions
Assumptions about market, users, and outcomes.
- **Red flags**: "Users want this...", "This will reduce churn by...", "The market will wait..."
- **Challenge with**: "What evidence from real users supports this? What is the impact if demand is half the forecast?"

### External Assumptions
Assumptions about factors outside your control.
- **Red flags**: "The vendor will deliver...", "Regulations won't change...", "The market stays stable..."
- **Challenge with**: "For this to hold, what must the vendor or regulator do on schedule? What is Plan B if they don't?"
Read the plan and flag statements that are treated as facts without validation, starting with anything that matches the red-flag phrasing in the categories above.
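As a rough sketch, this first scan can be approximated by matching red-flag phrasing against the plan text. This is illustrative only: the phrase lists below are the examples from the categories above, and the function name is hypothetical, not part of the skill's interface.

```python
# Naive red-flag scanner: matches known hope-phrases against plan sentences.
# Phrase lists are the example red flags from the category sections above.
RED_FLAGS = {
    "Timeline": ["should only take", "if everything goes well", "can absorb this"],
    "Resource": ["we'll hire by", "can support this", "can lead this while"],
    "Technical": ["can handle the load", "integrate easily", "architecture supports"],
    "Business": ["users want", "reduce churn by", "market will wait"],
    "External": ["vendor will deliver", "regulations won't change", "market stays stable"],
}

def flag_statements(plan_text: str) -> list[tuple[str, str]]:
    """Return (category, sentence) pairs for sentences matching a red flag."""
    hits = []
    for sentence in plan_text.split("."):
        lowered = sentence.lower()
        for category, phrases in RED_FLAGS.items():
            if any(p in lowered for p in phrases):
                hits.append((category, sentence.strip()))
    return hits
```

String matching only surfaces candidates; deciding whether a flagged statement is actually an unvalidated assumption still requires the judgment steps below.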
For each assumption, determine:
| Factor | Assessment |
|---|---|
| Category | Timeline / Resource / Technical / Business / External |
| Stated or Implicit | Was it acknowledged or hidden? |
| Evidence For | What supports it? |
| Evidence Against | What contradicts it? |
| Risk if Wrong | Impact on timeline, cost, success |
| How to Validate | What would prove or disprove it? |
| Verdict | Valid / Questionable / Invalid / Unknown |
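The assessment table above can be sketched as a record plus a simple verdict rule. Field names and the verdict logic are hypothetical illustrations of the table, not part of the skill itself.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the assessment table's factors.
@dataclass
class Assumption:
    statement: str
    category: str            # Timeline / Resource / Technical / Business / External
    implicit: bool           # True if never acknowledged in the plan
    evidence_for: list       # what supports it
    evidence_against: list   # what contradicts it
    risk_if_wrong: str       # impact on timeline, cost, success
    how_to_validate: str     # what would prove or disprove it

def verdict(a: Assumption) -> str:
    """Map the evidence balance onto the table's verdict values."""
    if not a.evidence_for and not a.evidence_against:
        return "Unknown"
    if a.evidence_against and not a.evidence_for:
        return "Invalid"
    if a.evidence_against:
        return "Questionable"
    return "Valid"
```

A real assessment weighs evidence quality, not just its presence, but the mapping shows how each verdict value corresponds to an evidence state.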
For high-risk assumptions, apply these patterns:
**Reality Check** — compare to external data:
"You assume [X]. Industry data shows [Y]. What makes you different?"

**History Test** — compare to past performance:
"You assume [X]. Last time you attempted [similar], it took [Y]. What changed?"

**Stress Test** — push to the failure point:
"You assume [X]. What happens when [stress scenario]?"

**Dependency Audit** — trace the dependencies:
"For [assumption] to be true, what else must also be true?"

**Inverse Test** — consider the opposite:
"If [assumption] is wrong, what's the impact? What's Plan B?"
Focus on assumptions that are high-impact if wrong, cheap to validate, or load-bearing for the rest of the plan.
Red flags that suggest hope rather than evidence: hedged phrasing ("should", "probably", "we expect"), best-case framing with no supporting data, and claims about other people's behavior stated as certainties.
Use assumption-challenger for surfacing hidden assumptions and stress-testing them with evidence. Use antipattern-detector for recognizing known failure patterns. Both run together in /validate.
Before presenting an analysis, verify that every assumption has a category, evidence both for and against, a concrete validation method, and a verdict.
# Assumption Analysis: [Plan Name]
## Summary
- **Total Assumptions Identified**: [Count]
- **High-Risk**: [Count] | **Medium-Risk**: [Count] | **Low-Risk**: [Count]
## Critical Assumptions (Must Validate Before Proceeding)
### Assumption: [Statement]
**Category**: [Type] | **Stated or Implicit**: [Which]
**The Problem**: [Why questionable]
**Evidence For**: [Supporting evidence]
**Evidence Against**: [Counter-evidence]
**If Wrong**: Timeline: [impact] | Cost: [impact] | Success: [impact]
**How to Validate**: [Method and cost/time]
**Verdict**: Valid / Questionable / Invalid / Unknown
---
## Medium-Risk Assumptions (Should Validate)
[Brief analysis for each]
## Low-Risk Assumptions (Monitor)
[List]
## Recommendations
### Before Proceeding
1. [Validation action]
### Risk Mitigation
1. [Mitigation for critical assumptions]
### Contingency Plans Needed
1. [Plan B for each critical assumption]