From skillkit-frameworks
Performs adversarial reviews of brainstorming documents, product plans, and technical architecture: validates claims via 3+ web searches, stress-tests executor capabilities, enforces a minimum of 3 bugs with severity ratings, and offers 3 resolution paths.
`npx claudepluginhub rfxlamia/skillkit --plugin skillkit-frameworks`

This skill uses the workspace's default tool permissions.
Adversarial Review consists of 4 mandatory stages executed sequentially.
Stage 1: Reality vs Claims → web_search + web_fetch (min 3x diverse)
Stage 2: Acceptance Criteria → stress test executor
Stage 3: Mandatory Bug Quota → minimum 3 specific issues
Stage 4: Interactive Resolution → categorization + 3 resolution options
Core principle: A good review isn't about throwing criticism, but about transforming findings into decisions.
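The sequential gate described above can be sketched in Python. This is a minimal illustration, not part of the skill itself: the `run_review` and `stage_fns` names are assumptions; only the four stage names come from the protocol.

```python
# Four mandatory stages, executed strictly in order.
STAGES = [
    "Stage 1: Reality vs Claims",
    "Stage 2: Acceptance Criteria",
    "Stage 3: Mandatory Bug Quota",
    "Stage 4: Interactive Resolution",
]

def run_review(document, stage_fns):
    """Run all four stages sequentially; each stage sees earlier results."""
    results = {}
    for name, fn in zip(STAGES, stage_fns):
        # A stage only runs after the previous one has produced its findings.
        results[name] = fn(document, results)
    return results
```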
Objective: Validate claims and assumptions in the document against real data and libraries.
Angle 1: Library/tech being used → find known limitations, version issues
Angle 2: User pain point → find real UX research or forum complaints
Angle 3: Benchmark/performance → find real data, not marketing claims
Angle 4: Competitor/alternative → find if problem has been solved elsewhere
Angle 5: Production failure → find post-mortems or known failure modes
**Claim:** "[claim from document]"
**Status:** VALID / PARTIAL / INVALID / UNVERIFIED
**Facts:** [data from search]
**Hidden caveat:** [not mentioned in document]
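The claim-record template above maps naturally onto a small data structure. A sketch follows; the `ClaimCheck` class is hypothetical, but the field names and the four status values mirror the template.

```python
from dataclasses import dataclass

# The four statuses defined by the Stage 1 template.
VALID_STATUSES = {"VALID", "PARTIAL", "INVALID", "UNVERIFIED"}

@dataclass
class ClaimCheck:
    claim: str          # the claim quoted from the document
    status: str         # VALID / PARTIAL / INVALID / UNVERIFIED
    facts: str = ""     # data gathered from the searches
    hidden_caveat: str = ""  # caveat not mentioned in the document

    def __post_init__(self):
        # Reject statuses outside the protocol's vocabulary.
        if self.status not in VALID_STATUSES:
            raise ValueError(f"unknown status: {self.status}")
```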
Objective: Assess not just whether the idea can be done, but whether this specific executor can do it.
1. Does the chosen tech stack match the executor's skills?
2. Are there components that require a steep learning curve?
3. Is the time estimate realistic for this executor (not team)?
4. Are there external dependencies outside the executor's control?
5. Is a proof-of-concept needed before committing to production?
**Component:** [component/bet name]
**Verdict:** Executable / Partial / Needs PoC first / Beyond capability
**Reason:** [specific, honest]
**Recommendation:** [concrete steps if there's a gap]
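The verdict template above admits exactly four outcomes, which a helper can enforce. A sketch, assuming a hypothetical `stress_test` function; only the verdict labels come from the protocol.

```python
# The four verdicts defined by the Stage 2 template.
VERDICTS = ("Executable", "Partial", "Needs PoC first", "Beyond capability")

def stress_test(component: str, verdict: str, reason: str) -> dict:
    """Record one executor stress-test result, rejecting unknown verdicts."""
    if verdict not in VERDICTS:
        raise ValueError(f"unknown verdict: {verdict}")
    return {"component": component, "verdict": verdict, "reason": reason}
```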
Objective: Force discovery of non-obvious problems. Prevent reviews that are too soft.
MINIMUM 3 specific issues must be found.
If < 3 found → SYSTEM MUST SEARCH AGAIN in:
- Edge cases: what happens during extreme conditions?
- Performance issues: what happens at scale/high load?
- Architecture violations: is this design consistent?
- Dependency risk: what happens if library changes?
- UX failure mode: what happens when the user does something unexpected?
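The quota rule above is a simple loop: if fewer than 3 issues were found, keep probing the fallback categories until the quota is met. A minimal sketch, assuming a hypothetical `find_issues(source, category)` that returns a list of issue strings; the category names come from the list above.

```python
# Fallback search categories, probed in order when the quota is unmet.
FALLBACK_CATEGORIES = [
    "edge cases",
    "performance issues",
    "architecture violations",
    "dependency risk",
    "UX failure mode",
]

MIN_ISSUES = 3

def enforce_bug_quota(issues, find_issues, source):
    """Keep searching fallback categories until at least 3 issues exist."""
    for category in FALLBACK_CATEGORIES:
        if len(issues) >= MIN_ISSUES:
            break  # quota met, stop probing
        issues.extend(find_issues(source, category))
    return issues
```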
| Level | Definition | Example |
|---|---|---|
| 🔴 Critical | Will kill the product/project if not fixed | Core promise cannot be delivered |
| 🟠 High | Will cause significant problems in production | Performance degradation, bad UX |
| 🟡 Medium | Real issue but has workaround | File clutter, minor inconsistency |
| 🟢 Low | Needs attention but not urgent | Naming convention, minor inefficiency |
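The severity table can double as a sort key so that findings are always reported most-severe first. An illustrative mapping; the rank numbers are an assumption used only for ordering, while the emoji and definitions come from the table.

```python
# rank, emoji, definition — mirrors the severity table.
SEVERITY = {
    "critical": (0, "🔴", "Will kill the product/project if not fixed"),
    "high":     (1, "🟠", "Will cause significant problems in production"),
    "medium":   (2, "🟡", "Real issue but has workaround"),
    "low":      (3, "🟢", "Needs attention but not urgent"),
}

def sort_findings(findings):
    """Order (severity, title) pairs from most to least severe."""
    return sorted(findings, key=lambda f: SEVERITY[f[0]][0])
```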
### [EMOJI] [LEVEL] #N — [Short Title]
**Problem:** [specific description, not generic]
**Trigger scenario:** [when this issue occurs]
**Impact:** [what happens to user/product]
**Why it's dangerous:** [why this isn't an edge case that can be ignored]
Objective: Every finding must have a resolution path, not just criticism.
A. Auto-fix
→ Directly fix the problematic idea/plan
→ Output: new version of the fixed section
→ Suitable for: problems with clear solutions
B. Action Items
→ Checklist to be executed by user
→ Output: [ ] specific item with done criteria
→ Suitable for: problems requiring user decision
C. Deep Dive
→ Detailed problem explanation with concrete examples
→ Output: in-depth analysis + trade-offs
→ Suitable for: problems requiring understanding before deciding
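The three options above can be framed as a routing decision per finding. The heuristic below is hypothetical; only the option names and their "suitable for" criteria come from the protocol.

```python
def resolution_option(has_clear_fix: bool, needs_user_decision: bool) -> str:
    """Route a finding to one of the three Stage 4 resolution paths."""
    if has_clear_fix:
        return "A. Auto-fix"        # output: rewritten version of the section
    if needs_user_decision:
        return "B. Action Items"    # output: checklist with done criteria
    return "C. Deep Dive"           # output: in-depth analysis + trade-offs
```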
| # | Issue | Severity | Primary Recommendation |
|---|-------|----------|------------------------|
| 1 | [title] | 🔴 Critical | [one sentence fix] |
| 2 | [title] | 🟠 High | [one sentence fix] |
# Adversarial Review — [Document/Project Name]
## Stage 1 — Reality vs Claims
[Results from 3+ web searches with diverse angles]
[Status per claim: VALID/PARTIAL/INVALID/UNVERIFIED]
## Stage 2 — Acceptance Criteria
[Stress test per component/bet]
## Stage 3 — Mandatory Bug Quota (minimum 3)
[Bug #1 — Critical/High/Medium/Low]
[Bug #2 — ...]
[Bug #3 — ...]
## Stage 4 — Interactive Resolution
[Per bug: option A / B / C]
### Summary Priority Matrix
| # | Issue | Severity | Primary Recommendation |
|---|-------|----------|------------------------|
## Meta-Conclusion
[One paragraph: what changed the most? What is the first-step priority?]
Do's:
Don'ts:
Trigger phrases:
"run adversarial review protocol for [document]"
"adversarial review this"
"validate this brainstorming hard"
"stress test this plan"
"find weaknesses in [plan/architecture]"