From thinking-frameworks-skills
Challenges plans, designs, and decisions via adversarial debate and red teaming to expose blind spots, assumptions, and vulnerabilities. Use for pre-launch reviews and worst-case validation.
Install with `npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills`. This skill uses the workspace's default tool permissions.
Copy this checklist and track your progress:
Deliberation & Red Teaming Progress:
- [ ] Step 1: Define the proposal and stakes
- [ ] Step 2: Assign adversarial roles
- [ ] Step 3: Generate critiques and challenges
- [ ] Step 4: Synthesize findings and prioritize risks
- [ ] Step 5: Recommend mitigations and revisions
Step 1: Define the proposal and stakes
Ask the user for the plan or decision to evaluate (a specific proposal, not a vague idea), the stakes (what happens if it fails), their current confidence level (how certain they are), and the deadline (when the decision must be made). Understanding the stakes helps calibrate critique intensity. See Scoping Questions.
Step 2: Assign adversarial roles
Identify critical perspectives that could expose blind spots. Choose 3-5 roles based on proposal type (security, legal, operations, customer, competitor, etc.). Each role has different incentives and concerns. See Adversarial Role Types and resources/template.md for role assignment guidance.
Step 3: Generate critiques and challenges
For each role, generate specific critiques: What could go wrong? Which assumptions are questionable? What edge cases break this? Be adversarial but realistic (argue against the steelman, not a strawman). For advanced critique techniques, see resources/methodology.md for red-team attack patterns.
Step 4: Synthesize findings and prioritize risks
Collect all critiques, identify themes (security gaps, operational risks, customer impact, etc.), and assess the severity and likelihood of each risk. Distinguish showstoppers (must fix) from acceptable risks (monitor/mitigate). See Risk Prioritization.
Step 5: Recommend mitigations and revisions
For each critical risk, propose concrete mitigation (change the plan, add safeguards, gather more data, or accept risk with monitoring). Present revised proposal incorporating fixes. See Mitigation Patterns for common approaches.
To define the proposal:
To understand stakes:
To calibrate critique:
Choose 3-5 roles that are most likely to expose blind spots for this specific proposal:
- Competitor
- Malicious Actor (Security)
- Regulator/Auditor
- Investigative Journalist
- Operations/SRE
- Customer/User
- Finance/Budget
- Legal/Compliance
- Engineering/Technical
- Pessimist
- Contrarian
- Long-term Thinker
After generating critiques, prioritize by severity and likelihood:
Severity:
- Critical (5): Catastrophic failure (data breach, regulatory fine, business shutdown)
- High (4): Major damage (significant revenue loss, customer exodus, reputation hit)
- Medium (3): Moderate impact (delays, budget overrun, customer complaints)
- Low (2): Minor inconvenience (edge-case bugs, small inefficiency)
- Trivial (1): Negligible (cosmetic issues, minor UX friction)

Likelihood:
- Very Likely (5): >80% chance if we proceed
- Likely (4): 50-80% chance
- Possible (3): 20-50% chance
- Unlikely (2): 5-20% chance
- Rare (1): <5% chance

Thresholds (score = severity × likelihood):
- Showstoppers (score ≥ 15): Must address before proceeding
- High Priority (score 10-14): Should address, or have a strong mitigation plan
- Monitor (score 5-9): Accept the risk but have a contingency
- Accept (score < 5): Acknowledge and move on
| Severity ↓ / Likelihood → | Rare (1) | Unlikely (2) | Possible (3) | Likely (4) | Very Likely (5) |
|---|---|---|---|---|---|
| Critical (5) | 5 (Monitor) | 10 (High Priority) | 15 (SHOWSTOPPER) | 20 (SHOWSTOPPER) | 25 (SHOWSTOPPER) |
| High (4) | 4 (Accept) | 8 (Monitor) | 12 (High Priority) | 16 (SHOWSTOPPER) | 20 (SHOWSTOPPER) |
| Medium (3) | 3 (Accept) | 6 (Monitor) | 9 (Monitor) | 12 (High Priority) | 15 (SHOWSTOPPER) |
| Low (2) | 2 (Accept) | 4 (Accept) | 6 (Monitor) | 8 (Monitor) | 10 (High Priority) |
| Trivial (1) | 1 (Accept) | 2 (Accept) | 3 (Accept) | 4 (Accept) | 5 (Monitor) |
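The scoring rule behind the matrix (severity times likelihood, classified by the thresholds above) can be sketched in a few lines of Python. Function names here are illustrative, not part of the skill itself:

```python
def risk_score(severity: int, likelihood: int) -> int:
    """Multiply the 1-5 severity and likelihood ratings into a 1-25 score."""
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("severity and likelihood must each be 1-5")
    return severity * likelihood

def risk_tier(severity: int, likelihood: int) -> str:
    """Map a score onto the threshold bands defined above."""
    score = risk_score(severity, likelihood)
    if score >= 15:
        return "SHOWSTOPPER"
    if score >= 10:
        return "High Priority"
    if score >= 5:
        return "Monitor"
    return "Accept"

# A Critical (5) risk that is Possible (3) scores 15 and blocks the launch.
print(risk_tier(5, 3))  # SHOWSTOPPER
print(risk_tier(2, 2))  # Accept
```

Reproducing every cell of the matrix this way is a quick sanity check that a hand-filled risk table used the thresholds consistently.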
For each identified risk, choose mitigation approach:
1. Revise the Proposal (Change Plan)
2. Add Safeguards (Reduce Likelihood)
3. Reduce Blast Radius (Reduce Severity)
4. Contingency Planning (Prepare for Failure)
5. Gather More Data (Reduce Uncertainty)
6. Accept and Monitor (Informed Risk)
7. Delay/Cancel (Avoid Risk Entirely)
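The seven approaches above can be sketched as a small data model for tracking each risk with its chosen mitigation. The enum values mirror the list; the example risks and their pairings with mitigations are purely illustrative, not prescribed by the skill:

```python
from dataclasses import dataclass
from enum import Enum

class Mitigation(Enum):
    REVISE = "Revise the Proposal"
    SAFEGUARDS = "Add Safeguards"
    REDUCE_BLAST_RADIUS = "Reduce Blast Radius"
    CONTINGENCY = "Contingency Planning"
    GATHER_DATA = "Gather More Data"
    ACCEPT_AND_MONITOR = "Accept and Monitor"
    DELAY_OR_CANCEL = "Delay/Cancel"

@dataclass
class Risk:
    description: str
    severity: int        # 1-5 scale from Risk Prioritization
    likelihood: int      # 1-5 scale
    mitigation: Mitigation

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

# Work risks highest score first so showstoppers get mitigations before
# lower-priority items. Both example risks are hypothetical.
risks = [
    Risk("Vendor price increase", 3, 2, Mitigation.ACCEPT_AND_MONITOR),
    Risk("Unencrypted PII export", 5, 3, Mitigation.SAFEGUARDS),
]
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.description}: {r.mitigation.value}")
```

Keeping the chosen mitigation next to the score makes it easy to verify that every showstopper has a mitigation stronger than "Accept and Monitor" before the revised proposal is presented.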
Skip red teaming if:
Use instead:
Process:
Common adversarial roles:
Risk prioritization:
Resources:
Deliverable: deliberation-debate-red-teaming.md with critiques, risk assessment, and mitigation recommendations