Orchestrates a council of 3-5 AI expert personas to debate and synthesize diverse viewpoints on product questions for strategic decisions, competitive analysis, and feature prioritization.
Install: `npx claudepluginhub bayramannakov/ai-native-product-skills --plugin prototype`

This skill uses the workspace's default tool permissions.
You are orchestrating a panel of AI expert personas to evaluate a question from multiple angles. The goal is adversarial synthesis - diverse perspectives that challenge each other, not consensus that papers over disagreements.
The user provides a question or topic via $ARGUMENTS. You simulate a council of expert personas, each providing an independent evaluation, then synthesize their views - highlighting both agreement AND disagreement.
Choose 3-5 personas based on the question type. For each persona, define: a name, the lens they evaluate through, a known bias (disclosed up front), and a communication style (their voice).
Present the panel to the user:
"I've assembled a council of [N] perspectives for your question. Here's who's at the table:" [Table: Name | Lens | Known bias]
Preferred: Use sub-agents for truly independent perspectives (each sub-agent has its own context window, preventing groupthink). Spawn them in parallel when possible.
Fallback: If sub-agents are unavailable (Desktop app, token limits, or user preference), generate each evaluation independently by completing one fully before starting the next. Explicitly avoid letting earlier evaluations influence later ones - treat each as if written by a different person.
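A sketch of the two execution modes, assuming two hypothetical host calls (`spawn_subagent` and `generate_inline` are placeholders for whatever the environment provides, not real APIs):

```python
from concurrent.futures import ThreadPoolExecutor

def spawn_subagent(prompt: str) -> str:
    """Hypothetical host call: run one evaluation in its own context window."""
    raise NotImplementedError

def generate_inline(prompt: str) -> str:
    """Hypothetical fallback: the orchestrator writes this evaluation itself."""
    raise NotImplementedError

def run_panel(prompts: list[str], subagents_available: bool) -> list[str]:
    if subagents_available:
        # Preferred: parallel sub-agents; isolated contexts prevent groupthink
        with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
            return list(pool.map(spawn_subagent, prompts))
    # Fallback: strictly sequential, finishing each evaluation completely
    # before starting the next, as if written by a different person
    return [generate_inline(p) for p in prompts]
```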
For each persona, use this evaluation prompt (either as sub-agent instruction or as your own generation frame):
You are [PERSONA NAME], evaluating this question: "[USER'S QUESTION]"
Context about the product/team:
[PASTE THE 3-5 SENTENCE CONTEXT SUMMARY FROM STEP 0]
Your lens: [THEIR LENS]
Your known bias: [THEIR BIAS]
Your communication style: [THEIR VOICE - e.g., "blunt and impatient",
"formal and measured", "provocative and contrarian", "empathetic but direct"]
IMPORTANT: Be honest, not diplomatic. If this is a bad idea, say so.
If part of it is wrong, say which part and why. Do NOT hedge.
Consider whether the question itself is the right question to ask.
Provide your independent evaluation:
1. Your verdict (1-2 sentences, take a CLEAR position - no hedging)
2. Your reasoning (3-5 bullet points, grounded in the specific context above)
3. What everyone else will miss (your unique angle that others won't see)
4. The biggest risk if they follow YOUR advice (intellectual honesty)
5. One thing that is WRONG with the premise of the question itself
6. Rate: [relevant scale, e.g., Build/Don't Build, 1-10, or Go/No-Go]
Be honest with ratings. A 4/10 is fine. Don't cluster around 7.
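One way to fill the bracketed slots programmatically, as a sketch (it reuses the hypothetical `Persona` record from above and abbreviates the fixed instruction text):

```python
EVAL_PROMPT = """You are {name}, evaluating this question: "{question}"

Context about the product/team:
{context}

Your lens: {lens}
Your known bias: {bias}
Your communication style: {voice}

IMPORTANT: Be honest, not diplomatic. If this is a bad idea, say so.
[... numbered instructions 1-6 from the template above ...]
"""

def build_eval_prompt(persona: Persona, question: str, context: str) -> str:
    """One filled prompt per panelist; the context summary is shared."""
    return EVAL_PROMPT.format(
        name=persona.name,
        question=question,
        context=context,
        lens=persona.lens,
        bias=persona.bias,
        voice=persona.voice,
    )
```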
Collect all evaluations before proceeding to the debate.
Present all evaluations side-by-side, then highlight key agreements, key disagreements, and any outlier ratings.
For the biggest disagreement, present both sides:
The Key Tension:
[Persona A] says: "[their position]"
[Persona B] counters: "[their counter-position]"
Why this matters: [what the disagreement reveals about the decision]
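If each evaluation has been parsed into structured fields, the Verdict Matrix from the output file can be rendered directly; a sketch, assuming a simple dict per evaluation:

```python
def verdict_matrix(evaluations: list[dict]) -> str:
    """Assumes each evaluation was parsed into
    {'persona': ..., 'verdict': ..., 'quote': ...} fields."""
    rows = ["| Persona | Verdict | Key quote |", "| --- | --- | --- |"]
    rows += [f"| {e['persona']} | {e['verdict']} | \"{e['quote']}\" |"
             for e in evaluations]
    return "\n".join(rows)
```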
After presenting the debate, run ONE inline pass as a Meta-Analyst. This is NOT another persona with a lens - it is a structural reviewer who reads all evaluations as a corpus and finds what the entire panel missed.
Important: Run this inline (not as a sub-agent). The Meta-Analyst needs to see all evaluations to detect shared blind spots. Do NOT let it take a position on the original question - only structural critique.
You are a Meta-Analyst reviewing a panel of [N] expert evaluations
on this question: "[QUESTION]"
You have read all evaluations. Your job is NOT to add another opinion.
Your job is to find what the panel collectively missed.
1. SHARED ASSUMPTIONS: What did ALL panelists take for granted without
questioning? List 2-3 implicit assumptions that were never challenged.
2. MISSING PERSPECTIVE: Who should have been at the table but was not?
What viewpoint is entirely absent from the panel?
3. OVERCONFIDENT CLAIMS: Where did the panel agree too easily? Where does
apparent consensus mask insufficient evidence or structural bias in
panel composition?
4. THE UNASKED QUESTION: State the single most important question that
none of the panelists raised - the one that, if answered, could change
the entire recommendation.
Keep it concise. Do not rehash the panelists' arguments.
Only surface what they all missed.
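A sketch of assembling this pass; the essential point is that all evaluations are concatenated into a single prompt (inline, full visibility) rather than dispatched to a sub-agent:

```python
META_PROMPT = """You are a Meta-Analyst reviewing a panel of {n} expert evaluations
on this question: "{question}"

{corpus}

You have read all evaluations. Your job is NOT to add another opinion.
Your job is to find what the panel collectively missed.
[... items 1-4 from the template above ...]
"""

def build_meta_prompt(question: str, evaluations: list[str]) -> str:
    # Inline, not a sub-agent: the Meta-Analyst must see the full corpus
    corpus = "\n\n---\n\n".join(evaluations)
    return META_PROMPT.format(n=len(evaluations), question=question, corpus=corpus)
```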
Present the Meta-Analyst's findings as a distinct section in the output. The Synthesis (next step) must address the Meta-Analyst's top challenge.
Produce a final synthesis that weighs the competing verdicts, takes a clear position with a stated confidence level, and directly addresses the Meta-Analyst's top challenge.
Save as council-[topic-slug].md with this structure:
# Council: [Topic]
Date: [today]
Panel: [N] perspectives
## The Panel
[Table of personas]
## Verdict Matrix
[Quick summary table: Persona | Verdict | Key quote]
## Key Agreements
[Bullet points]
## Key Disagreements
[The tensions, with both sides presented]
## Meta-Analyst Review (Groupthink Check)
- **Shared assumptions:** [What ALL panelists took for granted]
- **Missing perspective:** [Who should have been at the table]
- **Overconfident claims:** [Where consensus masks weak evidence]
- **The unasked question:** [The one question that could change everything]
## Synthesis & Recommendation
[Final recommendation with confidence level]
[Must address: does the recommendation hold if the Meta-Analyst's top assumption is wrong?]
## What Needs Real Validation
[What the council can't answer - includes the Meta-Analyst's unasked question]
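A sketch of the save step; `slugify` is a minimal illustrative helper here, not a library call:

```python
import re
from datetime import date
from pathlib import Path

def slugify(topic: str) -> str:
    """Lowercase, collapse non-alphanumerics to hyphens:
    'Build X or Y for Q2?' -> 'build-x-or-y-for-q2'."""
    return re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")

def save_report(topic: str, body_markdown: str) -> Path:
    """Write the council report using the structure above."""
    path = Path(f"council-{slugify(topic)}.md")
    header = f"# Council: {topic}\nDate: {date.today().isoformat()}\n"
    path.write_text(header + body_markdown, encoding="utf-8")
    return path
```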
/council Should we build feature X or feature Y for Q2?
/council What's the best go-to-market strategy for our new product?
/council Evaluate these 3 pricing models for our SaaS
/council Is [competitor] a real threat or are we overreacting?
/council Should we hire a data analyst or invest in AI analytics tools?
/council Build our own AI features or integrate a third-party AI platform?