Runs structured brainstorm and ideation sessions with named frameworks. Matches: "brainstorm ideas for", "help me brainstorm", "brainstorm this", "let's brainstorm", "ideation session", "think through options for", "structured thinking about", "SCAMPER analysis", "six thinking hats on", "pros and cons of", "help me decide between", "generate ideas for", "brainstorm ways to", "explore options for", "what are my options". Do NOT use for: prompt optimization or rewriting (use prompt-master), e.g. "improve my prompt", "optimize this for GPT"; creating visualizations or diagrams (use visualize), e.g. "draw a flowchart", "chart this data", "create a diagram"; general decision-making without ideation (just answer directly), e.g. a simple "should I do X?" with an obvious answer; meeting preparation or debrief (use meeting-prep / meeting-debrief); task management or planning (use task-manager / daily-plan).
Run structured ideation sessions with named frameworks, approval gates at every
phase boundary, and a clear divergent-to-convergent arc. Output follows
${CLAUDE_SKILL_DIR}/template.md and saves to ~/Tandem/creative/brainstorms/.
For framework operational details, load ${CLAUDE_SKILL_DIR}/references/frameworks.md
during Step 2 (framework selection).
Follow the language mirror rule in METHODOLOGY.md: reply in the user's language.
Step 1: Gather context

Before generating any ideas, understand the problem through targeted questions. Ask one question at a time via AskUserQuestion; do NOT batch questions.
Q1: Core problem or opportunity
AskUserQuestion: "What's the core problem or opportunity you're exploring?"
Free text response. Listen for signals about problem type (improving something, choosing between options, multi-angle analysis) to inform framework selection.
Q2: Constraints and must-haves
AskUserQuestion: "What constraints or must-haves should I know about?"
Options: ["None -- wide open", "I'll list them now"]
If user lists constraints, capture them for the output file and to scope ideation.
Q3: Audience or stakeholder
AskUserQuestion: "Who's the audience or stakeholder for this?"
Options: ["Just me -- personal decision", "My team", "External clients/users", "I'll specify"]
This shapes idea framing and evaluation criteria.
After all three questions, synthesize the problem statement internally before proceeding. Do NOT generate ideas yet.
Step 2: Select a framework

Auto-select a framework based on the problem type mapping:
| Problem Signal | Recommended Framework |
|---|---|
| Improving an existing product, process, or workflow | SCAMPER |
| Need to analyze from multiple perspectives, balanced view | Six Thinking Hats |
| Choosing between 2+ defined options, evaluating tradeoffs | Pros/Cons/Risks |
Load framework details from ${CLAUDE_SKILL_DIR}/references/frameworks.md.
Present the recommendation with reasoning via AskUserQuestion:
AskUserQuestion: "Based on your [problem type], I recommend [Framework] because [reason]. Ready to start?"
Options: ["Yes, use [Framework]", "Switch to SCAMPER", "Switch to Six Thinking Hats", "Switch to Pros/Cons/Risks"]
This is Gate 1 -- do NOT proceed without explicit confirmation.
If the user switches frameworks, load the new framework's details from
references/frameworks.md before continuing.
Step 3: Generate ideas (divergent phase)

Generate ideas using the selected framework's structure (typically 5-7; the per-framework counts below take precedence).
SCAMPER: Generate 1 idea per lens (7 lenses = 7 ideas). Each idea applies
one SCAMPER lens to the problem. See references/frameworks.md for lens details.
Six Thinking Hats: Generate 1-2 insights per hat (6 hats = 6-12 insights). Each insight represents one perspective on the problem.
Pros/Cons/Risks: Generate 3-5 items per column (Pros, Cons, Risks) for each option the user is evaluating.
Format each idea as:
**Idea N: [One-line title]**
[2-3 sentence explanation of the idea, how it applies the framework lens, and
why it could work given the stated constraints]
Present all ideas at once after generation. Do NOT present them one at a time.
Step 4: Confirm the idea set (Gate 2)

Present the complete idea set and ask for confirmation before converging:
AskUserQuestion: "Here are [N] ideas using [Framework]. Ready to evaluate, or want me to adjust?"
Options: ["Evaluate these", "Generate 3 more", "Drop idea [N] and replace", "I want to modify the direction"]
This is Gate 2 -- do NOT proceed to convergent evaluation without explicit confirmation.
If the user asks for more ideas or modifications, fulfill the request and re-present Gate 2 with the updated idea set.
Step 5: Evaluate ideas (convergent phase)

Transition explicitly to the convergent phase. Make the shift visible to the user.
5a: Establish evaluation criteria
Suggest 3 default criteria based on problem type:
| Problem Type | Suggested Criteria |
|---|---|
| Product/process improvement | Feasibility, Impact, Effort |
| Multi-perspective analysis | Alignment, Risk, Novelty |
| Option evaluation | Cost, Benefit, Risk |
AskUserQuestion: "I suggest evaluating ideas on [Criterion 1], [Criterion 2], and [Criterion 3]. Want different criteria?"
Options: ["Use these criteria", "I'll suggest my own", "Add a 4th criterion"]
5b: Rate each idea
Rate every idea against each criterion using High / Medium / Low.
Present as a table:
| Idea | [Criterion 1] | [Criterion 2] | [Criterion 3] | Overall |
|------|----------------|----------------|----------------|---------|
| 1. [Title] | High | Medium | Low | Medium |
| 2. [Title] | High | High | Medium | High |
...
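One possible way to derive the Overall column and the sort order for 5c is to map High/Medium/Low to 3/2/1 and average. The skill text does not mandate an aggregation rule, so this mapping is an assumption; it does, however, reproduce the two example rows in the table above.

```python
SCORE = {"High": 3, "Medium": 2, "Low": 1}
LABEL = {3: "High", 2: "Medium", 1: "Low"}

def overall(ratings: list[str]) -> str:
    """Average the per-criterion scores and round to the nearest label."""
    avg = sum(SCORE[r] for r in ratings) / len(ratings)
    return LABEL[round(avg)]

def rank(ideas: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Return (title, overall) pairs sorted best-first by average score."""
    return sorted(
        ((title, overall(rs)) for title, rs in ideas.items()),
        key=lambda pair: -sum(SCORE[r] for r in ideas[pair[0]]) / len(ideas[pair[0]]),
    )
```

For the example rows above: High/Medium/Low averages to 2.0 (Medium), and High/High/Medium averages to about 2.67 (High), matching the table.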
5c: Present ranked results
Sort by overall rating. Present the top 3:
"Here are the top 3 based on [criteria]. Which resonates most?"
AskUserQuestion: "Which ideas do you want to keep as your top picks?"
Options: ["Top 3 as ranked", "Just #1", "I'll pick my own set", "Re-evaluate with different criteria"]
Step 6: Compile the summary (Gate 3)

Compile the full brainstorm summary using ${CLAUDE_SKILL_DIR}/template.md.
Present it to the user for final confirmation:
AskUserQuestion: "Here's your brainstorm summary with [N] top picks. Save this?"
Options: ["Save as-is", "Edit before saving", "Add more detail to top picks"]
This is Gate 3 -- confirm before writing any file.
If user wants edits, apply them and re-present Gate 3.
Step 7: Save and log

1. Save brainstorm file:
Write to ~/Tandem/creative/brainstorms/brainstorm-YYYY-MM-DD-slug.md using
the template format. The slug is: lowercase problem title, spaces to hyphens,
strip special characters, truncate to 40 characters.
Create the directory if it does not exist.
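The slug and filename rules above can be sketched as a small helper. This is a sketch of the stated transformation (lowercase, spaces to hyphens, strip special characters, truncate to 40), not a mandated implementation; the function names are illustrative.

```python
import re
from datetime import date

def make_slug(title: str, max_len: int = 40) -> str:
    """Lowercase, spaces to hyphens, strip special characters, truncate."""
    slug = title.lower().replace(" ", "-")
    slug = re.sub(r"[^a-z0-9-]", "", slug)  # strip anything not alphanumeric or hyphen
    return slug[:max_len]

def brainstorm_filename(title: str) -> str:
    """brainstorm-YYYY-MM-DD-slug.md with today's date."""
    return f"brainstorm-{date.today().isoformat()}-{make_slug(title)}.md"
```

For example, "Improve Onboarding Flow!" becomes the slug "improve-onboarding-flow".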
2. Log to stats.json:
Read ~/Tandem/stats.json. If it does not exist, create it as [].
Parse the JSON array, append a new entry, and write it back:
{
"type": "brainstorm",
"action": "completed",
"count": 1,
"timeSavedMinutes": 15,
"description": "Brainstorm: [slug] via [framework]",
"timestamp": "<current ISO 8601 UTC>"
}
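The read-parse-append-write cycle can be sketched as follows. Paths, field values, and the 15-minute figure follow the spec above; the function name and the minimal error handling are assumptions for illustration.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_brainstorm(slug: str, framework: str,
                   stats_path: Path = Path.home() / "Tandem" / "stats.json") -> None:
    """Append a completed-brainstorm entry to the stats.json array."""
    # If the file does not exist, start from an empty array.
    entries = json.loads(stats_path.read_text()) if stats_path.exists() else []
    entries.append({
        "type": "brainstorm",
        "action": "completed",
        "count": 1,
        "timeSavedMinutes": 15,
        "description": f"Brainstorm: {slug} via {framework}",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    stats_path.parent.mkdir(parents=True, exist_ok=True)
    stats_path.write_text(json.dumps(entries, indent=2))
```

Repeated calls keep appending, so the array preserves the full history across sessions.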
3. Run /sync:
After appending to stats.json, follow the /sync workflow from
tandem-skills/core/sync/SKILL.md to rebuild ~/Tandem/dashboard.html with
updated statistics. If /sync fails, continue -- the brainstorm file is the
primary deliverable.
4. Confirm to user:
Report the saved file path: "Brainstorm saved to ~/Tandem/creative/brainstorms/brainstorm-YYYY-MM-DD-slug.md"
Step 8: Offer handoffs

After confirming the save, offer one-time cross-skill handoffs if the brainstorm output lends itself to visualization or prompt creation:
AskUserQuestion: "Your brainstorm is saved. Would you like to take it further?"
Options: ["Visualize as mind map", "Create a prompt from the top idea", "Done"]
Rules:
Memory is user-triggered only. Offer to remember: