From claude-swe-workflows
Generates divergent ideas for achieving goals: validates assumptions, spawns isolated brainstormers using first-principles, working-backwards, and analogical techniques, then synthesizes them into an idea report. No code.
npx claudepluginhub chrisallenlane/claude-swe-workflows --plugin claude-swe-workflows

This skill uses the workspace's default tool permissions.
Generates candidate approaches for achieving a goal. Uses parallel brainstormers each applying a different technique in isolation (to avoid anchoring), then synthesizes the pool into a catalog of ideas. The skill is purely *generative* — evaluation, choice, and critique belong to `/think-deliberate` and `/think-scrutinize`.
This skill produces no tangible artifacts. It is a consultant, not an implementer. No code, no tickets, no commits. The output is a structured catalog of ideas the user can pick from.
Judge (you, running this skill): clarifies the goal, validates assumptions, selects techniques, dispatches brainstormers, and synthesizes their output into the final report.
Brainstormers: Each receives a specific technique (first-principles, working-backwards, lateral, analogical, constraints-shift, worst-possible-idea, six-hats-green, SCAMPER) and generates ideas within that mode, in isolation from other brainstormers.
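The roles above can be sketched as a small data structure. This is an illustrative model only, not the skill's actual internals: the one-line glosses are my own summaries of each standard technique, and the `Brainstormer` class name is hypothetical.

```python
from dataclasses import dataclass, field

# My own one-line summaries of the standard techniques, not the skill's prompts.
TECHNIQUES = {
    "first-principles": "Strip the goal to its fundamentals and rebuild from scratch.",
    "working-backwards": "Start from the desired end state and reason back to the present.",
    "lateral": "Make deliberate sideways jumps away from the obvious framing.",
    "analogical": "Import solutions from other domains with a similar problem shape.",
    "constraints-shift": "Remove, invert, or exaggerate a constraint and see what opens up.",
    "worst-possible-idea": "Generate terrible ideas, then invert them into useful ones.",
    "six-hats-green": "Pure creative mode: novelty without judgment.",
    "scamper": "Substitute, Combine, Adapt, Modify, Put to another use, Eliminate, Reverse.",
}

@dataclass
class Brainstormer:
    technique: str   # one key from TECHNIQUES
    brief: str       # the clarified goal brief (the only shared input)
    ideas: list[str] = field(default_factory=list)  # generated in isolation
```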
The goal may arrive as:
Produce a written brief of the goal as you understand it. Brainstormers operate on this brief. Ambiguity here corrupts everything downstream.
This is a dedicated phase, not opportunistic. Before any generation, extract the assumptions the goal depends on — both stated and unstated — and validate them with the user.
Look for:
Present findings to the user. For each assumption, ask: is this correct, negotiable, or wrong? Update the goal brief based on responses.
Sometimes this phase alone dissolves or reframes the problem. That's a valuable outcome — better to stop here than brainstorm solutions to the wrong problem. If the user wants to proceed anyway, proceed with the refined brief.
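The validation loop above can be sketched as follows. Everything here is an assumed shape, not the skill's implementation: `Assumption` and `validate` are hypothetical names, and `ask_user` stands in for however the orchestrator actually presents each assumption to the user.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    stated: bool              # explicit in the goal, or unstated?
    verdict: str = "pending"  # confirmed / revised / discarded

def validate(brief: str, assumptions: list["Assumption"], ask_user) -> str:
    """Present each assumption to the user and fold responses into the brief."""
    for a in assumptions:
        # The user classifies each assumption: correct, negotiable, or wrong.
        a.verdict = ask_user(f"Is this correct, negotiable, or wrong? {a.text}")
        if a.verdict in ("revised", "discarded"):
            # Record the change so brainstormers see the refined brief.
            brief += f"\n(note: assumption {a.verdict}: {a.text})"
    return brief
```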
Select 3-6 techniques from the palette based on the goal's shape. The orchestrator decides autonomously — the user does not pick techniques.
Available techniques:
Selection heuristics:
Irrelevant techniques are dropped, not forced. Better 3 fitted techniques than 7 forced ones.
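The selection rule can be sketched as a scoring pass that drops zero-fit techniques rather than forcing them. The tag sets and `fit_score` heuristic below are toy placeholders; the document does not specify the actual heuristics.

```python
# Toy tags relating techniques to goal shapes (illustrative only).
TAGS = {
    "first-principles": {"novel", "stuck"},
    "working-backwards": {"clear-endstate"},
    "analogical": {"well-trodden-domain"},
    "constraints-shift": {"over-constrained"},
    "lateral": {"stuck"},
}

def fit_score(technique: str, goal_shape: dict) -> int:
    # Count overlapping tags between the technique and the goal.
    return len(TAGS.get(technique, set()) & goal_shape.get("tags", set()))

def select_techniques(goal_shape: dict, palette: list[str], hi: int = 6) -> list[str]:
    """Keep at most `hi` techniques; irrelevant ones (score 0) are dropped, never forced."""
    scores = {t: fit_score(t, goal_shape) for t in palette}
    fitted = sorted((t for t in palette if scores[t] > 0), key=lambda t: -scores[t])
    return fitted[:hi]
```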
Spawn one THK-Brainstormer agent per chosen technique, in parallel. Each receives:
No cross-talk between brainstormers. This is the Nominal Group Technique principle — independent generation first, pooling second. Isolated brainstormers produce more diverse output than coordinated ones (research-backed: open brainstorming anchors on early ideas).
Collect all idea sets.
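The isolation constraint maps naturally onto parallel dispatch with no shared state. In this sketch, `run_brainstormer` is a hypothetical stand-in for dispatching one agent; each worker receives only the brief and its own technique, so early ideas cannot anchor later ones.

```python
from concurrent.futures import ThreadPoolExecutor

def run_brainstormer(technique: str, brief: str) -> list[str]:
    # Stand-in: a real implementation would invoke one isolated agent here.
    return [f"[{technique}] idea for: {brief}"]

def brainstorm(brief: str, techniques: list[str]) -> dict[str, list[str]]:
    """Run one brainstormer per technique in parallel, with no cross-talk."""
    with ThreadPoolExecutor(max_workers=max(1, len(techniques))) as pool:
        # Each future sees only its own inputs; pooling happens after all return.
        futures = {t: pool.submit(run_brainstormer, t, brief) for t in techniques}
        return {t: f.result() for t, f in futures.items()}
```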
Combine the isolated idea sets into a coherent catalog:
5a. Deduplicate — when multiple techniques produced structurally the same idea, merge them (preserve technique attribution from all contributors — that's signal: multiple angles landed here).
5b. Cluster — group related ideas by theme or approach. The clusters are often more interesting than individual ideas.
5c. Construct hybrids — look for cross-technique combinations where two ideas together are stronger than either alone. Example: a first-principles rethink combined with an analogical example that shows how it's been done elsewhere. Flag hybrids as constructed (the orchestrator's contribution, not any single agent's).
5d. Surface standouts — identify the 3-7 most interesting ideas across axes:
5e. Drop weak ideas — ideas that fall apart under basic scrutiny don't belong in the catalog. Don't evaluate rigorously (that's /think-scrutinize), but don't pad either.
5f. Note raised questions — the brainstorming exercise often surfaces questions the user hadn't considered. These are frequently the real insight.
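Step 5a can be sketched as a merge that keeps attribution from every contributing technique. The lowercase-key comparison is a toy placeholder for real structural similarity; function and field names are hypothetical.

```python
def deduplicate(idea_sets: dict[str, list[str]]) -> list[dict]:
    """Merge structurally identical ideas, preserving technique attribution."""
    merged: dict[str, dict] = {}
    for technique, ideas in idea_sets.items():
        for idea in ideas:
            key = idea.strip().lower()  # toy structural-equality key
            entry = merged.setdefault(key, {"idea": idea, "techniques": []})
            entry["techniques"].append(technique)
    # Multiple attributions are signal: several angles landed on the same idea,
    # so surface those first.
    return sorted(merged.values(), key=lambda e: -len(e["techniques"]))
```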
Final report format:
## Brainstorm Report
**Goal:** [one-line clarified goal]
**Techniques applied:** [list]
### Validated Assumptions
- [Assumption] — [user's response: confirmed / revised / discarded]
### Standouts
[3-7 most promising/novel/counterintuitive ideas — no technique attribution.
Each stands on its merit.]
1. **[Name]** — [description]
Why it's a standout: [novel / promising / counterintuitive and why]
### Hybrid Ideas
[Cross-technique combinations the orchestrator constructed. Flag as
synthesized, not generated. Each names the parent techniques.]
- **[Hybrid name]** — [description]
Combines: [technique A's idea] + [technique B's idea]
Why the combination matters: [brief rationale]
### Other Reasonable Ideas
[Remaining ideas worth keeping, clustered by theme. Technique attribution
included in the catalog for transparency.]
**Cluster: [theme]**
- [idea] *(first-principles)*
- [idea] *(lateral)*
...
**Cluster: [theme]**
- ...
### Questions the Exercise Raised
[Often the real insight — questions about the goal, the constraints, the
problem framing that emerged through generation.]
### Suggested Next Steps
- To choose among standouts: `/think-deliberate`
- To stress-test a promising idea: `/think-scrutinize`
- To refine the goal and re-brainstorm: re-invoke `/think-brainstorm`
This skill is one-shot. If the user wants to go deeper on a specific direction, they refine the goal and re-invoke. If they want to choose between standouts, they hand off to /think-deliberate. If they want to stress-test one, /think-scrutinize. Each invocation is a clean consultation.
Good fit:
Poor fit:
`/think-deliberate` · `/think-scrutinize` · `/scope` or `/implement`

Brainstorming is valuable when generation is constrained by convention, anchoring, or exhaustion. The skill formalizes techniques that deliberately break those constraints — and runs them in parallel, isolated, so each produces its own distinct contribution.
The rule is divergence. Evaluation, ranking, and selection happen elsewhere. Here, the goal is to surface possibilities the user hadn't considered — including ones they might initially dismiss.
Osborn's four rules still apply under the hood: defer judgment, quantity over quality, welcome wild ideas, build on others' ideas (the synthesis phase does this for us). But modern research favors Nominal Group Technique over classical group brainstorming — independent generation, then pooling. This skill is NGT with AI.