ALWAYS invoke FIRST on ANY "build/create/implement/add feature" request. This skill WRAPS brainstorming - it decides the approach. Present choice BEFORE any brainstorming starts: (1) Brainstorm together step-by-step, OR (2) Omakase - I generate 3-5 best approaches, implement in parallel, tests pick winner. If user picks brainstorming, check if brainstorming skill exists - if yes use it, if no do brainstorming yourself (ask questions, propose options, validate). Also triggers DURING brainstorming on 2+ "not sure"/"don't know" responses. For competing implementations of the same design, use cookoff.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Chef's choice exploration - when you're not sure WHAT to build, explore different approaches in parallel.
Part of Test Kitchen Development:
- omakase-off - Chef's choice exploration (different approaches/plans)
- cookoff - Same recipe, multiple cooks compete (same plan, multiple implementations)

Core principle: Let indecision emerge naturally during brainstorming, then implement multiple approaches in parallel and let real code + tests determine the best solution.
This skill orchestrates other skills. Check what's installed and use fallbacks if needed.
| Reference | Primary (if installed) | Fallback (if not) |
|---|---|---|
brainstorming | superpowers:brainstorming | Ask questions one at a time, propose 2-3 approaches, validate incrementally |
writing-plans | superpowers:writing-plans | Write detailed plan with file paths, code examples, verification steps |
git-worktrees | superpowers:using-git-worktrees | git worktree add .worktrees/<name> -b <branch>, verify .gitignore |
parallel-agents | superpowers:dispatching-parallel-agents | Dispatch multiple Task tools in single message, review when all return |
subagent-dev | superpowers:subagent-driven-development | Fresh subagent per task, code review between tasks |
tdd | superpowers:test-driven-development | Write test first, watch fail, write minimal code, refactor |
scenario-testing | scenario-testing:skills (2389) | Create .scratch/ E2E scripts, real dependencies, no mocks |
verification | superpowers:verification-before-completion | Run verification command, read output, THEN claim status |
fresh-eyes | fresh-eyes-review:skills (2389) | 2-5 min review for security, logic errors, edge cases |
code-review | superpowers:requesting-code-review | Dispatch code-reviewer subagent with SHA range |
finish-branch | superpowers:finishing-a-development-branch | Verify tests, present options (merge/PR/keep/discard) |
At skill start: Check which dependencies are available. Announce: "Using [X] for brainstorming, [Y] for scenario testing..." so user knows which tools are active.
If primary not installed: Use the fallback behavior described above. The fallback is the essential behavior - primary skills just formalize it.
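For example, the git-worktrees fallback amounts to two commands. A minimal sketch, assuming a feature named `todo` and a variant named `json` (both names are examples only):

```bash
# Fallback worktree setup (sketch). Feature/variant names are examples.
feature=todo
variant=json

# One isolated worktree per variant, on a branch named <feature>/omakase/variant-<name>.
git worktree add ".worktrees/variant-$variant" -b "$feature/omakase/variant-$variant"

# Verify .worktrees/ is gitignored so variant checkouts are never committed.
grep -qx '.worktrees/' .gitignore || echo '.worktrees/' >> .gitignore
```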
Omakase-off has TWO trigger points:
When user requests "build X", "create Y", "implement Z" - BEFORE diving into detailed brainstorming, offer the short-circuit:
Before we brainstorm the details, would you like to:
1. Brainstorm together - We'll explore requirements and design step by step
2. Omakase (chef's choice) - I'll generate 3-5 best approaches, implement them
in parallel, and let tests pick the winner
→ Best when: you're flexible on approach, want to see options in code
Which approach?
When to offer this: On any substantial "build/create/implement" request before starting brainstorming.
If user shows uncertainty during brainstorming:
Detection signals: 2+ responses like "not sure" or "don't know" (or similarly noncommittal answers) when asked to choose between options.
When detected, offer omakase:
You seem flexible on the approach. Would you like to:
1. I'll pick what seems best and continue brainstorming
2. Explore multiple approaches in parallel (omakase-off)
→ I'll implement 2-3 variants and let tests decide
User directly asks: if the user explicitly requests omakase or parallel exploration, skip the offer and go straight to variant exploration.
When user requests "build/create/implement X":
Present the choice BEFORE starting detailed brainstorming:
Before we dive into the details, how would you like to approach this?
1. Brainstorm together - We'll explore requirements and design step by step
2. Omakase (chef's choice) - I'll generate 3-5 best approaches, implement
them in parallel, and let tests pick the winner
→ Best when: you're flexible, want to see options in working code
Which approach?
If user picks Omakase (option 2): Skip detailed brainstorming - generate 3-5 meaningfully different approaches yourself and jump to Phase 2 (variant exploration) below.
If user picks Brainstorm (option 1): Continue to Phase 1 below.
First, check if a brainstorming skill is available:
Use superpowers:brainstorming or a similar skill if present in available skills. Fallback brainstorming flow (when no brainstorming skill): ask questions one at a time, propose 2-3 approaches, validate incrementally.
During brainstorming (whether using skill or fallback), passively detect indecision:
Detect indecision signals when the user responds to options: "not sure", "don't know", or similarly noncommittal answers.
When indecision is detected, record the undecided decision as a candidate slot.
Slot classification:
| Type | Examples | Worth exploring? |
|---|---|---|
| Architectural | Storage engine, framework, auth method, API style | Yes - different code paths |
| Trivial | File location, naming conventions, config format | No - easy to change later |
Only architectural decisions become real slots.
Fast path detection: After 2+ uncertain answers in a row, offer to shortcut:
You seem flexible on the details. Want me to:
1. Make sensible defaults and you flag anything wrong
2. Continue exploring each decision
Which works better?
If user picks defaults, make reasonable choices and note them. Continue brainstorming for only the big architectural decisions.
Track throughout:
Architectural slots:
1. Data storage: [JSON, SQLite] - meaningfully different code
Trivial (defaulted):
- File location: ~/.todos.json (easy to change)
- Config format: JSON (doesn't affect architecture)
Fixed decisions:
- Language: TypeScript (user chose)
If NO architectural slots were collected (user was decisive):
Hand off to cookoff for implementation. Present options:
Design complete. How would you like to implement?
1. Cookoff (recommended) - N parallel agents, each creates own plan, pick best
→ Complexity: [assess from design]
→ Best for: medium-high complexity, want to compare implementations
2. Single subagent - One agent plans and implements
3. Local - Implement here in this session
If user picks cookoff, invoke cookoff skill and exit omakase-off.
If slots WERE collected (user showed indecision):
I noticed some open decisions during our brainstorm:
- Storage: JSON vs SQLite
- Auth: JWT vs session-based
Would you like to:
1. **Explore in parallel** - I'll implement both variants and let tests decide
2. **Best guess** - I'll pick what seems best and proceed with one plan
Which approach?
If "Best guess": Pick the most suitable option for each slot, proceed with single plan (hand off to cookoff with the options above).
If "Explore in parallel": Continue to Phase 2 (omakase-off's variant exploration).
Combination limits (max 5-6 implementations):
When multiple architectural slots exist, don't do full combinatorial explosion. Instead, pick meaningfully different variants:
Example: 3 slots (storage: JSON/SQLite, framework: Express/Fastify, auth: JWT/session)
- variant-json-simple: JSON + Express + session (lightweight stack)
- variant-sqlite-robust: SQLite + Fastify + JWT (production-ready stack)
- variant-json-jwt: JSON + Express + JWT (if JWT is important to test)

Pruning rules: cap at 5-6 variants total, skip combinations that differ only in trivial slots, and keep one variant per meaningfully different stack.
For each variant combination, use the writing-plans dependency (superpowers:writing-plans or fallback) to write a plan under docs/plans/<feature>/:

docs/plans/<feature>/
  design.md            # Shared context from brainstorming
  omakase/
    variant-<slug-1>/
      plan.md          # Implementation plan for this variant
    variant-<slug-2>/
      plan.md
    result.md          # Final report (written at end)
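Scaffolding that tree is one command. A sketch, with hypothetical feature and variant names:

```bash
# Create the plan skeleton for two variants (sketch; names are examples).
feature=todo
mkdir -p "docs/plans/$feature/omakase/variant-json" "docs/plans/$feature/omakase/variant-sqlite"
```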
Setup worktrees (separate from plans directory):
- Use the git-worktrees dependency for each variant
- Worktrees live in .worktrees/ or per project convention
- Branch naming: <feature>/omakase/<variant-name> (e.g., auth/omakase/jwt-redis)

CRITICAL: Dispatch ALL variants in a SINGLE message
Use parallel-agents pattern. Send ONE message with multiple Task tool calls:
<single message>
Task(variant-json, run_in_background: true)
Task(variant-sqlite, run_in_background: true)
</single message>
Do NOT send separate messages for each variant.
Subagent prompt template:
Implement variant-<name> in .worktrees/variant-<name>
- Read plan from docs/plans/<feature>/omakase/variant-<name>/plan.md
- Use `subagent-dev` to execute plan
- Follow `tdd` for each task
- Use `verification` before claiming done
- Report: what was built, test results, any issues
Each subagent workflow:
1. Read plan from docs/plans/<feature>/omakase/variant-<name>/plan.md
2. Use subagent-dev (fresh context per task, review between tasks)
3. Follow tdd - write test first, watch it fail, implement, pass
4. Use verification - run tests, read output, THEN claim complete

Monitor progress:
Implementation status:
- variant-json: 3/5 tasks complete
- variant-sqlite: 2/5 tasks complete (in progress)
User can manually kill slow/stuck implementations.
When a subagent reports complete, use code-review to review the variant's implementation.

Step 1: Scenario testing (REQUIRED - not ad-hoc)
Use the scenario-testing dependency - not manual ad-hoc verification. Write E2E scripts in .scratch/ that exercise real functionality against real dependencies, no mocks (see the sketch below).
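A minimal sketch of one such .scratch/ script, assuming the CLI todo example from later in this document (the ./bin/todo path and subcommands are hypothetical):

```bash
#!/usr/bin/env bash
# .scratch/scenario-add-list.sh (sketch) - exercises real functionality, no mocks.
# Assumes the variant builds a CLI at ./bin/todo; adapt per variant.
set -euo pipefail

./bin/todo add "buy milk"
./bin/todo list | grep -q "buy milk" || { echo "FAIL: added todo not listed"; exit 1; }
echo "PASS: add/list scenario"
```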
Step 2: Fresh-eyes review on survivors
For each variant that passed scenarios, use fresh-eyes:
Starting fresh-eyes review of variant-json (N files). This will take 2-5 minutes.
Checking for:
- Security vulnerabilities (SQL injection, XSS, command injection)
- Logic errors (off-by-one, race conditions, null handling)
- Edge cases tests might have missed
- Code quality issues
Fresh-eyes complete for variant-json: [N] issues found.
Fresh-eyes findings become judge input (carried into the Step 4 comparison).
Step 3: Elimination
| Situation | Action |
|---|---|
| Fails tests | Eliminated |
| Fails scenarios | Eliminated |
| Critical security issue in fresh-eyes | Eliminated (or fix first) |
| Crashes/stalls | User can eliminate |
| All fail | Report failures, ask user how to proceed |
| One survives | Auto-select |
Step 4: Judge (comparing survivors)
If multiple survivors, compare using:
| Input | Source | Weight |
|---|---|---|
| Scenario results | Step 1 | Pass/fail gate |
| Fresh-eyes findings | Step 2 | Quality signal |
| Code complexity | Line count, dependencies | Simplicity preference |
| Test coverage | Test count, coverage % | Confidence signal |
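A hedged sketch for gathering the mechanical judge inputs, assuming npm-based variants under .worktrees/ with TypeScript sources in src/ (all paths are assumptions):

```bash
# Gather judge inputs per surviving variant (sketch).
for dir in .worktrees/variant-*; do
  echo "== $dir =="
  # Complexity signal: total source lines (crude but comparable across variants).
  find "$dir/src" -name '*.ts' -exec cat {} + | wc -l
  # Confidence signal: run the test suite and record pass/fail.
  (cd "$dir" && npm test >/dev/null 2>&1) && echo "tests: PASS" || echo "tests: FAIL"
done
```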
Present comparison to user:
Both variants passed scenarios. Fresh-eyes review:
- variant-json: 0 issues, 450 lines, 12 tests
- variant-sqlite: 1 minor issue (magic number), 520 lines, 15 tests
Recommendation: variant-json (simpler, cleaner review)
Pick winner: [1] json [2] sqlite [3] show me the code
LLM Judge (future): Automate the comparison above with structured scoring.
Before declaring winner, use verification:
Running final verification on winner (variant-json):
- npm test: 12/12 passing
- npm run build: exit 0
- Scenarios: all passing
Verification complete. Winner confirmed.
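A minimal sketch of the commands behind that output, assuming npm scripts and per-worktree .scratch/ scenarios (names are examples):

```bash
# Final verification on the winner (sketch) - read real output, THEN claim status.
cd .worktrees/variant-json                            # winner's worktree (example)
npm test                                              # expect: all tests passing
npm run build                                         # expect: exit 0
for s in .scratch/scenario-*.sh; do bash "$s"; done   # expect: every scenario PASS
```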
Winner: use the finish-branch dependency (verify tests, present merge/PR/keep/discard options).
Losers: clean up each losing variant:
git worktree remove <worktree-path>
git branch -D <feature>/omakase/<variant>
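If several variants lose, a small loop keeps cleanup mechanical. A sketch using the same hypothetical names as above:

```bash
# Clean up all losing variants (sketch; loser list is an example).
feature=todo
for variant in variant-sqlite; do
  git worktree remove ".worktrees/$variant"
  git branch -D "$feature/omakase/$variant"
done
```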
Write result.md:
# Omakase-Off Results: <feature>
## Variants
| Variant | Tests | Scenarios | Fresh-Eyes | Result |
|---------|-------|-----------|------------|--------|
| variant-json | 12/12 | PASS | 0 issues | WINNER |
| variant-sqlite | 15/15 | PASS | 1 minor | eliminated |
## Winner Selection
Reason: Both passed, but variant-json was simpler (fewer lines, cleaner fresh-eyes)
## Judge Inputs
- Scenario results: Both passed
- Fresh-eyes: json=clean, sqlite=1 minor issue
- Complexity: json=450 lines, sqlite=520 lines
## Cleanup
Worktrees removed: 1
Branches deleted: todo/omakase/variant-sqlite
Plans preserved: docs/plans/<feature>/omakase/
Save to: docs/plans/<feature>/omakase/result.md
Uses dependencies defined above. Primary skill if installed, fallback otherwise.
| Dependency | Phase | Usage |
|---|---|---|
brainstorming | 1 | Modified flow with passive slot detection |
writing-plans | 2 | Generate implementation plan per variant |
git-worktrees | 3 | Create isolated worktree per variant |
parallel-agents | 3 | Dispatch all variant subagents in parallel |
subagent-dev | 3 | Each subagent uses this for task execution |
tdd | 3 | Subagents follow RED-GREEN-REFACTOR |
verification | 3, 5 | Before claiming variant done; before declaring winner |
code-review | 3 | Review each variant after implementation |
scenario-testing | 4.1 | Run same scenarios against all variants |
fresh-eyes | 4.2 | Quality review on survivors → input for judge |
finish-branch | 5 | Handle winner (merge/PR), cleanup losers |
Common pitfalls:
- Too many slots: only architectural decisions become slots; cap variants at 5-6.
- Not using existing skills: check what's installed and fall back to the documented behaviors.
- Ad-hoc scenario testing: use the scenario-testing skill with .scratch/ scenarios, no mocks.
- Premature elimination: eliminate only on objective failures (tests, scenarios, critical security issues).
- Forgetting cleanup: remove loser worktrees, delete branches, and write result.md.
User: "I need to build a CLI todo app."
Claude: I'm using omakase-off.
Before we dive into the details, how would you like to approach this?
1. Brainstorm together - We'll explore requirements and design step by step
2. Omakase (chef's choice) - I'll generate 3-5 best approaches, implement
them in parallel, and let tests pick the winner
User: "1"
Claude: Great, let's brainstorm. What language would you like to use?
User: "not sure"
Claude: [Marks language as potential slot - architectural]
No problem. For data storage?
User: "no idea"
Claude: [Marks storage as potential slot - architectural]
[Detects 2+ uncertain answers]
You seem flexible on the details. Want me to:
1. Make sensible defaults and you flag anything wrong
2. Continue exploring each decision
User: "1"
Claude: Got it. Here's my proposal:
**Defaults:**
- Language: TypeScript (great CLI ecosystem)
- CLI framework: Commander (standard choice)
- File location: ~/.todos.json (trivial - easy to change)
**Worth exploring (architecturally different):**
- Storage: JSON file vs SQLite
JSON is simpler, SQLite handles larger datasets. Both are valid.
Does this look right? Anything to change?
User: "looks good"
Claude: Great. One architectural slot: storage (JSON vs SQLite).
Would you like to:
1. Explore in parallel - I'll implement both and let tests decide
2. Best guess - I'll pick JSON (simpler) and proceed
User: "1"
[Phase 2: Generate plans for variant-json, variant-sqlite]
[Phase 3: Create worktrees, dispatch parallel subagents]
[Phase 4: Both pass scenarios, present comparison]
[Phase 5: User picks sqlite, cleanup json branch]
Winner: variant-sqlite
Cleanup complete. Ready to merge or create PR.