Explore different architectural approaches in parallel using hosted LLM for code generation. No restrictions on approach - fastest path to comparing real implementations. Part of speed-run pipeline.
Generates multiple code architecture variants in parallel using hosted LLMs to quickly compare implementations via real tests.
`npx claudepluginhub 2389-research/claude-plugins`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Explore different approaches in parallel - no restrictions, fastest path to comparing real implementations. Each variant uses hosted LLM for code generation.
Announce: "I'm using speed-run:any% for parallel exploration via hosted LLM."
Core principle: When unsure of the best architecture, implement multiple approaches fast via hosted LLM and let real code + tests determine the winner.
| Phase | Description |
|---|---|
| 1. Context | Quick gathering (1-2 questions max) |
| 2. Approaches | Generate 2-5 architectural approaches |
| 3. Plan | Create implementation plan per variant |
| 4. Implement | Dispatch ALL agents in SINGLE message, each uses hosted LLM |
| 5. Evaluate | Scenario tests -> fresh-eyes -> judge survivors |
| 6. Complete | Finish winner, cleanup losers |
docs/plans/&lt;feature&gt;/
  design.md                        # Shared context
  speed-run/
    any-percent/
      variant-&lt;slug&gt;/
        plan.md                    # Implementation plan for this variant
      result.md                    # Final report
.worktrees/
  speed-run-variant-&lt;slug&gt;/  # Variant worktree
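The plan skeleton above can be created up front; a minimal sketch, where `todo-api` and the three slugs are hypothetical placeholders for `<feature>` and `<slug>`:

```shell
# Create the docs/plans layout for a hypothetical "todo-api" feature.
feature="todo-api"
for slug in sqlite postgres redis; do
  # One plan directory per variant under any-percent/
  mkdir -p "docs/plans/$feature/speed-run/any-percent/variant-$slug"
done
# Shared design context lives at the feature root
touch "docs/plans/$feature/design.md"
```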
Gather just enough to generate approaches (1-2 questions max):
Do NOT brainstorm extensively. The point of any% is to explore fast, not deliberate slowly.
Identify the primary architectural axis (biggest impact decision):
Generate 2-5 approaches along that axis:
Based on the requirements, here are 3 approaches to explore:
1. variant-sqlite: SQLite storage with query builder
→ Pros: Simple, embedded, zero config
→ Cons: Single writer, no replication
2. variant-postgres: PostgreSQL with ORM
→ Pros: Scalable, ACID, rich queries
→ Cons: External dependency, more setup
3. variant-redis: Redis with persistence
→ Pros: Fast, built-in pub/sub
→ Cons: Memory-bound, limited queries
All will be implemented via hosted LLM and tested.
Spawning 3 parallel variants now.
Variant limits: max 5-6. Don't do a full combinatorial explosion.
Setup worktrees:
.worktrees/speed-run-variant-sqlite/
.worktrees/speed-run-variant-postgres/
.worktrees/speed-run-variant-redis/
Branches:
<feature>/speed-run/variant-sqlite
<feature>/speed-run/variant-postgres
<feature>/speed-run/variant-redis
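The worktree and branch setup above can be scripted with `git worktree add -b`; a self-contained sketch that builds a throwaway demo repo first, with `todo-api` and the slugs as hypothetical placeholders:

```shell
# Throwaway repo so the sketch is runnable anywhere; in practice, run the
# worktree commands from your real project root instead.
git init -q demo-any-percent && cd demo-any-percent
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "init"

feature="todo-api"
for slug in sqlite postgres redis; do
  # One isolated worktree + branch per variant, matching the layout above
  git worktree add -q ".worktrees/speed-run-variant-$slug" \
    -b "$feature/speed-run/variant-$slug"
done
git worktree list
```

Each variant agent then works inside its own worktree, so parallel implementations never touch each other's files.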
CRITICAL: Dispatch ALL variants in a SINGLE message
<single message>
Task(variant-sqlite, run_in_background: true)
Task(variant-postgres, run_in_background: true)
Task(variant-redis, run_in_background: true)
</single message>
Variant agent prompt:
You are implementing the [VARIANT-SLUG] variant in a speed-run any% exploration.
Other variants are being implemented in parallel with different approaches.
**Your working directory:** /path/to/.worktrees/speed-run-variant-<slug>
**Design context:** docs/plans/<feature>/design.md
**Your plan location:** docs/plans/<feature>/speed-run/any-percent/variant-<slug>/plan.md
**Your approach:** [APPROACH DESCRIPTION]
- [Key architectural decisions for this variant]
- [Technology choices specific to this variant]
**Your workflow:**
1. Create implementation plan for YOUR approach
- Save to plan location above
- Focus on what makes this approach unique
2. For each implementation task, use hosted LLM for first-pass code generation:
- Write a contract prompt (DATA CONTRACT + API CONTRACT + ALGORITHM + RULES)
- Call: mcp__speed-run__generate_and_write_files
- Run tests
- Fix failures with Claude Edit tool (surgical 1-4 line fixes)
- Re-test until passing
3. Follow TDD
4. Use verification before claiming done
**Code generation rules:**
- Use mcp__speed-run__generate_and_write_files for algorithmic code
- Use Claude direct ONLY for surgical fixes and multi-file coordination
- Write contract prompts with exact data models, routes, and algorithm steps
**Report when done:**
- Plan created: yes/no
- All tasks completed: yes/no
- Test results (output)
- Files changed count
- Hosted LLM calls made
- Fix cycles needed
- What makes this variant's approach unique
- Any issues encountered
Step 1: Gate check - All tests pass
Step 2: Run same scenario tests against all variants
Use scenario-testing skill. Same scenarios, different implementations.
Step 3: Fresh-eyes on survivors
Fresh-eyes review of variant-sqlite (N files)...
Fresh-eyes review of variant-postgres (N files)...
CRITICAL: Invoke speed-run:judge now.
The judge skill contains the full scoring framework with checklists. Invoking it fresh ensures the scoring format is followed exactly.
Invoke: speed-run:judge
Context to provide:
- Variants to judge: variant-sqlite, variant-postgres, variant-redis
- Worktree locations: .worktrees/speed-run-variant-<slug>/
- Test results from each variant
- Scenario test results
- Fresh-eyes findings
- Speed-run metrics: hosted LLM calls, fix cycles, generation time per variant
The judge skill applies its full scoring framework to each surviving variant. Do not summarize or abbreviate the scoring; the judge skill's output should be the full worksheet.
Any%-specific context: In any%, variants explore different architectural approaches, so Fitness differences are expected and valid. A Fitness gap here reflects different design trade-offs, not deviation from a shared design. Weight Craft and Spark higher when approaches are fundamentally different.
Winner: Use finish-branch
Losers: Cleanup
git worktree remove .worktrees/speed-run-variant-sqlite
git worktree remove .worktrees/speed-run-variant-redis
git branch -D <feature>/speed-run/variant-sqlite
git branch -D <feature>/speed-run/variant-redis
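The cleanup can be looped over the losing slugs; a runnable sketch that first builds a throwaway demo repo with three variants (names are hypothetical placeholders), then removes the two losers while leaving the winner untouched:

```shell
# Demo repo so the cleanup is self-contained; in practice, run only the
# cleanup loop from your real project root.
git init -q demo-cleanup && cd demo-cleanup
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "init"

feature="todo-api"
for slug in sqlite postgres redis; do
  git worktree add -q ".worktrees/speed-run-variant-$slug" \
    -b "$feature/speed-run/variant-$slug"
done

# Loser cleanup: remove each worktree first, then force-delete its branch.
# variant-postgres plays the winner here and is kept.
for slug in sqlite redis; do
  git worktree remove ".worktrees/speed-run-variant-$slug"
  git branch -D "$feature/speed-run/variant-$slug"
done
```

Removing the worktree before deleting the branch matters: git refuses to delete a branch that is still checked out in a worktree.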
Write result.md:
# Any% Results: <feature>
## Approaches Explored
| Variant | Approach | Tests | Scenarios | Fresh-Eyes | LLM Calls | Result |
|---------|----------|-------|-----------|------------|-----------|--------|
| variant-sqlite | Embedded SQL | 18/18 | 5/5 | 0 issues | 3 | WINNER |
| variant-postgres | External DB + ORM | 20/20 | 5/5 | 1 minor | 4 | eliminated |
| variant-redis | In-memory + persist | 16/16 | 4/5 | 2 issues | 5 | eliminated |
## Winner Selection
Winner: variant-sqlite
Reason: Simplest architecture, zero external dependencies, all scenarios pass
## Token Savings
Estimated savings vs Claude direct: ~60% on code generation
Save to: docs/plans/<feature>/speed-run/any-percent/result.md
| Dependency | Usage |
|---|---|
| writing-plans | Generate implementation plan per variant |
| git-worktrees | Create isolated worktree per variant |
| parallel-agents | Dispatch all variant agents in parallel |
| scenario-testing | Run same scenarios against all variants |
| fresh-eyes | Quality review on survivors |
| judge | speed-run:judge - scoring framework (bundled) |
| finish-branch | Handle winner, cleanup losers |
- Too many variants
- Extensive brainstorming before exploring
- Using Claude direct for all code generation
- Dispatching variants in separate messages
- Skipping scenario tests
- Forgetting cleanup