Evolve novel algorithms through LLM-driven mutation, crossover, and selection with adaptive optimization.

/plugin marketplace add asermax/claude-plugins
/plugin install superpowers@asermax-plugins

Evolve novel algorithms through LLM-driven mutation and selection with true genetic recombination. Runs adaptively: continuing while improvement is possible, stopping when plateaued.
This is the master skill that analyzes the request and delegates to specialized subskills.
| Mode | Subskill | Optimizes | Use When |
|---|---|---|---|
| perf | /evolve-perf | Runtime speed (ops/sec, latency) | Faster algorithms, benchmarks |
| size | /evolve-size | Length (bytes, chars) | Code golf, minimal configs |
| ml | /evolve-ml | Model accuracy (F1, loss) | ML optimization (coming soon) |
/evolve <problem description>
/evolve <problem description> --mode=<perf|size|ml>
/evolve --resume
You are the master /evolve skill. Your job is to understand the user's intent and delegate to the appropriate subskill.
If the request contains --mode=perf, --mode=size, or --mode=ml, use that mode directly. No further analysis needed.
If the request is --resume or contains --resume:
Find the .evolve/*/evolution.json file, read its "mode" field, and delegate to that mode's subskill with --resume.

Otherwise, read the user's request carefully and determine what they want to optimize:
Choose SIZE mode when the goal is to minimize length:
Choose PERF mode when the goal is to maximize speed:
Choose ML mode when the goal is to improve model metrics:
If you're unsure, you may check the codebase for context clues:
- code-golf/ or tasks/*.json suggests SIZE mode
- benchmark.rs or perf harnesses suggest PERF mode
- .h5, .pt, .pkl, or model.py suggest ML mode

If after analysis you genuinely cannot determine the mode, use AskUserQuestion:
Question: "What are we optimizing for?"
Options:
- "Fastest runtime (speed)" → perf
- "Smallest code (bytes)" → size
- "Best accuracy (ML)" → ml
Once you've determined the mode:
1. Announce **Evolution mode: {mode}** with brief reasoning
2. Delegate to the matching subskill:
   - perf → invoke evolve-perf
   - size → invoke evolve-size
   - ml → invoke evolve-ml

Request: "shortest Python solution for ARC task 0520fde7"
Reasoning: "shortest" + "ARC task" = clearly minimizing code length
Mode: size
Action: Skill(evolve-size, "shortest Python solution for ARC task 0520fde7")
Request: "faster sorting algorithm to beat std::sort"
Reasoning: "faster" + "beat benchmark" = clearly optimizing speed
Mode: perf
Action: Skill(evolve-perf, "faster sorting algorithm to beat std::sort")
Request: "improve accuracy on this classification task"
Reasoning: "accuracy" + "classification" = clearly optimizing model metrics
Mode: ml
Action: Skill(evolve-ml, "improve accuracy on this classification task")
Request: "--mode=size optimize this function"
Reasoning: Explicit --mode=size overrides any inference
Mode: size
Action: Skill(evolve-size, "optimize this function")
Request: "optimize this algorithm"
Reasoning: "optimize" is ambiguous - could mean speed OR size
Action: AskUserQuestion to clarify
Request: "--resume"
Action: Find .evolve/*/evolution.json, read mode, delegate with --resume
| Budget | Meaning | Approx. Generations |
|---|---|---|
| 10k | 10,000 tokens | ~2-3 generations |
| 50k | 50,000 tokens | ~10-12 generations |
| 100k | 100,000 tokens | ~20-25 generations |
| 5gen | 5 generations | Fixed count |
| unlimited | No limit | Until plateau |
| (none) | Default 50k | ~10-12 generations |
Run /evolve --resume to continue a previous evolution; the mode is read from evolution.json.

┌─────────────────────────────────────────────────────────────┐
│ /evolve <request> │
│ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Mode Detection (LLM-based) │ │
│ │ • Check for explicit --mode= override │ │
│ │ • Analyze request intent │ │
│ │ • Consider codebase context if needed │ │
│ │ • Ask user if genuinely ambiguous │ │
│ └──────────────────┬────────────────────────────────────┘ │
│ │ │
│ ┌───────────┼───────────┬───────────┐ │
│ ▼ ▼ ▼ ▼ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ size │ │ perf │ │ ml │ │ resume │ │
│ │ subskill │ │ subskill │ │ subskill │ │ (detect) │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ │
│ Each subskill runs the full evolution loop: │
│ • Bootstrap → Baseline → Evolution → Finalize │
│ │
└─────────────────────────────────────────────────────────────┘
See /evolve-perf and /evolve-size for full documentation, and /evolve-ml for planned features.

All evolution modes use a consistent directory structure:
.evolve/<problem>/
├── evolution.json # Full state (mode, population, history)
├── champion.json # Best solution manifest
├── generations.jsonl # Per-generation log (append-only)
├── mutations/ # All tested mutations
└── [mode-specific]/ # Mode-specific artifacts
├── rust/ # (perf) Rust benchmark code
├── solutions/ # (size) Working solutions by size
└── models/ # (ml) Trained models
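The shared portion of that layout could be bootstrapped as below. This is a hypothetical sketch: the function name and the exact shape of the initial state (`population`, `history`) are assumptions inferred from the comments in the tree above, and each subskill would add its own mode-specific folder (rust/, solutions/, models/) on top:

```python
import json
import os

def bootstrap_run_dir(problem: str, mode: str, root: str = ".evolve") -> str:
    """Create the common .evolve/<problem>/ layout and return its path."""
    run = os.path.join(root, problem)
    os.makedirs(os.path.join(run, "mutations"), exist_ok=True)
    # evolution.json holds the full state: mode, population, history.
    state = {"mode": mode, "population": [], "history": []}
    with open(os.path.join(run, "evolution.json"), "w") as f:
        json.dump(state, f, indent=2)
    # generations.jsonl is append-only: one JSON object per generation.
    open(os.path.join(run, "generations.jsonl"), "a").close()
    return run
```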
| Want to... | Command |
|---|---|
| Make code faster | /evolve faster <algorithm> |
| Make code shorter | /evolve shortest <code> |
| Minimize config file | /evolve minimal <file type> |
| Continue previous | /evolve --resume |
| Force specific mode | /evolve --mode=<mode> <problem> |