From harness-evolver
Finalizes evolver optimization: shows scores/improvements/diffs, tags git commits with metrics, pushes changes, cleans temp files, or promotes insights to CLAUDE.md.
npx claudepluginhub raphaelchristi/harness-evolver --plugin harness-evolver
Finalize the evolution results. In v3, the best code is already in the main branch (auto-merged during evolve). Deploy is about cleanup, tagging, and pushing.
# Locate the evolver tool scripts: prefer a project-local .evolver/tools, else the user-level install.
TOOLS="${EVOLVER_TOOLS:-$([ -d ".evolver/tools" ] && echo ".evolver/tools" || echo "$HOME/.evolver/tools")}"
# Use the evolver virtualenv's Python if present, else fall back to the system python3.
EVOLVER_PY="${EVOLVER_PY:-$([ -f "$HOME/.evolver/venv/bin/python" ] && echo "$HOME/.evolver/venv/bin/python" || echo "python3")}"
Show the score summary from the evolver state file:
python3 -c "
import json
c = json.load(open('.evolver.json'))
baseline = c['history'][0]['score'] if c['history'] else 0
best = c['best_score']
improvement = best - baseline
print(f'Baseline: {baseline:.3f}')
print(f'Best: {best:.3f} (+{improvement:.3f}, {improvement/max(baseline,0.001)*100:.0f}% improvement)')
print(f'Iterations: {c[\"iterations\"]}')
print(f'Experiment: {c[\"best_experiment\"]}')
"
Show git diff from before evolution started:
git log --oneline --since="$(python3 -c "import json; print(json.load(open('.evolver.json'))['created_at'][:10])")" | head -20
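The --since argument is just the date prefix of the stored timestamp; a minimal sketch with a hypothetical created_at value:

```python
# Hypothetical fragment of .evolver.json; created_at is assumed to be ISO-8601.
state = {"created_at": "2025-01-15T09:30:00Z"}

# The first 10 characters of an ISO-8601 timestamp are the YYYY-MM-DD date,
# which is the form git log --since accepts.
since = state["created_at"][:10]
print(since)  # 2025-01-15
```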
Ask the user how to proceed:
{
"questions": [{
"question": "Evolution complete. What would you like to do?",
"header": "Deploy",
"multiSelect": false,
"options": [
{"label": "Tag and push", "description": "Create a git tag with the score and push to remote"},
{"label": "Just review", "description": "Show the full diff of all changes made during evolution"},
{"label": "Clean up only", "description": "Remove temporary files (trace_insights.json, etc.) but don't push"},
{"label": "Promote learnings", "description": "Add proven evolution insights to CLAUDE.md (permanent knowledge)"}
]
}]
}
If "Tag and push":
VERSION=$(python3 -c "import json; c=json.load(open('.evolver.json')); print(f'evolver-v{c[\"iterations\"]}')")
SCORE=$(python3 -c "import json; print(f'{json.load(open(\".evolver.json\"))[\"best_score\"]:.3f}')")
git tag -a "$VERSION" -m "Evolver: score $SCORE"
git push origin main --tags
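The two inline python3 -c invocations can equally be written as one helper; a sketch (the evolver-v&lt;iterations&gt; tag format follows the commands above, while the existing-tag guard is an added assumption, not part of the skill):

```python
import json
import subprocess

def make_tag(state_path: str = ".evolver.json") -> tuple[str, str]:
    """Build the tag name and annotation message from evolver state."""
    c = json.load(open(state_path))
    version = f"evolver-v{c['iterations']}"
    message = f"Evolver: score {c['best_score']:.3f}"
    return version, message

def tag_and_push(version: str, message: str) -> None:
    # Skip tagging if the tag already exists (e.g. deploy was re-run).
    existing = subprocess.run(
        ["git", "tag", "-l", version], capture_output=True, text=True
    ).stdout.strip()
    if not existing:
        subprocess.run(["git", "tag", "-a", version, "-m", message], check=True)
    subprocess.run(["git", "push", "origin", "main", "--tags"], check=True)
```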
If "Just review":
N=$(python3 -c "import json; print(json.load(open('.evolver.json'))['iterations'])")
git diff "HEAD~${N}" HEAD
If "Clean up only":
rm -f trace_insights.json best_results.json comparison.json production_seed.md production_seed.json
If "Promote learnings":
$EVOLVER_PY $TOOLS/promote_learnings.py --memory evolution_memory.md --target CLAUDE.md --threshold 5 --dry-run
Show the dry-run output. If the user approves, run without --dry-run.
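promote_learnings.py itself is not shown in this skill; purely as an illustration of what a --threshold filter over evolution_memory.md could look like (the memory format, function name, and counts below are all hypothetical):

```python
import re

def select_insights(memory_text: str, threshold: int = 5) -> list[str]:
    """Return insights whose occurrence count meets the threshold.

    Assumes a made-up memory format of one insight per line, with the
    count recorded as "(seen Nx)" -- not necessarily the real format.
    """
    selected = []
    for line in memory_text.splitlines():
        m = re.search(r"\(seen (\d+)x\)", line)
        if m and int(m.group(1)) >= threshold:
            selected.append(line.lstrip("- ").rstrip())
    return selected

memory = """\
- prefer batched writes (seen 7x)
- retry flaky network calls (seen 2x)
- cache parsed configs (seen 5x)
"""
print(select_insights(memory))
```

With threshold=5, only the first and third lines survive; a dry run would show exactly this selection before anything is appended to CLAUDE.md.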