From autoloop
Persists feedback on autoloop experiment loop designs across sessions, auto-saving corrections to metrics, quality gates, and strategies. Loads automatically for new loop designs.
Install:

```bash
npx claudepluginhub joshuaoliphant/claude-plugins --plugin autoloop
```

This skill uses the workspace's default tool permissions.
Persist feedback about experiment loop design across sessions. Stored feedback is automatically loaded when autoloop designs new loops, ensuring design preferences carry forward.
When the user provides feedback on loop design or execution, save it:
```bash
echo '{"category": "<category>", "feedback": "<what the user said>", "context": "<optional context>"}' | \
  python ${PLUGIN_ROOT}/scripts/feedback_manager.py autoloop save-feedback
```
Categories: `loop_design`, `metrics`, `quality_gates`, `runner_script`, `time_budget`, `change_strategy`, `general`
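For instance, a complete save invocation for a quality-gate preference (the payload is one of the illustrative examples below; the command shape follows the template above) could look like:

```bash
# Record a preference so future Python autoloops include mypy as a quality gate
echo '{"category": "quality_gates", "feedback": "Include mypy type checking as a quality gate in all Python autoloops"}' | \
  python ${PLUGIN_ROOT}/scripts/feedback_manager.py autoloop save-feedback
```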
Examples:
{"category": "quality_gates", "feedback": "Include mypy type checking as a quality gate in all Python autoloops"}{"category": "metrics", "feedback": "Avoid test coverage as primary metric — it leads to low-value tests. Use as secondary only"}{"category": "time_budget", "feedback": "Prefer experiments that complete in under 2 minutes for faster iteration"}{"category": "runner_script", "feedback": "Runner scripts should redirect verbose output to stderr, keep stdout for METRIC lines only"}{"category": "change_strategy", "feedback": "Prefer small, focused changes per iteration over ambitious rewrites"}Display all stored feedback:
python ${PLUGIN_ROOT}/scripts/feedback_manager.py autoloop show-feedback
Present as a readable list grouped by category.
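A rough sketch of what that grouped presentation might look like (the exact format is up to the agent, not the script; the entries shown are the hypothetical examples from above):

```
quality_gates:
  - Include mypy type checking as a quality gate in all Python autoloops

time_budget:
  - Prefer experiments that complete in under 2 minutes for faster iteration
```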
Clear all feedback or feedback for a specific category:
```bash
# Clear all
python ${PLUGIN_ROOT}/scripts/feedback_manager.py autoloop clear-feedback

# Clear only metrics feedback
python ${PLUGIN_ROOT}/scripts/feedback_manager.py autoloop clear-feedback metrics
```
Graduate stable feedback into the actual SKILL.md files, making corrections permanent. This is a Claude-driven operation — no script needed.
When to consolidate: When the user says "update the plugin based on feedback", "consolidate feedback", "bake in my preferences", or "graduate feedback into the skill".
Process:

Review the stored feedback:

```bash
python ${PLUGIN_ROOT}/scripts/feedback_manager.py autoloop show-feedback
```

Read the target SKILL.md file: `${PLUGIN_ROOT}/skills/autoloop/SKILL.md`

For each feedback entry, determine whether it should be baked into the skill permanently or remain as runtime feedback.
Present a consolidation plan to the user:
## Consolidation Plan
**Will bake into SKILL.md** (permanent):
- [feedback] → edit [file]: [what will change]
**Will keep as runtime feedback** (situational):
- [feedback] → reason: [why it stays runtime]
Proceed?
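For illustration only, a filled-in plan using the hypothetical feedback entries from the examples above might read:

```
## Consolidation Plan

**Will bake into SKILL.md** (permanent):
- "Include mypy type checking as a quality gate in all Python autoloops" → edit skills/autoloop/SKILL.md: add mypy to the quality-gate guidance

**Will keep as runtime feedback** (situational):
- "Prefer experiments that complete in under 2 minutes" → reason: acceptable experiment duration varies by project

Proceed?
```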
On approval, apply the planned edits to SKILL.md, then clear the graduated feedback:

```bash
python ${PLUGIN_ROOT}/scripts/feedback_manager.py autoloop clear-feedback <category>
```
Report what changed and what remains as runtime feedback.
When autoloop designs a new experiment loop, it loads all stored feedback entries and applies them to the design.
This ensures the user never has to repeat the same design preference twice.
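A minimal sketch of that design-time load step, assuming the `show-feedback` subcommand documented above is what surfaces the stored entries:

```bash
# Pull in all persisted preferences before drafting the loop design
python ${PLUGIN_ROOT}/scripts/feedback_manager.py autoloop show-feedback
# Apply each entry to the relevant part of the design: metrics, quality gates,
# runner script, time budget, change strategy
```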