autoresearch-agent
Run a single experiment iteration. Edit the target file, evaluate, keep or discard.
Install:

```bash
npx claudepluginhub ciciliaeth/claude-skills --plugin autoresearch-agent
```
Slash command:

```
/autoresearch-agent:run
```
Run exactly ONE experiment iteration: review history, decide a change, edit, commit, evaluate.
Usage:

```
/ar:run engineering/api-speed   # Run one iteration
/ar:run                         # List experiments, let user pick
```
If no experiment is specified, run `python {skill_path}/scripts/setup_experiment.py --list` and ask the user to pick one.
Then gather context for the chosen experiment:

```bash
# Read experiment config
cat .autoresearch/{domain}/{name}/config.cfg

# Read strategy and constraints
cat .autoresearch/{domain}/{name}/program.md

# Read experiment history
cat .autoresearch/{domain}/{name}/results.tsv

# Checkout the experiment branch
git checkout autoresearch/{domain}/{name}
```
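For orientation, here is a hypothetical shape for config.cfg — the field names below are illustrative assumptions, not the skill's documented schema; the only thing this skill guarantees is that the file names the experiment's target:

```bash
# Hypothetical config.cfg contents (read your actual file):
#
#   target = src/api/handler.py     # the one file experiments may edit
#   metric = p95_latency_ms         # what run_experiment.py scores
#   goal   = minimize               # direction of improvement
```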
Review results.tsv to see what has already been tried and how each change scored.
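A quick way to skim the history, assuming results.tsv is tab-separated (check its header row for the actual column layout):

```bash
# Pretty-print the most recent experiment rows.
column -t -s$'\t' .autoresearch/{domain}/{name}/results.tsv | tail -n 5
```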
Escalate strategy per program.md when recent iterations have stopped improving the metric.
Edit only the target file specified in config.cfg. Change one thing. Keep it simple.
```bash
git add {target}
git commit -m "experiment: {short description of what changed}"
```
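For example, with the placeholders filled in (the file name and message are purely illustrative):

```bash
git add src/api/handler.py
git commit -m "experiment: cache the hot lookup path"
```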
Then evaluate:

```bash
python {skill_path}/scripts/run_experiment.py \
  --experiment {domain}/{name} --single
```
Read the script output, then tell the user what changed, the resulting score, and whether the change was kept or discarded.
After every 10th experiment (check results.tsv line count), update the Strategy section of program.md with patterns learned.
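A minimal sketch of that check, assuming one row per experiment and no header row (adjust the arithmetic if your results.tsv differs):

```bash
# Count completed experiments and flag every 10th one.
count=$(wc -l < .autoresearch/{domain}/{name}/results.tsv)
if [ "$((count % 10))" -eq 0 ]; then
  echo "Experiment $count: update the Strategy section of program.md"
fi
```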