By llv22
Autonomous ML research pipeline: idea discovery → experiment → review → paper writing
npx claudepluginhub llv22/autoresearchwitheyes
Autonomous multi-round research review loop. Repeatedly reviews via Codex MCP, implements fixes, and re-reviews until positive assessment or max rounds reached. Use when user says "auto review loop", "review until it passes", or wants autonomous iterative improvement.
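The review loop described above can be sketched as simple control flow. This is a minimal illustration, not the plugin's implementation: `review` and `apply_fixes` are hypothetical stand-ins for the Codex MCP review call and the fix step, and the round cap is a placeholder for the configurable maximum.

```python
def auto_review_loop(artifact, review, apply_fixes, max_rounds=5):
    """Review, fix, and re-review until the assessment is positive
    or the round budget is exhausted.

    review(artifact) -> (is_positive, feedback)   # hypothetical helper
    apply_fixes(artifact, feedback) -> artifact   # hypothetical helper
    """
    for round_no in range(1, max_rounds + 1):
        positive, feedback = review(artifact)
        if positive:
            # Positive assessment: stop early and report the round count.
            return artifact, round_no
        artifact = apply_fixes(artifact, feedback)
    # Budget exhausted without a positive review; return the latest state.
    return artifact, max_rounds
```

Keeping the helpers as parameters makes the loop itself trivial to test independently of any model backend.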
Workflow 1: Full idea discovery pipeline. Orchestrates research-lit → idea-creator → novelty-check → research-reviewer to go from a broad research direction to validated, pilot-tested ideas. Use when user says "idea discovery pipeline" or wants the complete idea exploration workflow.
Workflow 3: Full paper writing pipeline. Orchestrates paper-plan → paper-figure → paper-write → paper-compile → paper-improver to go from a narrative report to a polished, submission-ready PDF. Use when user says "write paper pipeline", "paper writing", or wants the complete paper generation workflow.
Full research pipeline: Workflow 1 (idea discovery) → implementation → Workflow 2 (auto review loop). Goes from a broad research direction to validated, reviewed research. Use when user says "full pipeline", "end-to-end research", or wants the complete autonomous research lifecycle. Does NOT include paper writing (Workflow 3) — invoke /autor.paper-writing separately after this completes.
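The full-pipeline ordering above can be sketched as a three-stage chain. This is a hypothetical illustration of the sequencing only: the stage callables stand in for Workflow 1, the implementation step, and Workflow 2, and paper writing (Workflow 3) is deliberately absent, mirroring the note that it runs separately.

```python
def full_pipeline(direction, idea_discovery, implement, auto_review_loop):
    """Broad research direction -> reviewed research artifact.

    Stages (all callables are hypothetical stand-ins):
      idea_discovery   -- Workflow 1: direction -> validated ideas
      implement        -- turn ideas into an experimental artifact
      auto_review_loop -- Workflow 2: iterate until the review passes
    Paper writing (Workflow 3) is intentionally not chained here.
    """
    ideas = idea_discovery(direction)
    artifact = implement(ideas)
    return auto_review_loop(artifact)
```

The deliberate cut before paper writing keeps the expensive writing stage under explicit user control rather than triggering it automatically.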
Download and set up venue-specific LaTeX templates. Supports iclr2026, neurips2026, icml2026, emnlp2026, or a custom venue. Use when user says "download template", "setup venue", or wants to configure a new conference template.
Use this agent when the paper-writing pipeline needs to iteratively improve a compiled paper. Runs REVIEWER_MODEL xhigh review, implements fixes, and recompiles for MAX_IMPROVEMENT_ROUNDS rounds to polish writing quality, fix theoretical inconsistencies, and soften overclaims.
Use this agent when the idea-discovery pipeline needs external critical feedback on research ideas, papers, or experimental results. Invokes REVIEWER_MODEL via Codex MCP with xhigh reasoning to act as a senior ML reviewer.
Analyze ML experiment results, compute statistics, generate comparison tables and insights. Use when user says "analyze results", "compare", or needs to interpret experimental data.
Generate and rank research ideas given a broad direction. Use when user says "找idea", "brainstorm ideas", "generate research ideas", "what can we work on", or wants to explore a research area for publishable directions.
Monitor running experiments, check progress, collect results. Use when user says "check results", "is it done", "monitor", or wants experiment output.
Verify research idea novelty against recent literature. Use when user says "查新", "novelty check", "有没有人做过", "check novelty", or wants to verify a research idea is novel before implementing.
Compile LaTeX paper to PDF, fix errors, and verify output. Use when user says "编译论文", "compile paper", "build PDF", "生成PDF", or wants to compile LaTeX into a submission-ready PDF.
Generate publication-quality figures and tables from experiment results. Use when user says "画图", "作图", "generate figures", "paper figures", or needs plots for a paper.
Generate a structured paper outline from review conclusions and experiment results. Use when user says "写大纲", "paper outline", "plan the paper", "论文规划", or wants to create a paper plan before writing.
Draft LaTeX paper section by section from an outline. Use when user says "写论文", "write paper", "draft LaTeX", "开始写", or wants to generate LaTeX content from a paper plan.
Search and analyze research papers, find related work, summarize key ideas. Use when user says "find papers", "related work", "literature review", "what does this paper say", or needs to understand academic papers.
Deploy and run ML experiments on local or remote GPU servers. Use when user says "run experiment", "deploy to server", "跑实验", or needs to launch training jobs.
Oh My Paper research harness: memory system, Codex delegation, and pipeline commands for academic research projects.
Three AI models, one synthesis — multi-model research workflow for scientific domains
Guardrails for your research workflow — checks hypotheses, catches known bugs, flags sloppy methodology.
Strategic research thinking agents — idea evaluation, project triage, and structured brainstorming inspired by Carlini's research methodology
Scientific research agent extension - turns research goals into reproducible Jupyter notebooks with Python REPL, data analysis, and ML workflows
Synapse research orchestration plugin for Claude Code. Connects AI agents to Synapse for experiment execution, literature search, progress reporting, and autonomous research loops.