Analyze ML experiment results, compute statistics, and generate comparison tables and insights. Use when the user says "analyze results" or "compare", or needs to interpret experimental data.
npx claudepluginhub llv22/autoresearchwitheyes

This skill is limited to using the following tools:
Analyze: $ARGUMENTS
Performs strict statistical analysis on ML/AI experimental results, generates real scientific figures, checks significance, validates comparisons, and produces analysis bundles.
Analyzes experiment results from tables, stats, or descriptions to generate LaTeX discussion paragraphs for academic papers via a two-phase workflow: it first extracts findings for user confirmation, then writes grounded analysis.
Maintains persistent ML experiment journals in Markdown files, logging hypotheses, changes, results, metrics, and learnings across sessions.
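A persistent Markdown journal like the one described above can be sketched as a small append-only helper. This is an illustrative sketch, not the skill's actual implementation — the filename `experiment_journal.md` and the entry fields are assumptions based on the description (hypotheses, changes, results, metrics):

```python
import datetime
from pathlib import Path

# Assumed journal location; the real skill may use a different path.
JOURNAL = Path("experiment_journal.md")

def log_entry(hypothesis, change, result, metrics):
    """Append one dated entry to the Markdown experiment journal."""
    stamp = datetime.date.today().isoformat()
    lines = [
        f"## {stamp}",
        f"- **Hypothesis:** {hypothesis}",
        f"- **Change:** {change}",
        f"- **Result:** {result}",
        "- **Metrics:** " + ", ".join(f"{k}={v}" for k, v in metrics.items()),
        "",
    ]
    with JOURNAL.open("a") as f:
        f.write("\n".join(lines) + "\n")

log_entry("larger batch helps", "batch 32 -> 64", "val loss down",
          {"val_loss": 0.41, "acc": 0.87})
```

Appending (rather than rewriting) keeps earlier sessions' entries intact, which is what makes the journal persistent across sessions.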
Find all relevant JSON/CSV result files:
figures/, results/, or project-specific output directories

Organize results by:
For each finding, structure as:
If findings are significant:
Always include:
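The steps above — find result files, summarize each method, and check significance — can be sketched minimally in Python. Everything here is an assumption for illustration: the directory layout, the metric values, and the choice of Welch's t statistic as the significance check the skill might run:

```python
import json
import math
from pathlib import Path
from statistics import mean, stdev

def load_runs(results_dir):
    """Collect per-run metric dicts from JSON files under results_dir."""
    runs = []
    for path in sorted(Path(results_dir).glob("**/*.json")):
        with open(path) as f:
            runs.append(json.load(f))
    return runs

def summarize(scores):
    """Mean and sample standard deviation of one method's scores."""
    return mean(scores), (stdev(scores) if len(scores) > 1 else 0.0)

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two score lists."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical per-seed accuracy scores for two methods.
baseline = [0.712, 0.708, 0.715, 0.709]
proposed = [0.731, 0.728, 0.735, 0.730]
m, s = summarize(proposed)
t, df = welch_t(proposed, baseline)
print(f"proposed: {m:.3f} +/- {s:.3f}")
print(f"Welch t = {t:.2f}, df ~ {df:.1f}")
```

Comparing the t statistic against a t-distribution critical value (e.g. via `scipy.stats`) would then give a p-value; reporting mean with standard deviation across seeds keeps the comparison honest about run-to-run variance.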