Analyzes ML/AI experiment results with rigorous statistics, validates data artifacts, generates real scientific figures, and outputs structured bundles: reports, stats appendix, figure catalog.
From claude-scholar. Install: `npx claudepluginhub galaxy-dawn/claude-scholar --plugin claude-scholar`. This skill uses the workspace's default tool permissions.
Bundled files:
- USAGE.md
- examples/example-analysis-report.md
- examples/example-figure-catalog.md
- examples/example-stats-appendix.md
- references/analysis-depth.md
- references/common-pitfalls.md
- references/figure-interpretation.md
- references/statistical-methods.md
- references/statistical-reporting.md
Run strict, evidence-first experimental analysis for ML/AI research.
Use this skill to produce a strict analysis bundle:
- analysis-report.md
- stats-appendix.md
- figure-catalog.md
- figures/

Do not use this skill to draft a paper Results section or a full experiment wrap-up report. Those belong to ml-paper-writing or results-report.
Do not include Results prose. If the user wants the complete post-experiment summary report, hand off to results-report after this bundle is ready.
Start by identifying the available artifacts and their formats (csv, json, tsv, logs). Validate that they actually support the intended comparison.
If the comparison is not statistically valid, say so before continuing.
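As a minimal sketch of that validation step (the column names `run`, `seed`, and `metric` are hypothetical, not part of this skill's spec), checking that two runs are actually comparable might look like:

```python
import csv
from collections import defaultdict

def validate_comparison(path, baseline, candidate):
    """Check that a results CSV supports a baseline-vs-candidate comparison.

    Assumes hypothetical columns: run, seed, metric.
    Returns a list of problems; an empty list means the comparison looks valid.
    """
    seeds = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            seeds[row["run"]].add(row["seed"])

    problems = []
    for run in (baseline, candidate):
        if run not in seeds:
            problems.append(f"missing run: {run}")
    # Comparing runs evaluated on different seeds is not a valid comparison.
    if not problems and seeds[baseline] != seeds[candidate]:
        problems.append("runs were evaluated on different seed sets")
    return problems
```

If this returns a non-empty list, report the problems instead of proceeding to statistics.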
Before running statistics, define the exact comparison questions:
Do not mix unrelated comparisons into one undifferentiated table.
Always produce:
- mean ± std when appropriate,
- a 95% CI or another clearly justified interval.

For the default reporting expectations, see:
- references/statistical-methods.md
- references/statistical-reporting.md

Produce actual figures whenever artifacts are available.
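The mean ± std and interval defaults above can be sketched with the standard library; the normal-approximation CI here is an illustrative assumption (for small n, a t-interval from a stats library is more appropriate):

```python
import math
from statistics import NormalDist, mean, stdev

def summarize(scores, confidence=0.95):
    """Return mean, sample std, and a normal-approximation confidence interval."""
    m = mean(scores)
    s = stdev(scores)  # sample standard deviation (n - 1 denominator)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for 95%
    half = z * s / math.sqrt(len(scores))
    return m, s, (m - half, m + half)

m, s, ci = summarize([0.82, 0.85, 0.84, 0.88, 0.86])
print(f"{m:.3f} ± {s:.3f} (95% CI: {ci[0]:.3f} to {ci[1]:.3f})")
```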
Minimum expectation for a non-trivial analysis bundle:
Every main figure must define:
See:
- references/visualization-best-practices.md
- references/figure-interpretation.md

In analysis-report.md, summarize:
In stats-appendix.md, record:
In figure-catalog.md, for each figure, record:
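One way to keep catalog entries complete is to refuse to write a figure-catalog.md entry until every field is filled in. The field names below are assumptions for illustration, not the skill's required schema:

```python
REQUIRED = ("file", "caption", "x_axis", "y_axis", "n", "claim_supported")

def catalog_entry(fig):
    """Render one figure-catalog.md entry, rejecting incomplete metadata."""
    missing = [k for k in REQUIRED if not fig.get(k)]
    if missing:
        raise ValueError(f"incomplete figure metadata: {missing}")
    return (
        f"## {fig['file']}\n"
        f"- Caption: {fig['caption']}\n"
        f"- Axes: {fig['x_axis']} vs {fig['y_axis']} (n={fig['n']})\n"
        f"- Claim supported: {fig['claim_supported']}\n"
    )
```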
Do not finish until all are true:
No Results draft is included.

Output layout:

analysis-output/
├── analysis-report.md
├── stats-appendix.md
├── figure-catalog.md
└── figures/
    ├── figure-01-main-comparison.pdf
    ├── figure-02-ablation.pdf
    └── ...
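The completion checklist can be enforced mechanically. This sketch only checks that the bundle files exist on disk, which is a simplifying assumption about what "done" means (it does not verify report content):

```python
from pathlib import Path

REQUIRED_FILES = ("analysis-report.md", "stats-appendix.md", "figure-catalog.md")

def bundle_is_complete(root="analysis-output"):
    """Return a list of problems with the analysis bundle (empty if complete)."""
    root = Path(root)
    problems = [f"missing {name}" for name in REQUIRED_FILES
                if not (root / name).is_file()]
    figures = root / "figures"
    if not figures.is_dir() or not any(figures.glob("*.pdf")):
        problems.append("figures/ has no PDF figures")
    return problems
```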
For every major figure, answer all three questions:
If a figure cannot answer question 3, it is probably decorative rather than scientific.
When inputs are incomplete, say so explicitly.
Examples:
Never replace missing evidence with confident prose.
Load only what is needed:
- references/statistical-methods.md - test selection and assumptions
- references/statistical-reporting.md - minimum reporting standard
- references/visualization-best-practices.md - publication-quality figure rules
- references/figure-interpretation.md - how to explain figures with evidence
- references/analysis-depth.md - move from observation to mechanism and decision
- references/common-pitfalls.md - common analysis and reporting failures
- examples/example-analysis-report.md
- examples/example-stats-appendix.md
- examples/example-figure-catalog.md