From `lllllllama-ai-paper-reproduction-skill`
Executes authorized exploratory deep-learning experiments such as small-subset validations, batch sweeps, idle-GPU searches, and transfer-learning trials in research repos, writing ranked outputs to `explore_outputs/`.
`npx claudepluginhub lllllllama/ai-research-workflow-skills`

This skill uses the workspace's default tool permissions.
- When the researcher explicitly authorizes exploratory runs.
Orchestrates end-to-end, candidate-only AI research exploration atop `current_research`, with auditable repo understanding, idea gating, bounded code adaptation, and governed experiments written to `explore_outputs/`.
Autonomously runs deep learning experiments 24/7 in a THINK-EXECUTE-REFLECT loop with zero-cost GPU monitoring, Leader-Worker architecture, and constant-size memory.
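The THINK-EXECUTE-REFLECT loop with constant-size memory described above can be sketched as follows. This is a hypothetical illustration, not the skill's actual implementation: the `run_loop` function, the dict shapes, and the choice of a fixed-length `deque` as the "constant-size memory" are all assumptions.

```python
from collections import deque

# Hypothetical sketch: a THINK-EXECUTE-REFLECT loop whose memory is a
# fixed-length deque, so it never grows regardless of how long it runs.
MEMORY_SIZE = 8


def run_loop(experiments, memory_size=MEMORY_SIZE):
    memory = deque(maxlen=memory_size)  # constant-size: oldest entries are evicted
    results = []
    for exp in experiments:
        # THINK: plan the next run given only the bounded recent context.
        plan = {"experiment": exp, "context": list(memory)}
        # EXECUTE: stubbed here; a real loop would launch the run.
        outcome = {"experiment": plan["experiment"], "ok": True}
        # REFLECT: append a summary; deque discards the oldest when full.
        memory.append(f"ran {exp}: ok={outcome['ok']}")
        results.append(outcome)
    return results, memory
```

Because `deque(maxlen=...)` evicts from the head on append, the reflection log stays at a fixed size while every experiment still produces a result.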
Provides UI/UX resources: 50+ styles, color palettes, font pairings, guidelines, and charts for web and mobile across React, Next.js, Vue, Svelte, Tailwind, React Native, and Flutter. Aids planning, building, and reviewing interfaces.
Use `ai-research-explore` instead when the task spans both `current_research` coordination and exploratory code changes.

Settings:

- `minimal-run-and-audit` or `run-train`.
- `cost`, `success_rate`, and `expected_gain`.
- `selection_weights`.
- `max_variants` and `max_short_cycle_runs`.
- `variant_axes` to define the candidate dimension grid.
- `subset_sizes` and `short_run_steps` to express exploratory run scale.
- `selection_weights` to rebalance `cost`, `success_rate`, and `expected_gain`.
- `primary_metric` and `metric_goal` so downstream ranking can order executed candidates consistently.

Outputs:

- `explore_outputs/CHANGESET.md`
- `explore_outputs/TOP_RUNS.md`
- `explore_outputs/status.json`

Use `references/execution-policy.md`, `../../references/explore-variant-spec.md`, `scripts/plan_variants.py`, and `scripts/write_outputs.py`.
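One way the `selection_weights` over `cost`, `success_rate`, and `expected_gain` might drive candidate ranking is sketched below. The scoring rule (reward success rate and expected gain, penalize cost) and the `score`/`rank_candidates` names are assumptions for illustration, not the skill's actual code.

```python
# Hypothetical sketch: ranking candidate runs with selection_weights.
# Higher success_rate and expected_gain raise the score; higher cost lowers it.


def score(candidate, weights):
    return (weights["success_rate"] * candidate["success_rate"]
            + weights["expected_gain"] * candidate["expected_gain"]
            - weights["cost"] * candidate["cost"])


def rank_candidates(candidates, weights):
    # Best-scoring candidate first, so downstream ranking (e.g. by
    # primary_metric and metric_goal) sees a consistently ordered list.
    return sorted(candidates, key=lambda c: score(c, weights), reverse=True)
```

Rebalancing `selection_weights` then shifts which axis dominates: raising the `cost` weight favors cheap short-cycle runs, while raising `expected_gain` favors riskier, higher-payoff variants.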