Monitor running experiments, check progress, and collect results. Use when the user says "check results", "is it done", or "monitor", or asks for experiment output.
Monitor: $ARGUMENTS
List active screen sessions on the server:
ssh <server> "screen -ls"
For each screen session, capture the last N lines (50 in the example below):
ssh <server> "screen -S <name> -X hardcopy /tmp/screen_<name>.txt && tail -50 /tmp/screen_<name>.txt"
If hardcopy fails, fall back to log files or output captured with tee.
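To cover many sessions in one pass, here is a minimal loop sketch. It assumes GNU screen's `pid.name` session naming; the /tmp paths and the 50-line tail are arbitrary choices, not requirements:

```bash
#!/usr/bin/env bash
# Runs on the server: dump the visible window of every screen session
# and print its last 50 lines. Paths and tail length are placeholders.
screen -ls | awk '/(Attached|Detached)/ {print $1}' | while read -r session; do
  name="${session#*.}"                      # strip the leading "pid." prefix
  screen -S "$session" -X hardcopy "/tmp/screen_${name}.txt"
  echo "=== ${name} ==="
  tail -50 "/tmp/screen_${name}.txt"
done
```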
Check the results directory for recent output files:
ssh <server> "ls -lt <results_dir>/*.json 2>/dev/null | head -20"
If JSON results exist, fetch and parse them:
ssh <server> "cat <results_dir>/<latest>.json"
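To pull headline numbers out of a result file, jq works well. The `experiment` and `metric` keys below are assumptions about the JSON schema, not guaranteed field names:

```bash
# Extract name and metric from the latest result file.
# "experiment" and "metric" are hypothetical keys; adjust to the real schema.
ssh <server> "cat <results_dir>/<latest>.json" \
  | jq -r '"\(.experiment // "?")\t\(.metric // "?")"'
```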
Present results in a comparison table:
| Experiment | Metric | Delta vs Baseline | Status |
|-----------|--------|-------------------|--------|
| Baseline | X.XX | — | done |
| Method A | X.XX | +Y.Y | done |
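The delta column can be computed directly from the parsed metrics. A sketch, with illustrative placeholder values rather than real results:

```bash
# Render comparison rows with a delta vs the baseline metric.
# The metric values here are stand-ins, not real results.
baseline=0.82
printf '| %s | %.2f | %s | %s |\n' "Baseline" "$baseline" "—" "done"
for entry in "Method A:0.85" "Method B:0.79"; do
  name="${entry%%:*}"
  value="${entry##*:}"
  delta=$(awk -v v="$value" -v b="$baseline" 'BEGIN {printf "%+.2f", v - b}')
  printf '| %s | %.2f | %s | %s |\n' "$name" "$value" "$delta" "done"
done
```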