From ai-analyst
Resume an interrupted analysis pipeline by reading `working/pipeline_state.json`, determining which agents completed, and continuing from the next READY agents using the DAG walker.
Invoke as `/resume-analysis` when a pipeline was interrupted: after hitting context limits, after a failure, or when returning from a break between sessions.
Search for the most recent pipeline state in this order:

1. `working/latest/pipeline_state.json` (symlink to the latest run). If found, set RUN_DIR from the symlink target and proceed to Step 2.
2. If the user supplied a run id (e.g. `/resume-analysis 2026-02-23_acme-analytics_why-revenue-dropped`), look in `working/runs/{id}/pipeline_state.json`. Set RUN_DIR accordingly.
3. `working/pipeline_state.json` (pre-run-directory pipelines). If found, read it and proceed to Step 2 without a RUN_DIR.

Pipeline state fields to extract (V2):

- `run_id` -- identifies this run
- `run_dir` -- per-run directory path (may be absent for legacy runs)
- `dataset` -- active dataset
- `question` -- the business question
- `status` -- `running`, `paused`, or `failed`
- `agents` -- map of agent name to agent state (status, output_file, timestamps)

After loading the state file and before any processing, check whether the state uses the V1 (step-number keyed) format and migrate it to V2 if needed.
```python
from helpers.pipeline_state import detect_schema_version, migrate_v1_to_v2

if detect_schema_version(state) < 2:
    # Resolve dataset from active.yaml or fall back to "unknown"
    dataset = state.get("dataset") or resolve_active_dataset() or "unknown"
    state = migrate_v1_to_v2(state, dataset=dataset)
    # Write migrated state back to disk (same location it was read from)
    write_pipeline_state(state_path, state)
    print("Migrated pipeline state from V1 -> V2 format")
```
Migration details (handled by `helpers/pipeline_state.py`):

- `pipeline_id` (ISO timestamp) -> `started_at`; generate `run_id` from date + dataset + question slug
- `steps.{n}.agent` keys -> `agents.{agent_name}` keys
- `steps.{n}.output_files[0]` -> `agents.{name}.output_file` (take the first)
- `schema_version: 2` and `updated_at` set to the current time
- If `status` was `running`, it becomes `paused` at the pipeline level (the run was interrupted)

After migration, continue with the V2 fields listed above.
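The field mapping above can be sketched roughly as follows. This is a minimal illustration, not the actual `helpers/pipeline_state.py`; the `slugify` helper and any field defaults beyond those listed are assumptions:

```python
import datetime
import re

def slugify(text: str) -> str:
    """Lowercase and replace non-alphanumeric runs with hyphens (illustrative helper)."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def migrate_v1_to_v2(state: dict, dataset: str) -> dict:
    """Sketch of the V1 (step-number keyed) -> V2 (agent keyed) migration."""
    started_at = state["pipeline_id"]  # V1 stored an ISO timestamp here
    date = started_at[:10]
    question = state.get("question", "")
    agents = {}
    for step in state.get("steps", {}).values():
        name = step["agent"]
        entry = {"status": step.get("status", "pending")}
        outputs = step.get("output_files") or []
        if outputs:
            entry["output_file"] = outputs[0]  # take the first output only
        agents[name] = entry
    status = state.get("status", "paused")
    if status == "running":
        status = "paused"  # a running V1 pipeline was interrupted
    return {
        "schema_version": 2,
        "run_id": f"{date}_{slugify(dataset)}_{slugify(question)}",
        "started_at": started_at,
        "updated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "dataset": dataset,
        "question": question,
        "status": status,
        "agents": agents,
    }
```

Note the interrupted-run rule: a V1 pipeline that was still `running` comes out `paused`, so the resume flow treats it the same as any other stopped run.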
If no state file exists, scan working/ and outputs/ for artifacts:
| Agent | Expected Artifact | Directory |
|---|---|---|
| question-framing | question_brief_*.md | outputs/ |
| hypothesis | hypothesis_doc_*.md | outputs/ |
| data-explorer | data_inventory_*.md | outputs/ |
| source-tieout | tieout_*.md | working/ |
| descriptive-analytics | analysis_report_*.md | outputs/ |
| root-cause-investigator | investigation_*.md | working/ |
| validation | validation_*.md | outputs/ |
| opportunity-sizer | sizing_*.md | working/ |
| story-architect | storyboard_*.md | working/ |
| narrative-coherence-reviewer | coherence_review_*.md | working/ |
| chart-maker | charts/*.png | outputs/ |
| visual-design-critic | design_review_*.md | working/ |
| storytelling | narrative_*.md | outputs/ |
| deck-creator | deck_*.md | outputs/ |
Walk the list top to bottom. If an artifact exists and looks complete (not empty, no "NEEDS REVISION" markers), mark that agent as completed. Reconstruct a pipeline_state.json from this scan.
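The fallback scan can be sketched like this; the table above supplies the patterns, and the completeness check is the one described (non-empty, no "NEEDS REVISION" marker). Function names are illustrative, not part of the plugin's API:

```python
import glob
import os

# (agent, glob pattern) pairs, in pipeline order -- from the table above
AGENT_ARTIFACTS = [
    ("question-framing", "outputs/question_brief_*.md"),
    ("hypothesis", "outputs/hypothesis_doc_*.md"),
    ("data-explorer", "outputs/data_inventory_*.md"),
    ("source-tieout", "working/tieout_*.md"),
    ("descriptive-analytics", "outputs/analysis_report_*.md"),
    ("root-cause-investigator", "working/investigation_*.md"),
    ("validation", "outputs/validation_*.md"),
    ("opportunity-sizer", "working/sizing_*.md"),
    ("story-architect", "working/storyboard_*.md"),
    ("narrative-coherence-reviewer", "working/coherence_review_*.md"),
    ("chart-maker", "outputs/charts/*.png"),
    ("visual-design-critic", "working/design_review_*.md"),
    ("storytelling", "outputs/narrative_*.md"),
    ("deck-creator", "outputs/deck_*.md"),
]

def looks_complete(path: str) -> bool:
    """Non-empty and free of revision markers (binary charts checked by size only)."""
    if os.path.getsize(path) == 0:
        return False
    if path.endswith(".md"):
        with open(path, encoding="utf-8") as f:
            return "NEEDS REVISION" not in f.read()
    return True

def reconstruct_agents(root: str = ".") -> dict:
    """Mark an agent completed when a complete artifact matches its pattern."""
    agents = {}
    for name, pattern in AGENT_ARTIFACTS:
        matches = sorted(glob.glob(os.path.join(root, pattern)))
        done = any(looks_complete(m) for m in matches)
        entry = {"status": "complete" if done else "pending"}
        if done:
            entry["output_file"] = matches[-1]
        agents[name] = entry
    return agents
```

The resulting `agents` map can then be wrapped in the V2 envelope (`run_id`, `dataset`, `status`, timestamps) to serve as the reconstructed `pipeline_state.json`.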
Read `agents/registry.yaml` to build the dependency graph, then normalize each `state["agents"][agent_name]["status"]`:

- `complete`, `skipped`, or `degraded` → leave it
- `failed` → reset to `pending` (will be retried)
- `in_progress` or `running` → reset to `pending` (was interrupted)

READY agents are those with `status: pending` whose every dependency is complete.

Read each completed agent's output files and extract a brief summary.
Compile into a context block for the resumed session.
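The readiness rule is the core of the DAG walker and fits in a few lines. A minimal sketch, assuming the registry yields a simple `agent -> [dependencies]` map (the exact `agents/registry.yaml` schema is not specified here):

```python
def ready_agents(agents: dict, deps: dict) -> list:
    """Agents with status pending whose every dependency has status complete.

    `agents` is the V2 state map; `deps` maps agent name -> list of
    dependency names, as a registry file might declare them (assumed shape).
    """
    done = {name for name, a in agents.items() if a.get("status") == "complete"}
    return [
        name
        for name, a in agents.items()
        if a.get("status") == "pending" and all(d in done for d in deps.get(name, []))
    ]
```

Calling this after each agent finishes (and re-marking it `complete`) walks the whole DAG, naturally running independent branches as they unblock.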
Display:

```
Resuming pipeline {run_id}

Completed agents: {count}
  - {agent_name}: {one-line summary from outputs}
  - ...

Failed/interrupted agents (will retry): {count}
  - {agent_name}: {error or "interrupted"}

Next READY agents: {list}

Resume execution?
```
On confirmation:

- Set the pipeline `status: running` and reset failed/running agents to `pending`.
- Dispatch the next READY agents; reset agents are `pending`, not completed, so they are re-run rather than skipped.
- If a retried agent fails again, set the pipeline status to `failed`. The user must investigate before resuming.
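The confirmation step is a small, idempotent state update. A sketch of what it might look like (illustrative helper, not the plugin's actual code):

```python
def prepare_resume(state: dict) -> dict:
    """Mark the pipeline running and reset interrupted/failed agents to pending."""
    for agent in state["agents"].values():
        if agent.get("status") in ("failed", "in_progress", "running"):
            agent["status"] = "pending"  # will be retried, not treated as done
    state["status"] = "running"
    return state
```

Completed, skipped, and degraded agents are untouched, so re-running the resume command after another interruption never redoes finished work.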