Assembles marimo Python dashboard notebooks from analysis_plan.md specs with ibis queries and altair charts, validates with marimo check, manages dependencies, and launches.
Install: `npx claudepluginhub dlt-hub/dlthub-ai-workbench --plugin data-exploration`

This skill uses the workspace's default tool permissions.
Read a `<date>_<pipeline_name>_analysis_plan.md` artifact and assemble a marimo notebook with all charts.
Parse `$ARGUMENTS`:

- `spec-path` (optional): path to the analysis_plan.md file. If omitted, look for `*_analysis_plan.md` in the working directory. If multiple are found, ask the user which to use and stop.

Parse the analysis plan file for the pipeline name and the chart specifications.
If the analysis plan file is missing or has no charts, tell the user to run `explore-data` first and stop.
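The spec-path lookup described above can be sketched with the standard library; the helper name `resolve_spec` and its messages are illustrative, not part of the skill:

```python
from pathlib import Path
from typing import Optional

def resolve_spec(arg: Optional[str] = None) -> Optional[Path]:
    """Resolve the analysis-plan path: explicit argument wins,
    otherwise glob the working directory; ambiguity stops the run."""
    if arg:
        return Path(arg)
    candidates = sorted(Path.cwd().glob("*_analysis_plan.md"))
    if len(candidates) == 1:
        return candidates[0]
    if not candidates:
        print("No analysis plan found; run explore-data first.")
    else:
        # Ambiguous: ask the user instead of guessing.
        print("Multiple analysis plans found; pass spec-path explicitly.")
    return None
```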
Generate `<pipeline_name>_dashboard.py`. Read references/notebook-patterns.md for the complete notebook structure, cell templates, and naming conventions before generating. Every chart cell must end with `_chart` on a bare line, then `return`; without the bare expression line, nothing renders.
For general marimo cell structure, reactivity, and best practices, fetch https://github.com/marimo-team/skills/tree/main/skills/marimo-notebook. For SQL-specific patterns in marimo, fetch https://github.com/marimo-team/skills/blob/main/skills/marimo-notebook/references/SQL.md. For dlt-dashboard-specific templates, see references/notebook-patterns.md.
Run marimo's linter to catch common mistakes:

```sh
uvx marimo check <pipeline_name>_dashboard.py
```
If validation fails, fix the reported errors and re-run `uvx marimo check` until it passes.

The notebook requires pandas, numpy, and altair, which are not installed by `dlt[workspace]`. Before launching, check whether they are available; if any are missing, ask the user how they want to install them:
- Add to `pyproject.toml` (preferred for production projects): `uv add pandas numpy altair`
- Install into the current environment only: `uv pip install pandas numpy altair`

Also add `marimo` if not already installed, and `ibis-framework[duckdb]` if any chart uses ibis.
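The availability check can be done with the standard library before asking the user; the `required` mapping and printed hints here are illustrative:

```python
import importlib.util

# Packages the dashboard needs, per the list above. This only
# reports what is missing; it never installs without confirmation.
required = ["pandas", "numpy", "altair", "marimo"]

missing = sorted(
    pkg for pkg in required
    if importlib.util.find_spec(pkg) is None
)

if missing:
    print("Missing:", ", ".join(missing))
    print("Suggested install: uv add " + " ".join(missing))
else:
    print("All dashboard dependencies are available.")
```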
Do NOT install packages without user confirmation.
After validation passes, offer to launch in browser or skip.
If yes:

```sh
uv run marimo edit <pipeline_name>_dashboard.py --no-token
```
Tell the user the notebook is running (default: localhost:2718).
When re-invoked after iteration (see workflow.md): re-read the full analysis_plan.md and regenerate the entire notebook file, then validate and relaunch.
Two cells export the same variable name. Fix: follow the naming conventions in references/notebook-patterns.md.
A dependency is missing from the environment. Install it with `uv pip install <package>` and re-check.
The SQL query returns no rows. Common causes:

- Overly restrictive `where` clauses: check actual column values with `get_table_schema` and `row_counts`.
- The pipeline name is wrong or the pipeline hasn't been run: run `dlt pipeline <name> info` to verify.
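The empty-result diagnosis works against any DB-API connection: compare the unfiltered row count with the filtered one. Stdlib sqlite3 stands in here for the pipeline's actual database, and the table and filter values are made up:

```python
import sqlite3

# Stand-in dataset: two orders, no "toys" category.
conn = sqlite3.connect(":memory:")
conn.execute("create table orders (category text, revenue real)")
conn.executemany("insert into orders values (?, ?)",
                 [("books", 10.0), ("games", 25.0)])

total = conn.execute("select count(*) from orders").fetchone()[0]
filtered = conn.execute(
    "select count(*) from orders where category = ?", ("toys",)
).fetchone()[0]

# total > 0 with filtered == 0 points at the where clause,
# not at a missing or unloaded table.
print(f"total rows: {total}, rows matching filter: {filtered}")
```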
Input: `2026-03-10_orders_pipeline_analysis_plan.md` with 2 charts (Monthly Revenue Trend + Revenue by Category).
Output: `orders_pipeline_dashboard.py` with 2 data cells + 2 chart cells, structured per references/notebook-patterns.md.
Validation: `uvx marimo check orders_pipeline_dashboard.py` → passes.
Launch: `uv run marimo edit orders_pipeline_dashboard.py --no-token`