By dlt-hub
Connect to dlt pipelines to profile tables, scan schemas, generate analysis plans with ibis queries and altair charts, then assemble, validate, and launch marimo Python dashboard notebooks for rapid data insights.
npx claudepluginhub dlt-hub/dlthub-ai-workbench --plugin data-exploration

This skill should be used when the user asks to "build the notebook", "launch the dashboard", "generate the marimo notebook", or when an analysis_plan.md artifact exists and the user wants to assemble or regenerate the dashboard. Reads chart specs with ibis queries and altair code from analysis_plan.md, assembles a marimo Python file, validates, and launches. Do NOT use for exploring data or planning charts (use explore-data), building pipelines (use rest-api-pipeline toolkit), or deploying (use dlthub-runtime toolkit).
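To illustrate the assembly step, here is a minimal stdlib-only sketch of how chart specs could be lifted out of an analysis_plan.md and wrapped into marimo-style cells. The plan's exact format and the helper names (`extract_chart_specs`, `assemble_marimo`) are assumptions for illustration, not the skill's actual implementation; marimo notebooks are, however, plain Python files built from `@app.cell` functions.

```python
import re

def extract_chart_specs(plan_md: str) -> list[str]:
    # Pull fenced ```python blocks (the ibis + altair chart specs) out of
    # the analysis_plan.md body. The plan layout here is an assumption.
    return re.findall(r"```python\n(.*?)```", plan_md, flags=re.DOTALL)

def assemble_marimo(specs: list[str]) -> str:
    # Wrap each spec in its own marimo cell; the real skill also
    # validates the assembled file before launching it.
    cells = []
    for i, spec in enumerate(specs):
        body = "\n".join("    " + line for line in spec.rstrip().splitlines())
        cells.append(f"@app.cell\ndef cell_{i}():\n{body}\n")
    return (
        "import marimo\n\napp = marimo.App()\n\n"
        + "\n".join(cells)
        + '\nif __name__ == "__main__":\n    app.run()\n'
    )
```

The resulting string can be written to a `.py` file and launched with `marimo run`.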
This skill should be used when the user asks to "explore my data", "what can I learn from this pipeline", "what's the revenue trend", "show me charts", "visualize my pipeline", "analyze my data", "profile data quality", "what questions can I ask about my data", "map my data to business concepts", or wants to explore, profile, analyze, or chart data from a dlt pipeline. Connects to a pipeline, profiles tables or scans schema, plans charts with ibis + altair code, and writes an analysis_plan.md artifact. Do NOT use for building or fixing pipelines (use rest-api-pipeline toolkit), deploying pipelines (use dlthub-runtime toolkit), or assembling the marimo notebook from an analysis plan (use build-notebook).
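As a rough idea of what the "profile tables" step produces, the toy function below computes the kind of per-column summary (row count, null share, numeric min/max/mean) a profiling pass might emit. It is a stdlib-only illustration; the actual skill profiles tables through the pipeline's SQL backend, and the function name is hypothetical.

```python
import statistics

def profile_column(values: list) -> dict:
    # Toy per-column profile: count, null share, and basic numeric
    # stats when every non-null value is numeric.
    non_null = [v for v in values if v is not None]
    profile = {
        "count": len(values),
        "null_share": 1 - len(non_null) / len(values) if values else 0.0,
    }
    if non_null and all(isinstance(v, (int, float)) for v in non_null):
        profile["min"] = min(non_null)
        profile["max"] = max(non_null)
        profile["mean"] = statistics.fmean(non_null)
    return profile
```

A real profile would also cover distinct counts and type inference, but this shows the shape of the artifact the planning step reasons over.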
Transform raw dlt pipeline data into a Canonical Data Model. Build an ontology, design a CDM with Kimball dimensional modeling, write @dlt.hub.transformation functions, and validate the output.
Share bugs, ideas, or general feedback.
Data engineering plugin - warehouse exploration, pipeline authoring, Airflow integration
Data engineering and time series analysis mastery. Expert in jq, SQL, pandas, time series forecasting, ETL pipelines, streaming, and analytics visualization.
Data engineering and ETL tools. Includes 3 specialized agents, 4 commands, and 19 skills.
Write SQL, explore datasets, and generate insights faster. Build visualizations and dashboards, and turn raw data into clear stories for stakeholders.
DataHub development and interaction toolkit with connector planning, PR review, catalog search, metadata enrichment, lineage tracing, data quality management, and connection setup skills