From dlthub-runtime
Deploys dlt pipelines to dltHub Runtime. Prepares Python scripts by removing dev_mode/limits, verifies dispositions, pins versions; runs dlt CLI deploy/launch/logs. For runtime deployment requests.
```sh
npx claudepluginhub dlt-hub/dlthub-ai-workbench --plugin dlthub-runtime
```

This skill uses the workspace's default tool permissions.
Assumes (`setup-runtime`) and (`prepare-deployment`) have been completed — workspace is set up, credentials are configured, and runtime login is done.
Verifies dlt workspace readiness for dltHub Runtime by checking pyproject.toml, .dlt/.workspace, dlt[hub] dependency with uv, runtime login, and profile files. Use before first deployment or on prereq errors.
Scaffolds minimal dlt REST API pipeline via dlt init command for rest_api core source or generic HTTP APIs. Excludes sql_database/filesystem sources.
Review each script being deployed and fix patterns that are safe locally but harmful in production:
- `dev_mode=True` in `dlt.pipeline()` calls — it drops and recreates the dataset on every run, destroying production data.
- `limit=N` parameters, `.add_limit(N)` calls, or hardcoded date ranges meant for testing. Either remove them or make them configurable (e.g. via `dlt.config.value`).
- `write_disposition` — `"replace"` is fine for full-refresh pipelines, but confirm the user doesn't actually want `"merge"` or `"append"` for incremental loads.
- `if __name__ == "__main__":` block — every script must have one or the runtime job does nothing. The block should NOT contain interactive/debug-only code.
- `pyproject.toml` — pin versions with `==`, not `>=`, to prevent unexpected upgrades on the runtime. If the user has a pre-release (e.g. `1.23.0a3`), install it with `uv pip install` and pin it with `==` in `pyproject.toml` (do NOT use `uv add`, which may downgrade to the latest stable release).

For notebooks (marimo apps), also check that:

- the pipeline is attached with `dlt.attach()` (not `dlt.pipeline()`) and that `destination` and `dataset_name` are explicitly passed (a temporary limitation of the runtime)
- all notebook dependencies (`altair`, `ibis-framework`, `pandas`, etc.) are in `pyproject.toml`

Reference: https://dlthub.com/docs/hub/runtime/overview.md
```sh
dlt runtime deploy                   # sync code + config
dlt runtime launch my_pipeline.py    # run a batch job once (i.e. a pipeline)
dlt runtime serve my_notebook.py     # run an interactive job (i.e. a notebook)
dlt runtime logs my_pipeline.py      # check output
```
After deploying:
- Check output with `dlt runtime logs`; on errors, use (`debug-deployment`) to diagnose.

NOTE: do not put any pipelines on a schedule. This part is coming soon.
- Every script needs an `if __name__ == "__main__":` block, or the job does nothing.
- `pyproject.toml` — add all needed packages (e.g. `uv add numpy pandas` if using `.df()`).
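For the version-pinning point, a `pyproject.toml` dependency table might look like the fragment below. The project name and version numbers are placeholders, not recommendations:

```toml
[project]
name = "my-dlt-workspace"       # hypothetical project name
version = "0.1.0"
dependencies = [
    "dlt[hub]==1.6.1",          # pinned with ==, never >=, to avoid surprise upgrades on runtime
    "numpy==2.1.2",             # needed if a script calls .df()
    "pandas==2.2.3",
]
```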