From dlthub-runtime
Prepares dltHub Runtime for production by splitting dev/prod secrets with MCP tools and configuring destinations like MotherDuck. Use for data pipeline deployment.
npx claudepluginhub dlt-hub/dlthub-ai-workbench --plugin dlthub-runtime

This skill uses the workspace's default tool permissions.
Set up profile-scoped credentials and production destinations so the runtime can run pipelines with the right config.
Deploys dlt pipelines to dltHub Runtime. Prepares Python scripts by removing dev_mode/limits, verifies dispositions, pins versions; runs dlt CLI deploy/launch/logs. For runtime deployment requests.
Safely configures and manages dlt secrets in TOML files for API keys, database passwords, tokens. Useful for credential setup requests or Python code using dlt.secrets.
Reference: https://dlthub.com/docs/hub/core-concepts/profiles-dlthub.md
.dlt/ config structure

Run ls .dlt/*.toml to see which files exist:
.dlt/
├── config.toml # Workspace config (all profiles)
├── secrets.toml # Workspace secrets (all profiles, gitignored)
├── .workspace # Enable profiles and runtime CLI
Per-profile files may also exist; you will create some of them below:
├── dev.config.toml # Dev-only config
├── dev.secrets.toml # Dev-only secrets (gitignored)
├── prod.config.toml # Production config
├── prod.secrets.toml # Production secrets (gitignored)
├── access.config.toml # Interactive notebook config
└── access.secrets.toml # Interactive notebook secrets (gitignored)
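As a sketch of how the split works in practice, the same key can live in both profile files with different values; the active profile decides which one the runtime sees. The sources.github section below is an illustrative example, not a required key:

```toml
# .dlt/dev.secrets.toml (active only under the dev profile)
[sources.github]
access_token = "dev-token"
```

```toml
# .dlt/prod.secrets.toml (active only under the prod profile)
[sources.github]
access_token = "prod-token"
```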
Use secrets_list, secrets_view_redacted, and secrets_update_fragment MCP tools (or dlt ai secrets CLI as fallback) — see (setup-secrets) skill for details.
Use secrets_list to see all secret files. Then use secrets_view_redacted with no path for the unified merged view, or with a path to inspect individual files.

Write dev profile secrets via secrets_update_fragment with path=".dlt/dev.secrets.toml". Write prod profile secrets via secrets_update_fragment with path=".dlt/prod.secrets.toml"; the user should fill in production values for their sources and destinations. Verify with secrets_view_redacted, both the unified view and per-file with path.

Offer to set up a production destination. If the user is using duckdb, explain why ingested data will not survive to be visible by notebooks (the runtime erases ephemeral storage!).
If the dev destination is duckdb, offer to set up MotherDuck as the production destination. dlt supports most major warehouses, data lakes, and plain filesystems. The goal here is to keep the existing dev destination in the dev profile and configure the production destination in the prod profile, so the user can continue development as usual while deploying with the same code.
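A sketch of the MotherDuck credentials section dlt reads from the secrets file; the database name is an example, and the token placeholder must be replaced with the user's real MotherDuck service token:

```toml
# .dlt/prod.secrets.toml
[destination.motherduck.credentials]
database = "dlt_data"                    # example database name
password = "<motherduck-service-token>"  # fill in a real token
```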
Learn the concept of named destinations first:
Recommend that the user switch to a named destination: set destination to $name for all pipelines being deployed (all scripts, including notebooks). STOP before making changes: show your plan and get an OK from the user.
Read check_destination.py and run it to verify credentials work:
uv run python .claude/skills/prepare-deployment/check_destination.py <profile> <destination> [dataset_name]
Use secrets_view_redacted to see the final unified view across all workspace secret files. Confirm:
Tell the user the workspace is ready for deployment — use (deploy-workspace) next.