Deploy Databricks jobs and pipelines with Asset Bundles. Use when deploying jobs to different environments, managing deployments, or setting up deployment automation. Trigger with phrases like "databricks deploy", "asset bundles", "databricks deployment", "deploy to production", "bundle deploy".
From the databricks-pack plugin. Install with:

```bash
npx claudepluginhub nickloveinvesting/nick-love-plugins --plugin databricks-pack
```
Deploy Databricks jobs and pipelines using Databricks Asset Bundles (DABs). Asset Bundles provide infrastructure-as-code for deploying jobs, notebooks, DLT pipelines, and ML models across workspaces with proper environment isolation and CI/CD integration.
Requires the Databricks CLI (`databricks` command) and a `databricks.yml` bundle configuration.

```bash
# Create new bundle from template
databricks bundle init

# Or manually create databricks.yml
```
```yaml
# databricks.yml
bundle:
  name: etl-pipeline

workspace:
  host: https://myworkspace.cloud.databricks.com

resources:
  jobs:
    daily_etl:
      name: daily-etl-${bundle.target}
      schedule:
        quartz_cron_expression: "0 0 6 * * ?"
        timezone_id: "America/New_York"
      tasks:
        - task_key: extract
          notebook_task:
            notebook_path: ./src/extract.py
          new_cluster:
            spark_version: "14.3.x-scala2.12"
            node_type_id: "i3.xlarge"
            num_workers: 2
        - task_key: transform
          depends_on:
            - task_key: extract
          notebook_task:
            notebook_path: ./src/transform.py

targets:
  development:
    default: true
    workspace:
      host: https://dev.cloud.databricks.com
  staging:
    workspace:
      host: https://staging.cloud.databricks.com
  production:
    workspace:
      host: https://prod.cloud.databricks.com
```

Note: current Asset Bundle releases use `targets:` and `${bundle.target}`; the older `environments:` key and `${bundle.environment}` variable are deprecated.
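Targets can also override individual resource settings, which DABs merges over the top-level definition (job tasks merge by `task_key`). A hedged sketch — the larger node type and worker count below are illustrative assumptions, not values from the original config:

```yaml
# Hypothetical per-target override: production runs the same daily_etl job
# with a larger cluster. Merged over the top-level resources by key.
targets:
  production:
    workspace:
      host: https://prod.cloud.databricks.com
    resources:
      jobs:
        daily_etl:
          tasks:
            - task_key: extract          # matched against the base task by task_key
              new_cluster:
                spark_version: "14.3.x-scala2.12"
                node_type_id: "i3.2xlarge"   # assumed production node type
                num_workers: 8               # assumed production sizing
```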
```bash
# Validate bundle configuration
databricks bundle validate -t production

# Deploy resources (create/update jobs, notebooks)
databricks bundle deploy -t staging

# Run a specific job, referenced by its resource key from databricks.yml
databricks bundle run daily_etl -t staging

# Destroy resources in a target
databricks bundle destroy -t development
```
```yaml
# .github/workflows/deploy.yml
name: Deploy Databricks Bundle

on:
  push:
    branches: [main]

jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - run: databricks bundle validate -t staging
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
      - run: databricks bundle deploy -t staging
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - run: databricks bundle deploy -t production
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_PROD_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_PROD_TOKEN }}
```
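Personal access tokens work, but for CI the Databricks CLI's unified authentication also accepts OAuth machine-to-machine credentials for a service principal via environment variables. A hedged variant of the production deploy step — the GitHub secret names are assumptions:

```yaml
# Alternative auth: OAuth M2M with a service principal instead of a PAT.
# DATABRICKS_CLIENT_ID / DATABRICKS_CLIENT_SECRET are read by the CLI's
# unified auth; the secret names below are placeholders.
      - run: databricks bundle deploy -t production
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_PROD_HOST }}
          DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_SP_CLIENT_ID }}
          DATABRICKS_CLIENT_SECRET: ${{ secrets.DATABRICKS_SP_CLIENT_SECRET }}
```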
```bash
# List deployed jobs
databricks jobs list --output json | jq '.[] | select(.settings.name | contains("etl"))'

# Check recent runs (the unified CLI replaces the legacy `databricks runs` group)
databricks jobs list-runs --job-id $JOB_ID --limit 5

# Get run output
databricks jobs get-run-output $RUN_ID
```
| Issue | Cause | Solution |
|---|---|---|
| Bundle validation fails | Invalid YAML | Run `databricks bundle validate` locally |
| Permission denied | Missing workspace access | Check service principal permissions |
| Cluster start fails | Quota exceeded | Request a quota increase or use smaller node types |
| Job timeout | Long-running task | Set `timeout_seconds` in the job config |
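The timeout fix from the table can be sketched directly in the job definition; the values and notification address below are illustrative, not from the original config:

```yaml
# Hypothetical hardening of daily_etl: the 2-hour timeout and the
# notification address are assumed values for illustration.
resources:
  jobs:
    daily_etl:
      timeout_seconds: 7200        # fail the run after 2 hours
      max_concurrent_runs: 1       # assumed: avoid overlapping daily runs
      email_notifications:
        on_failure:
          - data-team@example.com  # placeholder address
```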
Basic usage: initialize a bundle with `databricks bundle init`, adjust `databricks.yml`, and deploy to the default development target with `databricks bundle deploy`.
Advanced scenario: define staging and production targets with per-target workspace hosts and resource overrides, authenticate CI with a service principal, and gate production deploys behind a successful staging deploy.
For multi-environment setup, see databricks-multi-env-setup.