From databricks-pack
Configures Databricks CLI profiles, Asset Bundle targets, secrets, and Terraform across dev, staging, and production environments, with workspace- and catalog-level isolation.
```bash
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin databricks-pack
```
Configure Databricks across dev, staging, and production with isolated workspaces (or catalog-level isolation), per-environment secrets, Asset Bundle targets, and Terraform for workspace provisioning. Each environment gets its own credentials, Unity Catalog namespace, and compute policies.
| Environment | Workspace | Catalog | Auth | Compute |
|---|---|---|---|---|
| Development | Shared or dedicated | dev_catalog | Personal PAT | Single-node, 15-min auto-termination |
| Staging | Dedicated | staging_catalog | Service principal | Production-like, spot instances |
| Production | Dedicated | prod_catalog | Service principal (OAuth M2M) | Instance pools, auto-scale |
```ini
# ~/.databrickscfg — profile names match the bundle target names: dev, staging, prod
[dev]
host  = https://adb-dev-workspace.7.azuredatabricks.net
token = dapi_dev_token

[staging]
host          = https://adb-staging-workspace.7.azuredatabricks.net
client_id     = staging-sp-client-id
client_secret = staging-sp-secret

[prod]
host          = https://adb-prod-workspace.7.azuredatabricks.net
client_id     = prod-sp-client-id
client_secret = prod-sp-secret
```
```bash
# Use a specific environment
databricks workspace list / --profile staging
databricks clusters list --profile prod
```
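If you would rather not repeat `--profile` on every command, the CLI also reads the `DATABRICKS_CONFIG_PROFILE` environment variable; a minimal sketch (check your CLI version's auth docs):

```bash
# Pin a shell session to one profile instead of passing --profile each time
export DATABRICKS_CONFIG_PROFILE=staging
databricks workspace list /   # now targets the staging profile
```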
```yaml
# databricks.yml — single project, multiple targets
bundle:
  name: data-platform

variables:
  catalog:
    description: Unity Catalog for this environment
    default: dev_catalog
  alert_email:
    default: dev@company.com
  cluster_size:
    default: "2X-Small"

targets:
  dev:
    default: true
    mode: development
    workspace:
      host: https://adb-dev-workspace.7.azuredatabricks.net
      root_path: /Users/${workspace.current_user.userName}/.bundle/${bundle.name}/dev
    variables:
      catalog: dev_catalog

  staging:
    workspace:
      host: https://adb-staging-workspace.7.azuredatabricks.net
      root_path: /Shared/.bundle/${bundle.name}/staging
    variables:
      catalog: staging_catalog
      alert_email: staging-alerts@company.com

  prod:
    mode: production
    workspace:
      host: https://adb-prod-workspace.7.azuredatabricks.net
      root_path: /Shared/.bundle/${bundle.name}/prod
    variables:
      catalog: prod_catalog
      alert_email: oncall@company.com
      cluster_size: "Medium"
```
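With targets defined, the same project deploys anywhere by switching the `-t` flag. A quick sketch (`my_job` is a hypothetical job name from your bundle's resources):

```bash
databricks bundle validate -t dev      # sanity-check the config for a target
databricks bundle deploy -t staging    # deploy to the staging workspace
databricks bundle run -t prod my_job   # run a bundle-defined job in prod
```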
```bash
# Create environment-specific secret scopes in each workspace
for env in dev staging prod; do
  databricks secrets create-scope "${env}-secrets" --profile $env
  databricks secrets put-secret "${env}-secrets" db-password --profile $env
  databricks secrets put-secret "${env}-secrets" api-key --profile $env
done
```
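To confirm each scope landed in the right workspace before wiring it into code:

```bash
# List scopes and keys per environment — each workspace should show exactly
# one "<env>-secrets" scope containing db-password and api-key
for env in dev staging prod; do
  databricks secrets list-scopes --profile $env
  databricks secrets list-secrets "${env}-secrets" --profile $env
done
```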
```python
# Access secrets in notebooks — scope name matches environment
import os

env = os.getenv("ENVIRONMENT", "dev")
db_password = dbutils.secrets.get(scope=f"{env}-secrets", key="db-password")
api_key = dbutils.secrets.get(scope=f"{env}-secrets", key="api-key")
```
```python
# config/databricks_config.py
from dataclasses import dataclass
import os


@dataclass
class DatabricksEnvConfig:
    host: str
    catalog: str
    secret_scope: str
    debug: bool
    max_retries: int
    timeout_seconds: int


CONFIGS = {
    "dev": DatabricksEnvConfig(
        host=os.getenv("DATABRICKS_HOST_DEV", ""),
        catalog="dev_catalog",
        secret_scope="dev-secrets",
        debug=True,
        max_retries=3,
        timeout_seconds=30,
    ),
    "staging": DatabricksEnvConfig(
        host=os.getenv("DATABRICKS_HOST_STAGING", ""),
        catalog="staging_catalog",
        secret_scope="staging-secrets",
        debug=False,
        max_retries=3,
        timeout_seconds=60,
    ),
    "prod": DatabricksEnvConfig(
        host=os.getenv("DATABRICKS_HOST_PROD", ""),
        catalog="prod_catalog",
        secret_scope="prod-secrets",
        debug=False,
        max_retries=5,
        timeout_seconds=120,
    ),
}


def get_config() -> DatabricksEnvConfig:
    env = os.getenv("ENVIRONMENT", "dev")
    config = CONFIGS.get(env)
    if not config:
        raise ValueError(f"Unknown environment: {env}")
    if not config.host:
        raise ValueError(f"DATABRICKS_HOST_{env.upper()} not set")
    return config
```
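A job entry point can then pin its Spark session to the environment's catalog before any reads or writes; a minimal sketch, assuming a live `spark` session (as in a Databricks notebook or job) and a hypothetical `bronze.raw_events` table:

```python
from config.databricks_config import get_config

config = get_config()
# Pin unqualified table names to this environment's catalog
spark.sql(f"USE CATALOG {config.catalog}")
events = spark.table("bronze.raw_events")  # hypothetical table, for illustration
```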
```yaml
# .github/workflows/deploy.yml
name: Deploy Pipeline

on:
  push:
    branches: [main]

jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - run: databricks bundle deploy -t staging
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID }}
          DATABRICKS_CLIENT_SECRET: ${{ secrets.DATABRICKS_CLIENT_SECRET }}

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production  # requires manual approval via environment protection rules
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main
      - run: databricks bundle deploy -t prod
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST_PROD }}
          DATABRICKS_CLIENT_ID: ${{ secrets.DATABRICKS_CLIENT_ID_PROD }}
          DATABRICKS_CLIENT_SECRET: ${{ secrets.DATABRICKS_CLIENT_SECRET_PROD }}
```
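Bundle variables can also be overridden at deploy time without touching `databricks.yml`, which is handy for one-off CI parameters; a sketch (flag syntax per recent CLI versions — verify against yours):

```bash
# Override the alert_email variable for this deploy only
databricks bundle deploy -t staging --var="alert_email=ci-alerts@company.com"
```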
```hcl
# terraform/main.tf
# Account-level workspace creation on AWS uses databricks_mws_workspaces;
# required account_id, credentials_id, and storage_configuration_id omitted here.
resource "databricks_mws_workspaces" "staging" {
  provider        = databricks.accounts
  workspace_name  = "data-platform-staging"
  aws_region      = "us-east-1"
  pricing_tier    = "PREMIUM"
  deployment_name = "data-platform-staging"

  managed_services_customer_managed_key_id = var.cmk_id
}

resource "databricks_catalog" "staging" {
  provider = databricks.staging
  name     = "staging_catalog"
  comment  = "Staging environment catalog"
}

resource "databricks_schema" "staging_bronze" {
  provider     = databricks.staging
  catalog_name = databricks_catalog.staging.name
  name         = "bronze"
}
```
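The two provider aliases referenced above need to be declared once; a minimal sketch, assuming an AWS account-level deployment (`var.databricks_account_id` is a placeholder you supply):

```hcl
# Account-level provider for workspace provisioning
provider "databricks" {
  alias      = "accounts"
  host       = "https://accounts.cloud.databricks.com"
  account_id = var.databricks_account_id
}

# Workspace-level provider for Unity Catalog objects in staging
provider "databricks" {
  alias = "staging"
  host  = databricks_mws_workspaces.staging.workspace_url
}
```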
| Issue | Cause | Solution |
|---|---|---|
| Wrong environment targeted | Missing --profile or -t flag | Keep dev as the default profile so mistakes land in dev |
| Cross-env data leak | Shared catalog | Use a separate catalog per environment |
| Secret not found | Wrong scope name | Verify the scope exists: databricks secrets list-scopes --profile $env |
| CI auth failure | Expired service principal secret | Regenerate the OAuth secret or switch to OIDC |
```bash
# Smoke-test authentication for every profile
for profile in dev staging prod; do
  echo "=== $profile ==="
  databricks current-user me --profile $profile >/dev/null 2>&1 && echo "OK" || echo "FAILED"
done
```
```python
# Confirm the resolved configuration before running anything
import os

from config.databricks_config import get_config

config = get_config()
print(f"Environment: {os.getenv('ENVIRONMENT', 'dev')}")
print(f"Catalog: {config.catalog}")
print(f"Debug: {config.debug}")
```
For deployment, see databricks-deploy-integration.