By databricks
Rapidly build, deploy, and manage Databricks workloads using AI skills: scaffold React/TS apps and dashboards, automate CLI ops and DABs for multi-env deploys, create Lakeflow Spark pipelines and Lakebase Postgres DBs, deploy scalable Model Serving endpoints, and migrate to serverless compute.
```
npx claudepluginhub databricks/databricks-agent-skills
```
Build apps on the Databricks Apps platform. Use when asked to create dashboards, data apps, analytics tools, or visualizations. Evaluates data access patterns (analytics vs. Lakebase synced tables) before scaffolding. Invoke BEFORE starting implementation.
Databricks CLI operations: auth, profiles, data exploration, and bundles. Contains up-to-date guidelines for Databricks-related CLI tasks.
Create, configure, validate, deploy, run, and manage DABs (Databricks Asset Bundles) for Databricks resources including dashboards, jobs, pipelines, alerts, volumes, and apps. See the CLI sketch after this list.
Develop and deploy Lakeflow Jobs on Databricks. Use when creating data engineering jobs with notebooks, Python wheels, or SQL tasks. Invoke BEFORE starting implementation.
Databricks Lakebase Postgres: projects, scaling, connectivity, Lakebase synced tables, and Data API. Use when asked about Lakebase databases, OLTP storage, or connecting apps to Postgres on Databricks.
Manage Databricks Model Serving endpoints via CLI. Use when asked to create, configure, query, or manage model serving endpoints for LLM inference, custom models, or external models.
Develop Lakeflow Spark Declarative Pipelines (formerly Delta Live Tables) on Databricks. Use when building batch or streaming data pipelines with Python or SQL. Invoke BEFORE starting implementation.
Migrate Databricks workloads from classic compute to serverless compute. Scans code for serverless compatibility issues, provides concrete fixes for the serverless Spark Connect architecture, and guides the full migration to serverless environments. Use for classic-to-serverless migrations, serverless code compatibility checks, or writing new serverless-compatible notebooks and jobs. Not for classic DBR version upgrades or cluster configuration changes within classic compute.
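Several of the skills above (CLI operations, bundles, model serving) drive the Databricks CLI. A minimal sketch of that flow, assuming a recent Databricks CLI; the host, profile, target, and endpoint names are placeholders:

```bash
# Authenticate once per workspace (host and profile are placeholders)
databricks auth login --host https://my-workspace.cloud.databricks.com --profile dev

# Validate, then deploy a bundle to a target defined in databricks.yml
databricks bundle validate -t dev
databricks bundle deploy -t dev

# Inspect model serving endpoints
databricks serving-endpoints list
databricks serving-endpoints get my-endpoint
```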
Skills for AI coding assistants (Claude Code, Cursor, etc.) that provide Databricks-specific guidance.
For Claude Code:
```bash
databricks experimental aitools install
```
This installs the skills to `~/.claude/skills/` for use with Claude Code.
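To confirm the install succeeded, you can list the skills directory (the exact skill names will depend on the release):

```bash
ls ~/.claude/skills/
```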
For Cursor, run this command in chat:
```
/add-plugin databricks-skills
```
Each skill follows the Agent Skills Specification:
```
skill-name/
├── SKILL.md       # Main skill file with frontmatter + instructions
└── references/    # Additional documentation loaded on demand
```
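As a hypothetical skeleton (the name and description fields match the subskill example below; everything else is illustrative), a base SKILL.md might look like:

```markdown
---
name: "my-skill"
description: "One-line summary the agent uses to decide when to load this skill"
---
# My Skill

Step-by-step instructions for the agent go here.
Point to files under references/ for documentation loaded on demand.
```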
When experimenting with new skill variations, create a "subskill" that references the main skill and adds specific guidance:
```markdown
---
name: "ai-databricks-apps"
description: "Databricks apps with AI features"
---
# AI-powered Databricks Apps

First, load the base databricks-apps skill for foundational guidance.

Then apply these additional patterns:

- Custom pattern 1
- Custom pattern 2
```
This approach lets you experiment with new guidance while the base skill remains the single source of truth.
Sync assets and regenerate the manifest after adding or updating skills:
```bash
python3 scripts/skills.py
```
Validate that assets and the manifest are up to date (for CI):
```bash
python3 scripts/skills.py validate
```
The manifest is used by the CLI to discover available skills.
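For example, a CI job could gate merges on the validate step; this is a hypothetical GitHub Actions workflow, not necessarily this repo's actual CI setup:

```yaml
# Hypothetical CI job; adapt to your pipeline
jobs:
  validate-skills:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check that skill assets and manifest are up to date
        run: python3 scripts/skills.py validate
```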
Please see SECURITY.md for vulnerability reporting guidelines.
All future release tags will be GPG-signed and verifiable via `git tag -v <tag>`.
Editorial "Data Engineering" bundle for Claude Code from Antigravity Awesome Skills.
This plugin provides a specialized suite of skills for data engineers and database practitioners working on Google Cloud. It acts as an expert assistant, allowing you to use natural language prompts in your preferred coding agent to architect complex data pipelines, transform data with dbt, write Spark and BigQuery SQL notebooks, and orchestrate end-to-end workflows across GCP's data ecosystem.
Data engineering plugin - warehouse exploration, pipeline authoring, Airflow integration
Curated agent skills collection for dbt workflows, helping AI agents understand and execute data transformation pipelines more effectively.