From databricks-skills
Manage Databricks Model Serving endpoints via CLI. Use when asked to create, configure, query, or manage model serving endpoints for LLM inference, custom models, or external models.
**FIRST**: Use the parent `databricks-core` skill for CLI basics, authentication, and profile selection.
Deploys MLflow models, custom pyfunc, and GenAI agents to Databricks Model Serving endpoints. Queries endpoints, checks status, integrates UC Functions and Vector Search.
Model Serving provides managed endpoints for serving LLMs, custom ML models, and external models as scalable REST APIs. Endpoints are identified by name (unique per workspace).
| Type | When to Use | Key Detail |
|---|---|---|
| Pay-per-token | Foundation Model APIs (Llama, DBRX, etc.) | Uses system.ai.* catalog models, simplest setup |
| Provisioned throughput | Dedicated GPU capacity | Guaranteed throughput, higher cost |
| Custom model | Your own MLflow models or containers | Deploy any model with an MLflow signature |
Serving Endpoint (top-level, identified by NAME)
├── Config
│ ├── Served Entities (model references + scaling config)
│ └── Traffic Config (routing percentages across entities)
├── AI Gateway (rate limits, usage tracking)
└── State (READY / NOT_READY, config_update status)
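The structure above maps directly onto the JSON that `get` returns. A minimal Python sketch (the sample response is illustrative, not verbatim CLI output; names and versions are placeholders) showing how the pieces nest and which fields later commands need:

```python
import json

# Illustrative sample of a serving-endpoint "get" response. Field names follow
# the structure above; the endpoint and entity names are hypothetical.
sample = json.loads("""
{
  "name": "my-endpoint",
  "state": {"ready": "READY", "config_update": "NOT_UPDATING"},
  "config": {
    "served_entities": [
      {"name": "my-model-3", "entity_name": "main.models.my_model", "entity_version": "3"}
    ],
    "traffic_config": {
      "routes": [{"served_entity_name": "my-model-3", "traffic_percentage": 100}]
    }
  }
}
""")

# served_entities[].name is the identifier build-logs and logs expect
served_names = [e["name"] for e in sample["config"]["served_entities"]]

# state.ready is what you poll after create or update-config
ready = sample["state"]["ready"] == "READY"
```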
- `served_entities[].name` appears in the `get` output; you need it for the `build-logs` and `logs` commands.
- Endpoints transition NOT_READY → READY after creation or a config update. Poll via `get` and check `state.ready`.
- Do NOT guess command syntax. Discover available commands and their usage dynamically:
```shell
# List all serving-endpoints subcommands
databricks serving-endpoints -h

# Get detailed usage for any subcommand (flags, args, JSON fields)
databricks serving-endpoints <subcommand> -h
```
Run `databricks serving-endpoints -h` before constructing any command, and `databricks serving-endpoints <subcommand> -h` to discover that subcommand's exact flags, positional arguments, and JSON spec fields.
Do NOT list endpoints before creating.
```shell
databricks serving-endpoints create <ENDPOINT_NAME> \
  --json '{
    "served_entities": [{
      "entity_name": "<MODEL_CATALOG_PATH>",
      "entity_version": "<VERSION>",
      "min_provisioned_throughput": 0,
      "max_provisioned_throughput": 0,
      "workload_size": "Small"
    }],
    "traffic_config": {
      "routes": [{
        "served_entity_name": "<ENTITY_NAME>",
        "traffic_percentage": 100
      }]
    }
  }' --profile <PROFILE>
```
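The `--json` spec can also be assembled programmatically instead of hand-edited. A hedged Python sketch (model path, version, and entity name are placeholders; `scale_to_zero_enabled` is an assumption about a commonly available cost-saving flag, not taken from this document):

```python
import json

entity_name = "main.models.my_model"  # placeholder Unity Catalog path
served_name = "my-model-1"            # placeholder served-entity name

spec = {
    "served_entities": [{
        "entity_name": entity_name,
        "entity_version": "1",
        "workload_size": "Small",
        "scale_to_zero_enabled": True,  # assumption: verify via `create -h`
    }],
    "traffic_config": {
        # Percentages across routes must total 100
        "routes": [{"served_entity_name": served_name, "traffic_percentage": 100}],
    },
}

# Pass this string as the --json argument to `serving-endpoints create`
spec_json = json.dumps(spec)
```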
Foundation models live in the `system.ai` catalog in Unity Catalog. Pass `--no-wait` to return immediately, then poll:
```shell
databricks serving-endpoints get <ENDPOINT_NAME> --profile <PROFILE>
# Check: state.ready == "READY"
```
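The poll loop can be sketched as follows; `fetch_state` is a hypothetical stand-in for whatever runs the `get` command and extracts `state.ready`:

```python
import time

def wait_until_ready(fetch_state, timeout_s=1800, interval_s=30):
    """Poll fetch_state() until it returns 'READY' or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if fetch_state() == "READY":   # e.g. state.ready from the get output
            return True
        time.sleep(interval_s)
    return False

# Demo with a fake fetcher: NOT_READY twice, then READY
states = iter(["NOT_READY", "NOT_READY", "READY"])
result = wait_until_ready(lambda: next(states), interval_s=0)
```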
Run `databricks serving-endpoints create -h` to discover the required JSON fields for your endpoint type.

To query an endpoint with a chat-style payload:

```shell
databricks serving-endpoints query <ENDPOINT_NAME> \
  --json '{"messages": [{"role": "user", "content": "Hello, how are you?"}]}' \
  --profile <PROFILE>
```
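Under the hood, `query` POSTs this payload to the endpoint's invocations route. A hedged Python sketch of the equivalent request (the host and token are placeholders; the `/serving-endpoints/<name>/invocations` path is assumed to be the standard invocation route):

```python
import json
import urllib.request

host = "https://example.cloud.databricks.com"  # placeholder workspace URL
endpoint = "my-endpoint"                       # placeholder endpoint name

payload = {"messages": [{"role": "user", "content": "Hello, how are you?"}]}
req = urllib.request.Request(
    f"{host}/serving-endpoints/{endpoint}/invocations",
    data=json.dumps(payload).encode(),
    headers={"Authorization": "Bearer <TOKEN>", "Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send it; omitted because the host is a placeholder.
```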
Add `--stream` for streaming responses. For non-chat endpoints, run `get-open-api <ENDPOINT_NAME>` first to discover the request/response schema, then construct the appropriate JSON payload.

`get-open-api` returns the OpenAPI 3.1 JSON schema describing what each served model accepts and returns. Use this to understand an endpoint's input/output format before querying it.
```shell
databricks serving-endpoints get-open-api <ENDPOINT_NAME> --profile <PROFILE>
```
The schema shows paths per served model (e.g., `/served-models/<model-name>/invocations`) with full request/response definitions including parameter types, enums, and nullable fields.
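Extracting those paths is straightforward. A sketch against a minimal illustrative OpenAPI document (shape illustrative, not verbatim output):

```python
# Minimal illustrative shape of a get-open-api response
openapi = {
    "openapi": "3.1.0",
    "paths": {
        "/served-models/my-model-3/invocations": {
            "post": {"requestBody": {"content": {"application/json": {}}}}
        }
    },
}

# One invocation path per served model
invocation_paths = sorted(openapi["paths"])
```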
Run databricks serving-endpoints <subcommand> -h for usage details.
| Task | Command | Notes |
|---|---|---|
| List all endpoints | `list` | |
| Get endpoint details | `get <NAME>` | Shows state, config, served entities |
| Delete endpoint | `delete <NAME>` | |
| Update served entities or traffic | `update-config <NAME> --json '...'` | Zero-downtime: old config serves until new is ready |
| Rate limits & usage tracking | `put-ai-gateway <NAME> --json '...'` | |
| Update tags | `patch <NAME> --json '...'` | |
| Build logs | `build-logs <NAME> <SERVED_MODEL>` | Get SERVED_MODEL from `get` output: `served_entities[].name` |
| Runtime logs | `logs <NAME> <SERVED_MODEL>` | |
| Metrics (Prometheus format) | `export-metrics <NAME>` | |
| Permissions | `get-permissions <ENDPOINT_ID>` | ⚠️ Uses endpoint ID (hex string), not name. Find ID via `get`. |
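Because `update-config` is zero-downtime, traffic can be shifted gradually between served entities. A hedged sketch of a canary spec suitable for its `--json` argument (entity names are placeholders):

```python
import json

# Hypothetical canary split: 90% to the current model, 10% to the new one
routes = [
    {"served_entity_name": "model-v1", "traffic_percentage": 90},
    {"served_entity_name": "model-v2", "traffic_percentage": 10},
]
assert sum(r["traffic_percentage"] for r in routes) == 100  # must total 100

spec_json = json.dumps({"traffic_config": {"routes": routes}})
```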
After creating a serving endpoint, wire it into a Databricks App.
Step 1 — Check if the serving plugin is available in the AppKit template:
```shell
databricks apps manifest --profile <PROFILE>
```
If the output includes a serving plugin, scaffold with:
```shell
databricks apps init --name <APP_NAME> \
  --features serving \
  --set "serving.serving-endpoint.name=<ENDPOINT_NAME>" \
  --run none --profile <PROFILE>
```
Step 2 — If no serving plugin, add the endpoint resource manually to an existing app's databricks.yml:
```yaml
resources:
  apps:
    my_app:
      resources:
        - name: my-model-endpoint
          serving_endpoint:
            name: <ENDPOINT_NAME>
            permission: CAN_QUERY
```
And inject the endpoint name as an environment variable in app.yaml:
```yaml
env:
  - name: SERVING_ENDPOINT
    valueFrom: serving-endpoint
```
Then add a tRPC route to call it from your app. For the full app integration pattern, use the databricks-apps skill and read the Model Serving Guide.
| Error | Solution |
|---|---|
| `cannot configure default credentials` | Use the `--profile` flag or authenticate first |
| `PERMISSION_DENIED` | Check workspace permissions; for apps, ensure the `serving_endpoint` resource is declared with `CAN_QUERY` |
| Endpoint stuck in NOT_READY | Check `build-logs` for the served model (get the entity name from the `get` output) |
| `RESOURCE_DOES_NOT_EXIST` | Verify the endpoint name with `list` |
| Query returns 404 | Endpoint may still be provisioning; check `state.ready` via `get` |
| `RATE_LIMIT_EXCEEDED` (429) | AI Gateway rate limit; check the `put-ai-gateway` config or retry after backoff |
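For `RATE_LIMIT_EXCEEDED`, "retry after backoff" can be sketched as a simple exponential-backoff wrapper; `call` here is any function that raises on a 429 (simulated below with `RuntimeError`, since this sketch doesn't hit a real endpoint):

```python
import time

def with_backoff(call, retries=5, base_delay=1.0):
    """Retry call() with exponential backoff; re-raise after the last attempt."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Demo: fails twice with a simulated 429, then succeeds
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 RATE_LIMIT_EXCEEDED")
    return "ok"

result = with_backoff(flaky, base_delay=0)
```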