Execute Databricks secondary workflow: MLflow model training and deployment. Use when building ML pipelines, training models, or deploying to production. Trigger with phrases like "databricks ML", "mlflow training", "databricks model", "feature store", "model registry".
From **databricks-pack**. Install with `npx claudepluginhub nickloveinvesting/nick-love-plugins --plugin databricks-pack`.
Build ML pipelines with MLflow experiment tracking, model registry, and deployment.
Related skills: `databricks-install-auth` (setup), `databricks-core-workflow-a` (data pipelines).

For full implementation details and code examples, load `references/implementation-guide.md`.
| Error | Cause | Solution |
|---|---|---|
| Model not found | Wrong model name/version | Verify the name and version in the Model Registry |
| Feature mismatch | Schema changed | Retrain with the updated feature set |
| Endpoint timeout | Cold start | Disable scale-to-zero for latency-sensitive endpoints |
| Memory error | Batch too large | Reduce the batch size or increase cluster capacity |
For common errors, see `databricks-common-errors`.
Basic usage: apply `databricks-core-workflow-b` to a standard project with the default configuration.
Advanced usage: tailor `databricks-core-workflow-b` for production environments with multiple constraints and team-specific requirements.