Execute generated Databricks SQL workflow with intelligent convergence detection, real-time monitoring, and interactive error handling by orchestrating the Python script `databricks_sql_executor.py`.
Installation:
/plugin marketplace add treasure-data/aps_claude_tools
/plugin install treasure-data-cdp-hybrid-idu-plugins-cdp-hybrid-idu@treasure-data/aps_claude_tools
Required:
- Server hostname (e.g., your-workspace.cloud.databricks.com)
- HTTP path (e.g., /sql/1.0/warehouses/abc123)
For PAT Authentication:
- Access token (from DATABRICKS_TOKEN, or prompt)
For OAuth:
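For reference, a minimal sketch of how these connection parameters map onto the `databricks-sql-connector` package. The PAT branch is shown directly; the OAuth variant assumes the connector's browser-based `databricks-oauth` flow. Hostname, path, and token values are placeholders, not real credentials.

```python
# Minimal connection sketch using databricks-sql-connector.
# All connection values below are placeholders.
import os
from databricks import sql

server_hostname = "your-workspace.cloud.databricks.com"
http_path = "/sql/1.0/warehouses/abc123"

# PAT authentication: token read from DATABRICKS_TOKEN.
connection = sql.connect(
    server_hostname=server_hostname,
    http_path=http_path,
    access_token=os.environ["DATABRICKS_TOKEN"],
)

# OAuth alternative (assumed: browser-based flow):
# connection = sql.connect(
#     server_hostname=server_hostname,
#     http_path=http_path,
#     auth_type="databricks-oauth",
# )

with connection.cursor() as cursor:
    cursor.execute("SELECT current_catalog(), current_schema()")
    print(cursor.fetchone())
connection.close()
```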
Use Bash tool with run_in_background: true to execute:
```bash
python3 /path/to/plugins/cdp-hybrid-idu/scripts/databricks/databricks_sql_executor.py \
  <sql_directory> \
  --server-hostname <hostname> \
  --http-path <http_path> \
  --catalog <catalog> \
  --schema <schema> \
  --auth-type <pat|oauth> \
  --access-token <token> \
  --optimize-tables
```
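The executor is expected to pick up the numbered SQL files in `<sql_directory>` and run them in order. A rough sketch of that discovery step, inferred from the file names shown in the progress output below (assumed behavior, not the script's actual internals):

```python
# Sketch: find the workflow's SQL files and run them in numeric-prefix
# order (01_..., 02_..., ...). Assumed behavior for illustration only.
from pathlib import Path

def ordered_sql_files(sql_directory: str) -> list[Path]:
    """Return the workflow's .sql files sorted by their numbered prefix."""
    files = sorted(Path(sql_directory).glob("*.sql"), key=lambda p: p.name)
    if not files:
        raise FileNotFoundError(f"No .sql files found in {sql_directory}")
    return files

def run_file(cursor, path: Path) -> None:
    """Execute one SQL file, statement by statement."""
    statements = [s.strip() for s in path.read_text().split(";") if s.strip()]
    for statement in statements:
        cursor.execute(statement)
```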
Use BashOutput tool to stream progress:
Display Progress:
✓ Connected to Databricks: <hostname>
• Using catalog: <catalog>, schema: <schema>
Executing: 01_create_graph.sql
✓ Completed: 01_create_graph.sql
Executing: 02_extract_merge.sql
✓ Completed: 02_extract_merge.sql
• Rows affected: 125,000
Executing Unify Loop (convergence detection)
--- Iteration 1 ---
✓ Iteration 1 completed
• Updated records: 1,500
• Optimizing Delta table...
--- Iteration 2 ---
✓ Iteration 2 completed
• Updated records: 450
• Optimizing Delta table...
--- Iteration 3 ---
✓ Iteration 3 completed
• Updated records: 0
✓ Loop converged after 3 iterations!
• Creating alias table: loop_final
...
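The convergence behavior shown above amounts to re-running the loop SQL until an iteration updates zero records (or a safety cap is reached), optimizing the Delta table between passes when --optimize-tables is set. A hedged sketch of that logic; the iteration SQL, the way updated-record counts are obtained, and the table name are assumptions for illustration:

```python
# Sketch of the Unify Loop convergence check. The iteration SQL, the
# updated-record count source, and the OPTIMIZE target are assumptions,
# not the actual internals of databricks_sql_executor.py.
MAX_ITERATIONS = 30  # safety cap so a non-converging loop cannot run forever

def run_unify_loop(cursor, iteration_sql: str, loop_table: str) -> int:
    for iteration in range(1, MAX_ITERATIONS + 1):
        print(f"--- Iteration {iteration} ---")
        cursor.execute(iteration_sql)
        updated = cursor.rowcount if cursor.rowcount and cursor.rowcount > 0 else 0
        print(f"• Updated records: {updated:,}")
        if updated == 0:
            print(f"✓ Loop converged after {iteration} iterations!")
            return iteration
        # With --optimize-tables, compact the Delta table between iterations.
        cursor.execute(f"OPTIMIZE {loop_table}")
    raise RuntimeError(f"Loop did not converge within {MAX_ITERATIONS} iterations")
```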
If the script encounters errors and prompts for continuation:
✗ Error in file: 04_unify_loop_iteration_01.sql
Error: Table not found
Continue with remaining files? (y/n):
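A rough sketch of the script-side prompt this corresponds to; only the prompt wording is taken from the example above, the rest is assumed:

```python
# Sketch of the interactive error prompt. Exception handling details are
# assumed; the prompt text mirrors the example shown above.
def handle_failed_file(filename: str, error: Exception) -> bool:
    """Report a failed SQL file and ask whether to keep going."""
    print(f"✗ Error in file: {filename}")
    print(f"  Error: {error}")
    answer = input("Continue with remaining files? (y/n): ").strip().lower()
    return answer == "y"
```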
Agent Decision: review the reported error, then respond 'y' to continue with the remaining files or 'n' to stop and report the failure to the user.
After completion:
Execution Complete!
Summary:
• Files processed: 18/18
• Execution time: 45 minutes
• Convergence: 3 iterations
• Final lookup table rows: 98,500
Validation:
✓ All tables created successfully
✓ Canonical IDs generated
✓ Enriched tables populated
✓ Master tables created
Next Steps:
1. Verify data quality
2. Check coverage metrics
3. Review statistics tables
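These post-run checks can be spot-verified with a few follow-up queries. A hedged sketch using the same connector cursor; the table and column names below are placeholders for whatever the generated workflow actually created:

```python
# Sketch: post-run validation queries. Table/column names are placeholders
# and must be replaced with the tables produced by the generated workflow.
VALIDATION_QUERIES = {
    "lookup_rows": "SELECT COUNT(*) FROM id_lookup",
    "canonical_ids": "SELECT COUNT(DISTINCT canonical_id) FROM id_lookup",
    "enriched_rows": "SELECT COUNT(*) FROM enriched_profiles",
}

def run_validation(cursor) -> dict[str, int]:
    """Run each validation query and print the resulting counts."""
    results = {}
    for name, query in VALIDATION_QUERIES.items():
        cursor.execute(query)
        results[name] = cursor.fetchone()[0]
        print(f"• {name}: {results[name]:,}")
    return results
```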
Track loop iterations: watch the per-iteration updated-record counts to confirm the Unify Loop is converging.
On errors: capture the failing file name and error message, then apply the continuation decision described above.
Monitor: poll BashOutput periodically until the script prints the completion summary.