# Generate Databricks Delta Lake SQL from YAML Configuration

Generates Databricks Delta Lake SQL from a YAML configuration for ID unification workflows.
## Installation

```
/plugin marketplace add treasure-data/aps_claude_tools
/plugin install cdp-hybrid-idu@treasure-data/aps_claude_tools
```

## Overview

Generate a production-ready Databricks SQL workflow from your `unify.yml` configuration file. This command creates Delta Lake optimized SQL files with ACID transactions, clustering, and platform-specific function conversions.
## Prerequisites

- A `unify.yml` configuration file exists and is valid
- Output is written to `databricks_sql/` by default

## What This Command Does

I'll call the databricks-sql-generator agent to:
1. Run the `yaml_unification_to_databricks.py` Python script
2. Apply platform-specific function conversions (illustrated after the file layout below):
   - `ARRAY_SIZE` → `SIZE`
   - `ARRAY_CONSTRUCT` → `ARRAY`
   - `OBJECT_CONSTRUCT` → `STRUCT`
   - `COLLECT_LIST` for aggregations
   - `FLATTEN` for array operations
   - `UNIX_TIMESTAMP()` for time functions
3. Generate the complete SQL workflow in this structure:
```
databricks_sql/unify/
├── 01_create_graph.sql              # Initialize graph with USING DELTA
├── 02_extract_merge.sql             # Extract identities with validation
├── 03_source_key_stats.sql          # Source statistics with GROUPING SETS
├── 04_unify_loop_iteration_*.sql    # Loop iterations (auto-calculated count)
├── 05_canonicalize.sql              # Canonical ID creation with key masks
├── 06_result_key_stats.sql          # Result statistics with histograms
├── 10_enrich_*.sql                  # Enrich each source table
├── 20_master_*.sql                  # Master tables with attribute aggregation
├── 30_unification_metadata.sql      # Metadata tables
├── 31_filter_lookup.sql             # Validation rules lookup
└── 32_column_lookup.sql             # Column mapping lookup
```
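To make the function conversions concrete, here is a hypothetical before/after pair; the source-dialect line is illustrative, not taken from an actual generated file:

```sql
-- Source dialect (Snowflake-style), before conversion:
--   SELECT ARRAY_SIZE(ARRAY_CONSTRUCT(email, phone)) AS key_count FROM t;

-- Generated Databricks SQL, after conversion:
SELECT SIZE(ARRAY(email, phone)) AS key_count FROM t;
```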
## Usage

### Interactive mode

```
/cdp-hybrid-idu:hybrid-generate-databricks
```

I'll prompt you for:

- YAML file path
- Target catalog
- Target schema
### With all parameters upfront

Provide all parameters when you invoke the command:

```
YAML file: /path/to/unify.yml
Target catalog: my_catalog
Target schema: my_schema
Source catalog: source_catalog (optional)
Source schema: source_schema (optional)
Output directory: custom_output/ (optional)
```
## Delta Lake Optimizations

- `USING DELTA` for all tables
- `CLUSTER BY (follower_id)` on graph tables
- **Dynamic iteration count**: auto-calculated from the YAML configuration (see `merge_iterations` below)
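A minimal sketch of what the generated graph DDL could look like, assuming the catalog/schema names from the usage example; the column list is inferred from the convergence check shown later:

```sql
-- Sketch only: graph table with Delta storage and clustering
CREATE TABLE IF NOT EXISTS my_catalog.my_schema.unified_id_graph_unify_loop_0 (
  leader_ns   STRING,
  leader_id   STRING,
  follower_ns STRING,
  follower_id STRING
)
USING DELTA
CLUSTER BY (follower_id);
```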
**Key-specific hashing**: each key uses a unique cryptographic mask:

- Key type 1 (email): `0ffdbcf0c666ce190d`
- Key type 2 (customer_id): `61a821f2b646a4e890`
- Key type 3 (phone): `acd2206c3f88b3ee27`
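The exact way masks enter the hash is internal to the generator; one plausible shape, purely for illustration, is prefixing the mask to the key value before hashing:

```sql
-- Illustrative only: namespace an email key with its mask before hashing
SELECT SHA2(CONCAT('0ffdbcf0c666ce190d', email_std), 256) AS email_hash
FROM customer_profiles;
```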
**Validation rules**:

- `valid_regexp`: regex pattern filtering
- `invalid_texts`: `NOT IN` clause with NULL handling
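As a sketch, the email rules from the example configuration below would translate into predicates roughly like these; the exact generated SQL may differ:

```sql
-- Hypothetical filter for the email key
SELECT email_std
FROM customer_profiles
WHERE email_std RLIKE '.*@.*'              -- valid_regexp
  AND email_std IS NOT NULL                -- NULL handling
  AND email_std NOT IN ('', 'N/A', 'null') -- invalid_texts
```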
**Master table attributes**:

- `MAX_BY(attr, order)` with `COALESCE`
- `SLICE(CONCAT(arrays), 1, N)` for array limits

The generator applies the function conversions listed above automatically.
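A sketch of how a best-value attribute could be aggregated, using names from the example configuration below; the enriched table name is assumed, and the real generated SQL also combines multiple source columns via `COALESCE`:

```sql
-- Hypothetical: pick one email per canonical ID,
-- with the ordering expression derived from the configured priority
SELECT
  unified_id,
  MAX_BY(email_std, priority) AS best_email
FROM enriched_customer_profiles
GROUP BY unified_id;
```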
## Example Configuration (`unify.yml`)

```yaml
name: customer_unification
keys:
- name: email
valid_regexp: ".*@.*"
invalid_texts: ['', 'N/A', 'null']
- name: customer_id
invalid_texts: ['', 'N/A']
tables:
- table: customer_profiles
key_columns:
- {column: email_std, key: email}
- {column: customer_id, key: customer_id}
canonical_ids:
- name: unified_id
merge_by_keys: [email, customer_id]
merge_iterations: 15
master_tables:
- name: customer_master
canonical_id: unified_id
attributes:
- name: best_email
source_columns:
- {table: customer_profiles, column: email_std, priority: 1}
```

### Generated output

```
databricks_sql/unify/
├── 01_create_graph.sql              # Creates unified_id_graph_unify_loop_0
├── 02_extract_merge.sql             # Merges customer_profiles keys
├── 03_source_key_stats.sql          # Stats by table
├── 04_unify_loop_iteration_01.sql   # First iteration
├── 04_unify_loop_iteration_02.sql   # Second iteration
├── ...                              # Up to iteration_05
├── 05_canonicalize.sql              # Creates unified_id_lookup
├── 06_result_key_stats.sql          # Final statistics
├── 10_enrich_customer_profiles.sql  # Adds unified_id column
├── 20_master_customer_master.sql    # Creates customer_master table
├── 30_unification_metadata.sql      # Metadata
├── 31_filter_lookup.sql             # Validation rules
└── 32_column_lookup.sql             # Column mappings
```
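For a sense of what an enrichment file contains, a hypothetical excerpt of `10_enrich_customer_profiles.sql`; the enriched table name and lookup join key shape are assumptions, not confirmed:

```sql
-- Sketch: attach the canonical ID from the lookup to the source table
CREATE OR REPLACE TABLE my_catalog.my_schema.enriched_customer_profiles
USING DELTA AS
SELECT
  p.*,
  l.unified_id
FROM source_catalog.source_schema.customer_profiles AS p
LEFT JOIN my_catalog.my_schema.unified_id_lookup AS l
  ON p.email_std = l.id;  -- assumed join key
```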
## Next Step: Execution

Run the generated SQL with the execution command:

```
/cdp-hybrid-idu:hybrid-execute-databricks
```
## Under the Hood

The agent executes:

```bash
python3 scripts/databricks/yaml_unification_to_databricks.py \
  unify.yml \
  -tc my_catalog \
  -ts my_schema \
  -sc source_catalog \
  -ss source_schema \
  -o databricks_sql
```
### File numbering convention

- `01-09`: Setup and initialization
- `10-19`: Source table enrichment
- `20-29`: Master table creation
- `30-39`: Metadata and lookup tables
- `04_*_NN`: Loop iterations (auto-numbered)

### Convergence detection

Each loop iteration includes a check on whether the graph changed:
```sql
-- Check if graph changed
SELECT COUNT(*) FROM (
  SELECT leader_ns, leader_id, follower_ns, follower_id
  FROM iteration_N
  EXCEPT
  SELECT leader_ns, leader_id, follower_ns, follower_id
  FROM iteration_N_minus_1
) diff
```
The loop stops when the count reaches 0, meaning no edges changed between iterations.
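The generated iteration SQL itself is more involved, but conceptually each pass re-points followers at a smaller leader. A simplified, assumption-laden sketch (it presumes the graph carries self-edges, so every leader also appears as a follower):

```sql
-- Conceptual sketch of one unify-loop pass, not the actual generated SQL:
-- follow each row's leader one hop and keep the smallest leader found.
CREATE OR REPLACE TABLE iteration_2 USING DELTA AS
SELECT
  MIN_BY(b.leader_ns, b.leader_id) AS leader_ns,
  MIN(b.leader_id)                 AS leader_id,
  a.follower_ns,
  a.follower_id
FROM iteration_1 AS a
JOIN iteration_1 AS b
  ON  a.leader_ns = b.follower_ns
  AND a.leader_id = b.follower_id
GROUP BY a.follower_ns, a.follower_id;
```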
## Troubleshooting

- **Issue**: YAML validation error
  **Solution**: Check YAML syntax, ensure proper indentation, and verify all required fields
- **Issue**: Table not found error
  **Solution**: Verify the source catalog/schema and check the table names in the YAML
- **Issue**: Python script error
  **Solution**: Ensure Python 3.7+ is installed and the `pyyaml` dependency is available
- **Issue**: Too many/few iterations
  **Solution**: Adjust `merge_iterations` in the `canonical_ids` section of the YAML
The generated SQL uses Delta Lake tables (`USING DELTA`) with ACID transactions, clustering, and Databricks-native functions throughout.
Ready to generate Databricks SQL from your YAML configuration?
Provide your YAML file path and target catalog/schema to begin!