Generate production-ready Snowflake SQL from `unify.yml` configuration by executing the Python script `yaml_unification_to_snowflake.py`.
/plugin marketplace add treasure-data/aps_claude_tools
/plugin install treasure-data-cdp-hybrid-idu-plugins-cdp-hybrid-idu@treasure-data/aps_claude_tools
Check:
Use Bash tool to execute:
```bash
python3 /path/to/plugins/cdp-hybrid-idu/scripts/snowflake/yaml_unification_to_snowflake.py \
  <yaml_file> \
  -d <target_database> \
  -s <target_schema> \
  -sd <source_database> \
  -ss <source_schema> \
  -o <output_directory>
```
Parameters:
- `<yaml_file>`: Path to unify.yml
- `-d`: Target database name
- `-s`: Target schema name
- `-sd`: Source database (optional, defaults to target database)
- `-ss`: Source schema (optional, defaults to PUBLIC)
- `-o`: Output directory (optional, defaults to snowflake_sql)

Track:
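The defaulting behavior described above can be sketched with a small `argparse` parser. This is a hypothetical reconstruction of the CLI contract, not the actual parser inside `yaml_unification_to_snowflake.py`; option names follow the documentation, and the database/schema values are placeholders.

```python
# Hypothetical sketch of the documented CLI contract; the real script's
# argument parser may differ in details.
import argparse

def build_parser():
    p = argparse.ArgumentParser(
        description="Generate Snowflake SQL from unify.yml")
    p.add_argument("yaml_file", help="Path to unify.yml")
    p.add_argument("-d", "--database", required=True,
                   help="Target database name")
    p.add_argument("-s", "--schema", required=True,
                   help="Target schema name")
    p.add_argument("-sd", "--source-database", default=None,
                   help="Source database (defaults to target database)")
    p.add_argument("-ss", "--source-schema", default="PUBLIC",
                   help="Source schema (defaults to PUBLIC)")
    p.add_argument("-o", "--output", default="snowflake_sql",
                   help="Output directory")
    return p

# Placeholder values for illustration only.
args = build_parser().parse_args(
    ["config/unify.yml", "-d", "ANALYTICS", "-s", "UNIFIED"])

# Apply the documented fallback: source database defaults to the target.
source_db = args.source_database or args.database
print(source_db, args.source_schema, args.output)
```

Omitting `-sd`, `-ss`, and `-o` therefore yields the target database, `PUBLIC`, and `snowflake_sql` respectively, per the parameter list above.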
Output:
✓ Snowflake SQL generation complete!
Generated Files:
• snowflake_sql/unify/01_create_graph.sql
• snowflake_sql/unify/02_extract_merge.sql
• snowflake_sql/unify/03_source_key_stats.sql
• snowflake_sql/unify/04_unify_loop_iteration_01.sql
... (up to iteration_N)
• snowflake_sql/unify/05_canonicalize.sql
• snowflake_sql/unify/06_result_key_stats.sql
• snowflake_sql/unify/10_enrich_*.sql
• snowflake_sql/unify/20_master_*.sql
• snowflake_sql/unify/30_unification_metadata.sql
• snowflake_sql/unify/31_filter_lookup.sql
• snowflake_sql/unify/32_column_lookup.sql
Total: X SQL files
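The numeric prefixes on the generated files encode their execution order. A minimal sketch of how a wrapper might sort them before manual execution (the enriched/master file names here are hypothetical, since the real names depend on the YAML):

```python
# Sketch: sort generated files by their numeric prefix, then by full
# name so unify-loop iterations run 01, 02, ... in sequence.
import re

files = [
    "05_canonicalize.sql",
    "01_create_graph.sql",
    "20_master_customers.sql",       # hypothetical master table name
    "04_unify_loop_iteration_02.sql",
    "02_extract_merge.sql",
    "10_enrich_profiles.sql",        # hypothetical enriched table name
    "04_unify_loop_iteration_01.sql",
    "03_source_key_stats.sql",
]

def order_key(name):
    # Primary key: leading numeric prefix; secondary key: full name.
    return (int(re.match(r"(\d+)_", name).group(1)), name)

ordered = sorted(files, key=order_key)
print(ordered)
```

Running the files in this order preserves the graph-build → merge → loop → canonicalize dependency chain implied by the numbering.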
Configuration:
• Database: <database_name>
• Schema: <schema_name>
• Iterations: N (calculated from YAML)
• Tables: X enriched, Y master tables
Snowflake Features Enabled:
✓ Native Snowflake functions
✓ VARIANT support
✓ Table clustering
✓ Convergence detection
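"Convergence detection" means the unify loop stops once an iteration changes nothing. A minimal Python sketch of the idea, with made-up identity keys; the generated SQL applies the same principle per iteration file rather than this in-memory loop:

```python
# Sketch of convergence detection in an iterative key-unification loop:
# propagate the minimum canonical id across linked keys until a full
# pass makes no changes. Keys and edges below are illustrative only.
edges = [
    ("email:a@x.com", "phone:111"),
    ("phone:111", "cookie:abc"),
    ("email:b@y.com", "cookie:zzz"),
]

# Start with every key as its own canonical id.
canon = {k: k for edge in edges for k in edge}

iterations = 0
changed = True
while changed:          # converged when a pass changes nothing
    changed = False
    iterations += 1
    for a, b in edges:
        lo = min(canon[a], canon[b])
        if canon[a] != lo or canon[b] != lo:
            canon[a] = canon[b] = lo
            changed = True

# Keys connected through shared identifiers collapse to one canonical id.
print(iterations, canon["email:a@x.com"] == canon["cookie:abc"])
```

The iteration count observed here is the analogue of the `N` reported in the configuration summary: it depends on how long the chains of linked keys are, not on the total number of records.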
Next Steps:
1. Review generated SQL files
2. Execute using: /cdp-hybrid-idu:hybrid-execute-snowflake
3. Or manually execute in Snowflake SQL worksheet
If script fails:
Verify:
Report applied conversions: