Proactive skill for migrating dbt projects between data platforms (BigQuery, Snowflake, Databricks) or upgrading between dbt versions. Auto-activates when working on warehouse migrations, platform switches, or dbt version upgrades. Provides systematic workflow with validation at each step.
```shell
npx claudepluginhub rittmananalytics/wire-plugin --plugin wire
```

This skill uses the workspace's default tool permissions.
This skill provides a systematic, validation-driven workflow for migrating dbt projects between data platforms (BigQuery, Snowflake, Databricks) or upgrading between dbt versions. It covers cross-platform SQL dialect translation, pre/post-migration testing, and documentation of all changes made.
Migrations are high-risk operations. This skill enforces a disciplined approach: assess first, test before changing, fix one category at a time, validate continuously, and document everything.
This skill should activate when users:
Keywords to watch for:
Activate BEFORE making migration-related changes when:
- `profiles.yml` target for a different platform
- `target/` errors after a platform switch

Example internal triggers:
- `SAFE_CAST` but target is Snowflake -> Activate skill
- `DATEADD` but target is BigQuery -> Activate skill
- `type: bigquery` to `type: snowflake` in profiles -> Activate skill

Do not activate this skill when:
Moving a dbt project from one data warehouse to another. This is the most complex migration type.
Supported platform pairs (RA primary):
Key challenges:
Upgrading between dbt versions (e.g., 1.5 -> 1.8, Core -> Cloud).
Key challenges:
Converting stored procedures, views, or raw SQL scripts into dbt models.
Key challenges:
Follow these seven steps in order. Do not skip steps.
Inventory the existing project to understand scope and identify platform-specific code.
Actions:
- `packages.yml` — check for platform-specific packages
- `macros/` — catalogue which use platform-specific SQL
- All `.sql` files — search for the platform-specific functions below
- `dbt_project.yml` for platform-specific configurations
- `profiles.yml` for current target configuration

Platform-specific function search patterns:
| Source Platform | Functions to Find |
|---|---|
| BigQuery | SAFE_CAST, IFNULL, FARM_FINGERPRINT, GENERATE_DATE_ARRAY, UNNEST, STRUCT, ARRAY_AGG, FORMAT_DATE, DATE_DIFF, DATE_ADD, DATE_SUB, PARSE_DATE, TIMESTAMP_DIFF, REGEXP_EXTRACT, REGEXP_CONTAINS, SPLIT(x)[OFFSET(n)], backtick identifiers |
| Snowflake | DATEADD, DATEDIFF, NVL, NVL2, TRY_CAST, FLATTEN, LATERAL, PARSE_JSON, GET_PATH, OBJECT_CONSTRUCT, ARRAY_CONSTRUCT, IFF, REGEXP_SUBSTR, REGEXP_LIKE, SPLIT_PART, double-quote identifiers, $$ delimiters |
| Databricks | date_add, date_sub, datediff, nvl, explode, from_json, to_json, collect_list, collect_set, regexp_extract, split, backtick identifiers |
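The search patterns above can be scripted with grep. A minimal sketch (the example model created here is illustrative; in a real project you would run only the grep from the project root):

```shell
# Sketch: inventory BigQuery-specific functions across a dbt project.
# Set up a tiny illustrative model first.
mkdir -p models
cat > models/example.sql <<'SQL'
select SAFE_CAST(order_id as int64) as order_id,
       DATE_DIFF(current_date(), order_date, DAY) as age_days
from {{ ref('stg_orders') }}
SQL

# -r recurse, -n print line numbers, -E extended regex alternation
grep -rnE 'SAFE_CAST|IFNULL|DATE_DIFF|PARSE_DATE|REGEXP_CONTAINS|UNNEST' \
  models/ --include='*.sql'
```

Each match becomes a line item in the migration assessment document.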
Output: A migration assessment document listing:
Generate unit tests on the SOURCE platform to capture expected outputs. These tests become the acceptance criteria for the migration.
Actions:
Why this matters: Without pre-migration tests, you cannot verify that the migrated code produces identical results. This step is non-negotiable.
Example unit test for migration:
```yaml
unit_tests:
  - name: test_date_formatting_migration
    description: "Captures expected date formatting behavior for migration validation"
    model: int_orders_enriched
    given:
      - input: ref('stg_orders')
        rows:
          - {order_id: 1, order_date: "2024-01-15", status: "completed"}
          - {order_id: 2, order_date: "2024-02-28", status: "pending"}
    expect:
      rows:
        - {order_id: 1, order_month: "2024-01", is_completed: true}
        - {order_id: 2, order_month: "2024-02", is_completed: false}
```
Configure the new target platform.
Actions:
- Configure a `profiles.yml` target for the destination platform
- Run `dbt debug` to verify the connection
- Update `dbt_project.yml` if needed (e.g., dataset vs schema naming)
- Update `packages.yml`:
- Pin `dbt-utils` to a version supporting both platforms
- Remove or replace platform-specific packages (e.g., `dbt-bigquery-utils`)

Profile configuration examples:
BigQuery target:
```yaml
target_bigquery:
  type: bigquery
  method: oauth
  project: my-gcp-project
  dataset: analytics
  threads: 4
  location: EU
```
Snowflake target:
```yaml
target_snowflake:
  type: snowflake
  account: xy12345.eu-west-1
  user: "{{ env_var('SNOWFLAKE_USER') }}"
  password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"
  role: transformer
  database: analytics
  warehouse: transforming
  schema: public
  threads: 4
```
Databricks target:
```yaml
target_databricks:
  type: databricks
  host: "{{ env_var('DATABRICKS_HOST') }}"
  http_path: /sql/1.0/warehouses/abc123
  token: "{{ env_var('DATABRICKS_TOKEN') }}"
  catalog: analytics
  schema: public
  threads: 4
```
Attempt to compile the project against the new target. Do NOT try to run models yet.
Actions:
- Clear the `target/` directory, then compile against the new target:

```shell
rm -rf target/
dbt compile --target target_new_platform
```
Important: Always clear target/ before recompiling after making changes. Stale compilation artifacts cause false errors.
Work through compilation errors by category, fixing one category at a time.
Fix order (most impactful first):
After each category:
- Record each change in `target/migration_changes.md`

Cross-platform macro pattern: When a function exists on all platforms but with different names, create a dispatch macro:
```sql
-- macros/cross_platform/date_add_days.sql
-- 'my_project' stands for your dbt project name (the dispatch namespace)
{% macro date_add_days(date_expr, days) %}
    {{ return(adapter.dispatch('date_add_days', 'my_project')(date_expr, days)) }}
{% endmacro %}

{% macro bigquery__date_add_days(date_expr, days) %}
    DATE_ADD({{ date_expr }}, INTERVAL {{ days }} DAY)
{% endmacro %}

{% macro snowflake__date_add_days(date_expr, days) %}
    DATEADD(day, {{ days }}, {{ date_expr }})
{% endmacro %}

{% macro databricks__date_add_days(date_expr, days) %}
    date_add({{ date_expr }}, {{ days }})
{% endmacro %}
```
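Models then call the wrapper and dbt resolves the right implementation for the active adapter. A usage sketch (model and column names are illustrative):

```sql
-- models/int_orders_due.sql (illustrative)
select
    order_id,
    {{ date_add_days('order_date', 30) }} as due_date
from {{ ref('stg_orders') }}
```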
Run the full project on the new platform and compare results to pre-migration tests.
Actions:
```shell
dbt build --target target_new_platform
dbt test --target target_new_platform --select "test_type:unit"
dbt test --target target_new_platform
```
Success criteria:
Create a comprehensive record of all migration changes.
Create migration_changes.md in the project root (or .wire/{project}/ for Wire projects):
```markdown
# Migration Changes: {Source Platform} -> {Target Platform}

## Summary
- **Date:** YYYY-MM-DD
- **Source:** {platform} / dbt {version}
- **Target:** {platform} / dbt {version}
- **Models migrated:** {count}
- **Tests migrated:** {count}
- **Macros changed:** {count}

## Changes by Category

### Data Types
| File | Line | Original | Migrated | Notes |
|------|------|----------|----------|-------|

### Date/Time Functions
| File | Line | Original | Migrated | Notes |
|------|------|----------|----------|-------|

### String Functions
| File | Line | Original | Migrated | Notes |
|------|------|----------|----------|-------|

(... continue for each category ...)

## Packages Changed
| Package | Old Version | New Version/Replacement | Notes |
|---------|------------|------------------------|-------|

## Known Differences
- List any acceptable differences in behavior between platforms

## Rollback Plan
- Steps to revert if migration needs to be rolled back
```
This section provides a quick reference for the most common translation needs. For a comprehensive reference, see dialect-differences.md in this skill's directory.
| Operation | BigQuery | Snowflake | Databricks |
|---|---|---|---|
| Add days | DATE_ADD(d, INTERVAL n DAY) | DATEADD(day, n, d) | date_add(d, n) |
| Subtract days | DATE_SUB(d, INTERVAL n DAY) | DATEADD(day, -n, d) | date_sub(d, n) |
| Difference | DATE_DIFF(d1, d2, DAY) | DATEDIFF(day, d2, d1) | datediff(d1, d2) |
| Truncate | DATE_TRUNC(d, MONTH) | DATE_TRUNC('month', d) | date_trunc('month', d) |
| Extract | EXTRACT(YEAR FROM d) | EXTRACT(YEAR FROM d) | extract(YEAR FROM d) |
| Format | FORMAT_DATE('%Y-%m', d) | TO_CHAR(d, 'YYYY-MM') | date_format(d, 'yyyy-MM') |
| Parse | PARSE_DATE('%Y-%m-%d', s) | TO_DATE(s, 'YYYY-MM-DD') | to_date(s, 'yyyy-MM-dd') |
| Current date | CURRENT_DATE() | CURRENT_DATE() | current_date() |
| Current timestamp | CURRENT_TIMESTAMP() | CURRENT_TIMESTAMP() | current_timestamp() |
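Many of the operations above are already abstracted by dbt's built-in cross-database macros, which avoid hand-translating dialects entirely. A sketch (model and column names illustrative; these macros ship with dbt Core 1.2+):

```sql
-- Sketch: cross-database date logic via dbt's built-in macros
select
    order_id,
    {{ dbt.dateadd('day', 30, 'order_date') }} as due_date,
    {{ dbt.datediff('order_date', 'shipped_date', 'day') }} as days_to_ship,
    {{ dbt.date_trunc('month', 'order_date') }} as order_month
from {{ ref('stg_orders') }}
```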
| Operation | BigQuery | Snowflake | Databricks |
|---|---|---|---|
| Concatenate | CONCAT(a, b) or a || b | CONCAT(a, b) or a || b | concat(a, b) or a || b |
| Substring | SUBSTR(s, start, len) | SUBSTR(s, start, len) | substring(s, start, len) |
| Length | LENGTH(s) | LENGTH(s) | length(s) |
| Upper/Lower | UPPER(s) / LOWER(s) | UPPER(s) / LOWER(s) | upper(s) / lower(s) |
| Trim | TRIM(s) | TRIM(s) | trim(s) |
| Replace | REPLACE(s, old, new) | REPLACE(s, old, new) | replace(s, old, new) |
| Regex extract | REGEXP_EXTRACT(s, pattern) | REGEXP_SUBSTR(s, pattern) | regexp_extract(s, pattern, 0) |
| Regex match | REGEXP_CONTAINS(s, pattern) | REGEXP_LIKE(s, pattern) | s RLIKE pattern |
| Split | SPLIT(s, delim) | SPLIT(s, delim) | split(s, delim) |
| Split and index | SPLIT(s, delim)[OFFSET(n)] | SPLIT_PART(s, delim, n+1) | split(s, delim)[n] |
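For the trickiest row above — split-and-index, where indexing is 0-based on BigQuery and Databricks but 1-based on Snowflake — dbt's built-in `split_part` macro normalises to a 1-based part number. A sketch (column names illustrative; note the delimiter is passed as a quoted string literal):

```sql
-- Sketch: cross-database split via dbt's built-in macro (1-based part number)
select
    {{ dbt.split_part('email', "'@'", 1) }} as email_user,
    {{ dbt.split_part('email', "'@'", 2) }} as email_domain
from {{ ref('stg_customers') }}
```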
| Concept | BigQuery | Snowflake | Databricks |
|---|---|---|---|
| Integer | INT64 | INTEGER / NUMBER | BIGINT / INT |
| Float | FLOAT64 | FLOAT / DOUBLE | DOUBLE |
| Decimal | NUMERIC / BIGNUMERIC | NUMBER(p,s) | DECIMAL(p,s) |
| String | STRING | VARCHAR / STRING | STRING |
| Boolean | BOOL | BOOLEAN | BOOLEAN |
| Date | DATE | DATE | DATE |
| Timestamp | TIMESTAMP | TIMESTAMP_NTZ | TIMESTAMP |
| JSON | JSON (or STRING) | VARIANT | STRING (with JSON functions) |
| Array | ARRAY<T> | ARRAY | ARRAY<T> |
| Struct | STRUCT<fields> | OBJECT | STRUCT<fields> |
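Rather than hardcoding the type names above, dbt exposes type helper macros that compile to the correct name per adapter. A sketch (column names illustrative):

```sql
-- Sketch: portable casts via dbt's type helper macros
select
    cast(order_id as {{ dbt.type_bigint() }}) as order_id,
    cast(amount as {{ dbt.type_numeric() }}) as amount,
    cast(status as {{ dbt.type_string() }}) as status
from {{ ref('stg_orders') }}
```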
| Operation | BigQuery | Snowflake | Databricks |
|---|---|---|---|
| Coalesce | COALESCE(a, b) | COALESCE(a, b) | coalesce(a, b) |
| If null | IFNULL(a, b) | NVL(a, b) or IFNULL(a, b) | nvl(a, b) or ifnull(a, b) |
| Null if | NULLIF(a, b) | NULLIF(a, b) | nullif(a, b) |
| Safe cast | SAFE_CAST(x AS type) | TRY_CAST(x AS type) | try_cast(x AS type) |
| Safe divide | SAFE_DIVIDE(a, b) | DIV0NULL(a, b) or a / NULLIF(b, 0) | a / NULLIF(b, 0) |
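dbt also ships null-safety helpers corresponding to the last two rows. A sketch (column names illustrative; `safe_cast` takes a type string, `safe_divide` guards against divide-by-zero):

```sql
-- Sketch: cross-database null-safe cast and divide via dbt macros
select
    {{ dbt.safe_cast('amount_raw', dbt.type_numeric()) }} as amount,
    {{ dbt.safe_divide('revenue', 'quantity') }} as unit_price
from {{ ref('stg_payments') }}
```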
| Platform | Style | Example |
|---|---|---|
| BigQuery | Backticks | `project.dataset.table` |
| Snowflake | Double quotes | "DATABASE"."SCHEMA"."TABLE" |
| Databricks | Backticks | `catalog`.`schema`.`table` |
Recommendation: Use dbt's {{ ref() }} and {{ source() }} functions wherever possible. They handle quoting automatically. Only worry about manual quoting for raw SQL in macros or ad-hoc queries.
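For the raw-SQL cases, quoting behavior can also be pinned project-wide in `dbt_project.yml`. A sketch (disabling quoting is the usual choice on Snowflake so unquoted identifiers resolve case-insensitively):

```yaml
# Sketch: explicit quoting policy in dbt_project.yml
quoting:
  database: false
  schema: false
  identifier: false
```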
- `packages.yml` — ensure all packages support the target version
- Update `require-dbt-version` in `dbt_project.yml`
- Run `dbt deps` to update packages
- Run `dbt compile` and fix deprecation warnings
- Run `dbt build` and verify all models and tests pass

dbt 1.5 -> 1.6:
- `metrics:` YAML schema changed from legacy Metrics Layer to MetricFlow format
- `dbt_metrics` package deprecated

dbt 1.6 -> 1.7:
- `dbt_utils.surrogate_key()` renamed to `dbt_utils.generate_surrogate_key()`

dbt 1.7 -> 1.8:
- `--empty` flag for development builds

dbt 1.8 -> 1.9:
- `updated_at` -> `loaded_at`
- `dbt retry` command improved

dbt 1.9 -> 1.10+:
- `dbt compile` output

| Old | New | Since |
|---|---|---|
| `adapter.dispatch('macro')()` | `adapter.dispatch('macro', 'package')()` | 1.0 |
| `dbt_utils.surrogate_key()` | `dbt_utils.generate_surrogate_key()` | 1.7 |
| `config(severity='warn')` | `config(severity='warn', warn_if='!=0')` | 1.5 |
| `dbt_utils.pivot()` | Community package / custom macro | 1.5+ |
| Old | New | Since |
|---|---|---|
| `materialized: table` in `schema.yml` | `config(materialized='table')` in model or `dbt_project.yml` | - |
| `vars: {}` in `schema.yml` | `vars: {}` in `dbt_project.yml` only | 1.0 |
| `database` config for BigQuery | `project` config (with `database` as alias) | 1.0 |
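The first row's new-style form, set project-wide, might look like this sketch in `dbt_project.yml` (project and folder names illustrative):

```yaml
# Sketch: new-style materialization config in dbt_project.yml
models:
  my_project:
    staging:
      +materialized: view
    marts:
      +materialized: table
```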
For converting stored procedures, views, or raw SQL into dbt models.
Workflow:
- `sources.yml` for all raw/landing tables
- `IF/ELSE` -> `CASE WHEN` expressions
- `WHILE` loops -> set-based operations or incremental models
- `CURSOR` -> window functions or CTEs

These are common mistakes during migrations. Avoid them.
| Anti-Pattern | Why It's Bad | What to Do Instead |
|---|---|---|
| Fixing SQL before understanding the error | You may introduce new bugs or mask the real issue | Read the full error message; classify it first; refer to dialect reference |
| Skipping pre-migration tests | No way to verify migration correctness | Always generate unit tests on the source platform first (Step 2) |
| Changing model architecture during migration | Two changes at once makes debugging impossible | Migration is a translation exercise; refactoring comes after |
| Not clearing target/ between compiles | Stale artifacts cause confusing errors | Always rm -rf target/ before recompiling |
| Fixing errors one at a time | Slow; errors may be related | Classify all errors first, fix by category |
| Using platform-specific SQL in new code | Creates future migration debt | Use dbt macros ({{ dbt.date_trunc() }}) or dispatch macros |
| Ignoring deprecation warnings | Warnings become errors in future versions | Fix all warnings during migration |
| Manual find-and-replace across all files | Misses context; breaks code in unexpected ways | Fix per-file, validate after each change |
In the Wire Framework, migrations are typically part of the Development phase:
- `/wire:status` to monitor progress

Store migration progress in `.wire/{project}/migration_changes.md`:
When Atlassian MCP is available:
RA's primary platforms in order of frequency:
Most RA migrations are into BigQuery from Snowflake or legacy systems. Optimize workflows accordingly.
| Mistake | Impact | Fix |
|---|---|---|
| Using `SAFE_CAST` on Snowflake | Compilation error | Use `TRY_CAST` on Snowflake, or dbt's `{{ safe_cast() }}` macro |
| Using `DATEADD` on BigQuery | Compilation error | Use `DATE_ADD(d, INTERVAL n DAY)` on BigQuery |
| Forgetting to update `packages.yml` | Package functions fail on new platform | Audit all packages for platform compatibility |
| Not handling case sensitivity | Snowflake uppercases unquoted identifiers; BigQuery preserves case | Use consistent quoting or `{{ ref() }}` |
| Assuming `FLOAT` precision is identical | Subtle data differences between platforms | Use `NUMERIC`/`DECIMAL` for financial data; accept small float differences |
| Migrating `MERGE` statements without testing | `MERGE` behavior differs between platforms | Test incremental models with merge strategy on target platform |
| Ignoring timezone handling | Snowflake `TIMESTAMP_NTZ` vs BigQuery `TIMESTAMP` (UTC) | Explicitly set timezone in all timestamp operations |
| Hardcoded project/database names | Queries fail on new platform | Use `{{ target.project }}` / `{{ target.database }}` or `{{ ref() }}` |
When working with user-provided SQL, DDL, or schema files:
This skill covers cross-platform migrations (BigQuery, Snowflake, Databricks). For the dbt Core → dbt Fusion runtime upgrade, use the dbt-fusion skill instead — Fusion migration has a distinct error classification framework and uses dbt-autofix as a first step.
This skill is adapted from the migrating-dbt-project-across-platforms and migrating-dbt-core-to-fusion skills in the dbt-labs/dbt-agent-skills repository, modified for Rittman Analytics conventions, BigQuery-first development, multi-platform support without Fusion dependency, and integration with the Wire Framework delivery lifecycle.