migration

Generates a dbt model from a stored procedure. Invoke when the user asks to "migrate a procedure", "generate a dbt model", "convert SP to dbt", or "create a model for <table>". Requires catalog profile and resolved statements from prior discover + profile stages.

```
npx claudepluginhub accelerate-data/migration-utility
```

This skill uses the workspace's default tool permissions.
Generate a dbt model from a profiled stored procedure. Reads deterministic context from catalog, uses LLM to produce dbt-idiomatic SQL, validates logical equivalence, and writes artifacts to the dbt project.
Parse $ARGUMENTS for --table and --writer. Use AskUserQuestion if either is missing.
Check prerequisites:

- `manifest.json` in the current working directory confirms a valid project root. If missing, stop and tell the user to run setup-ddl first.
- `dbt_project.yml` in `./dbt/` relative to the project root (or ask the user for `--dbt-project-path`). If missing, tell the user to run /init-dbt first.

Then fetch the deterministic context:

```shell
uv run --project "${CLAUDE_PLUGIN_ROOT}/../lib" migrate context \
  --table <table_fqn> --writer <writer_fqn>
```
Read the output JSON. It contains:
- `profile` — classification, keys, watermark, PII answers
- `materialization` — derived from profile (snapshot/table/incremental)
- `statements` — resolved statement list with action (migrate/skip) and SQL
- `proc_body` — full original procedure SQL
- `columns` — target table column list
- `source_tables` — tables read by the writer
- `schema_tests` — deterministic test specs (entity integrity, referential integrity, recency, PII)

Analyse the statements array and proc_body to determine the model structure:
| Pattern | Model structure |
|---|---|
| Single INSERT from source tables | One staging model |
| Multiple INSERTs to the same target table | One model with UNION ALL |
| Multiple INSERTs to different target tables | Separate models (one per target) |
| Temp table chains (INSERT INTO #temp, then INSERT from #temp) | Staging + intermediate models with {{ ref() }} |
| Nested subqueries in SELECT | Flatten into sequential CTEs |
| MERGE with complex USING clause | Single model; MERGE semantics become incremental config |
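The temp table chain row above can be sketched as a two-model split; each `#temp` hop becomes its own model linked by `{{ ref() }}`. The model, source, and column names below are illustrative, not taken from any real catalog:

```sql
-- models/staging/stg_orders_enriched.sql
-- stands in for: INSERT INTO #orders_enriched SELECT ... FROM dbo.Orders
select
    o.order_key,
    o.customer_key,
    o.order_total
from {{ source('bronze', 'orders') }} as o

-- models/intermediate/int_customer_order_totals.sql
-- stands in for: INSERT INTO dbo.FactCustomerOrders SELECT ... FROM #orders_enriched
select
    customer_key,
    sum(order_total) as lifetime_order_total
from {{ ref('stg_orders_enriched') }}
group by customer_key
```

Because each hop is now a `ref()` edge, dbt can order, materialize, and test the intermediate result independently instead of hiding it inside a session-scoped temp table.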
Follow the import CTE -> logical CTE -> final CTE pattern:
All external data sources at the top of the model, each in its own CTE:
```sql
with source_customers as (
    select * from {{ source('bronze', 'customer') }}
),

dim_product as (
    select * from {{ ref('stg_dim_product') }}
),
```
Rules for import CTEs:

- Tables in source_tables that are raw/bronze use {{ source('<schema>', '<table>') }}; tables that already exist as dbt models use {{ ref('<model_name>') }}
- Name each import CTE after what it reads, never generically (cte1 or temp)

One transformation step per CTE, named descriptively:
```sql
customers_with_region as (
    select
        c.customer_key,
        c.first_name,
        g.country as region
    from source_customers c
    left join source_geography g
        on c.customer_key = g.customer_key
),

filtered_customers as (
    select *
    from customers_with_region
    where region is not null
),
```
Rules for logical CTEs:

- Use descriptive names, not step1/step2
- Replace procedure parameters with {{ var('param_name', 'default_value') }}

End with a final CTE and a single trailing select:

```sql
final as (
    select
        customer_key,
        first_name,
        region,
        current_timestamp() as _loaded_at
    from filtered_customers
)

select * from final
```
Add {{ config() }} at the top of the model:
```sql
{{ config(
    materialized='<materialization>'
) }}
```
For incremental models, add the watermark:
```sql
{{ config(
    materialized='incremental',
    unique_key='<pk_column>',
    incremental_strategy='merge'
) }}
```
And add the incremental filter in the appropriate logical CTE:
```sql
{% if is_incremental() %}
where <watermark_column> > (select max(<watermark_column>) from {{ this }})
{% endif %}
```
For snapshot models, use the dbt snapshot block pattern instead of a model.
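A minimal snapshot sketch, assuming the profile yielded a timestamp watermark usable as updated_at; the names dim_customer_snapshot, customer_key, and modified_date are placeholders, not values from the catalog:

```sql
{% snapshot dim_customer_snapshot %}

{{ config(
    target_schema='snapshots',
    unique_key='<pk_column>',
    strategy='timestamp',
    updated_at='<watermark_column>'
) }}

select * from {{ source('bronze', 'customer') }}

{% endsnapshot %}
```

If the profile has no reliable watermark, dbt's strategy='check' with an explicit check_cols list is the usual fallback.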
Compare the generated model against the original migrate statements. Check each of these:
| Check | What to compare |
|---|---|
| Source tables | Same tables read in generated model vs original proc? |
| Columns selected | Same columns in final SELECT vs original INSERT column list? |
| Join conditions | Same join keys and join types (INNER/LEFT/RIGHT/FULL)? |
| Filter predicates | Same WHERE/HAVING conditions (modulo syntax differences)? |
| Aggregation grain | Same GROUP BY columns? |
| Write semantics | INSERT/MERGE/UPDATE intent preserved by materialization? |
For each check:
For each check:

- If only syntax differs (e.g. ISNULL replaced with COALESCE): note as informational
- If the meaning differs: record a warning

If warnings exist, present them before proceeding:
```
Equivalence check found 2 warnings:
  1. [columns] Original INSERT includes column 'legacy_flag' not in generated model
  2. [filter] Original WHERE clause has 'IsActive = 1' not present in generated model

Proceed with these differences? (y/n)
```
Use AskUserQuestion and wait. If the user says no, revise the model.
Render schema_tests from context into a dbt schema YAML:
```yaml
version: 2

models:
  - name: <model_name>
    description: "Migrated from <writer_fqn>. Target: <table_fqn>."
    columns:
      - name: <pk_column>
        description: "Primary key"
        data_tests:
          - unique
          - not_null
      - name: <fk_column>
        description: "Foreign key to <ref_table>"
        data_tests:
          - relationships:
              to: ref('<ref_model>')
              field: <ref_column>
      - name: <pii_column>
        meta:
          contains_pii: true
          pii_action: <action>
```
Include recency test for incremental models if watermark is present.
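One way to express that recency test, assuming the dbt project includes the dbt_utils package; the datepart and interval shown are placeholders to tune per table:

```yaml
models:
  - name: <model_name>
    data_tests:
      - dbt_utils.recency:
          datepart: day
          field: <watermark_column>
          interval: 1
```

This is a model-level test: it fails when max(<watermark_column>) is older than the given interval, which catches a stalled incremental load without needing row-level assertions.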
Show the user the generated model SQL and the rendered schema YAML.
Use AskUserQuestion: "Approve this model to write to the dbt project? (y/n/edit)"
If the user requests edits, apply them and re-run the equivalence check on the edited version.
After approval:
```shell
uv run --project "${CLAUDE_PLUGIN_ROOT}/../lib" migrate write \
  --table <table_fqn> \
  --model-sql '<generated_sql>' \
  --schema-yml '<generated_yml>'
```
The dbt project path is resolved automatically from $DBT_PROJECT_PATH or defaults to ./dbt relative to the project root. Pass --dbt-project-path <path> only if you need to override this.
Report the written file paths to the user.
Run dbt compile to verify the generated model compiles:
```shell
cd "${DBT_PROJECT_PATH:-./dbt}" && dbt compile --select <model_name>
```
Report to the user:

```
dbt compile passed for <model_name>.
```
If compile fails with a non-connection error (syntax, bad ref, macro resolution):

- Fix the generated SQL, re-run migrate write, and re-run dbt compile (max 2 fix attempts)

If compile fails with a connection error (the adapter cannot reach the warehouse; look for "Could not connect", "Login failed", "Connection refused", or similar adapter errors in the output):

- Run dbt parse in the dbt project directory instead; dbt parse validates YAML syntax, Jinja syntax, and ref/source graph integrity without a connection

| Subcommand | Schema reference |
|---|---|
| context | See docs/design/sp-to-dbt-plugin/README.md section "migrate context" |
| write | `{ "written": [...], "status": "ok" }` |
Exit codes:

- migrate context exits 1 if the table has no profile or the writer has no statements: tell the user which prerequisite is missing
- migrate context exits 2 on IO/parse errors: surface the error message
- migrate write exits 1 on validation failure (empty SQL): tell the user to regenerate
- migrate write exits 2 on IO error (missing dbt project): tell the user to run /init-dbt