# wire — dbt Development Skill
Proactive skill for validating dbt models against coding conventions. Auto-activates when creating, reviewing, or refactoring dbt models in staging, integration, or warehouse layers. Validates naming, SQL structure, field conventions, testing coverage, and documentation. Supports project-specific convention overrides and sqlfluff integration.
Install with `npx claudepluginhub rittmananalytics/wire-plugin --plugin wire`. This skill uses the workspace's default tool permissions.
This skill automatically activates when working with dbt models to ensure adherence to coding conventions and best practices. It provides validation and recommendations for model structure, naming, SQL style, testing, and documentation.
Supporting files:

- `conventions-reference.md`
- `examples/before-after-refactor.md`
- `examples/integration-model-example.sql`
- `examples/merge-sources-macro.sql`
- `examples/multi-source-dbt-project-example.yml`
- `examples/multi-source-dimension-example.sql`
- `examples/multi-source-fact-example.sql`
- `examples/multi-source-integration-example.sql`
- `examples/multi-source-staging-example.sql`
- `examples/schema-example.yml`
- `examples/staging-model-example.sql`
- `examples/validation-report-fail.md`
- `examples/validation-report-pass.md`
- `examples/warehouse-model-example.sql`
- `testing-reference.md`
This skill should activate when users:
Keywords to watch for:
Activate BEFORE creating or modifying dbt SQL when:
Example internal triggers:
Priority Order (2-tier system):

1. Project-specific conventions (highest priority)
   - `.dbt-conventions.md` in project root
   - `dbt_coding_conventions.md` in project root
   - `docs/dbt_conventions.md` in project
2. PKM user conventions (fallback)
   - `/Users/olivierdupois/dev/PKM/4. 🛠️ Craft/Tools/dbt/dbt-conventions.md`
   - `/Users/olivierdupois/dev/PKM/4. 🛠️ Craft/Tools/dbt/dbt-testing.md`

Note: The skill's supporting files (conventions-reference.md, testing-reference.md, examples/) are embedded reference documentation that guide validation logic, not convention sources.
Detection:

- Use `Glob` to search for convention files in the project root
- Use `Read` to load project conventions

When working with a dbt model, determine:
Model Type:
- Staging (`stg_`): First transformation layer, selects from sources
- Integration (`int_`): Combines multiple sources, enriches entities
- Intermediate (`int__<object>__<action>`): Subcomponent of integration
- Dimension (`_dim`): Mutable, noun-based entities
- Fact (`_fct`): Immutable, verb-based events

Context Information:
How to identify:
- Inspect `ref()` and `source()` calls

Check the following:
File and Model Naming:
- Use singular object names (`user`, not `users`)
- Staging: `stg_<source>__<object>.sql` (e.g., `stg_salesforce__user.sql`)
- Integration: `int__<object>.sql` (e.g., `int__user.sql`)
- Intermediate: `int__<object>__<action>.sql` (e.g., `int__user__unioned.sql`)
- Dimensions: `<object>_dim.sql` or `<warehouse>_<object>_dim.sql` (e.g., `user_dim.sql`, `finance_revenue_dim.sql`)
- Facts: `<object>_fct.sql` or `<warehouse>_<object>_fct.sql`
Directory Structure:
models/
├── staging/
│ └── <source_name>/
│ ├── stg_<source>.yml
│ └── stg_<source>__<object>.sql
├── integration/
│ ├── intermediate/
│ │ ├── intermediate.yml
│ │ └── int__<object>__<action>.sql
│ ├── int__<object>.sql
│ └── integration.yml
└── warehouse/
└── <warehouse_name>/
├── <warehouse>.yml
├── <object>_dim.sql
└── <object>_fct.sql
Violations to Flag:
Required Structure:
with
s_source_table as (
select * from {{ ref('source_model') }}
),
s_another_source as (
select * from {{ ref('another_model') }}
),
CTE Naming:
- Prefix `s_` for CTEs that select from refs/sources
- Use descriptive names for transformation CTEs (e.g., `filtered_events`, `aggregated_metrics`)

Final CTE Pattern:
final as (
select
-- fields here
from s_source_table
-- joins and where clauses
)
select * from final
{{
config(
materialized = 'table',
sort = 'id',
dist = 'id'
)
}}
Style Requirements:
- Use the `as` keyword for aliases
- Prefer `union all` to `union distinct`
- Spell out join types (`inner join`, `left join`, never just `join`)
- Use meaningful aliases (`customer`, not `c`)

Violations to Flag:

- `ref()` or `source()` calls outside of top CTEs

Field Naming Conventions:
Primary Keys:
- Named `<object>_pk` (e.g., `user_pk`, `transaction_pk`)
- Generated with `dbt_utils.surrogate_key()`

Foreign Keys:

- Named `<referenced_object>_fk` (e.g., `user_fk`, `transaction_fk`)
- Generated with `dbt_utils.surrogate_key()`

Natural Keys:

- Named `<descriptive_name>_natural_key`
- Examples: `salesforce_user_natural_key`, `stripe_customer_natural_key`

Timestamps:

- Named `<event>_ts` (e.g., `created_ts`, `updated_ts`, `order_placed_ts`)
- Timezone-specific variants use a suffix (e.g., `created_ts_ct`, `created_ts_pt`)

Booleans:

- Prefixed with `is_` or `has_` (e.g., `is_active`, `has_subscription`)

Prices/Revenue:

- Store amounts in cents (e.g., `price_in_cents`)

Common Fields:

- Qualify generic names (`customer_name`, `carrier_name`, not just `name`)

General Rules:
- Use `snake_case`

Field Ordering (Staging/Base Models):
Within each category, sort alphabetically.
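As a sketch, a staging select following these ordering rules might look like the following (the grouping into keys, attributes, and timestamps is an assumption based on the conventions above; column names are illustrative):

```sql
with

s_user as (
    select * from {{ source('salesforce', 'user') }}
),

final as (
    select
        -- keys (primary key first; assumption)
        {{ dbt_utils.surrogate_key(['id']) }} as user_pk,
        id as salesforce_user_natural_key,

        -- attributes, alphabetical
        customer_name,
        email,
        is_active,

        -- timestamps, alphabetical
        created_ts,
        updated_ts
    from s_user
)

select * from final
```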
Violations to Flag:
Configuration Rules:
Warehouse Models:
- Materialize as `table`

Other Layers:

- Use `view` or ephemeral (CTE) materialization
- Use `table` only if performance requires it

Configuration Placement:

- Defaults belong in `dbt_project.yml`

Example:
{{
config(
materialized = 'table',
sort = 'user_pk',
dist = 'user_pk'
)
}}
Violations to Flag:
Minimum Testing Requirements:
Every Model:
- Must be documented in a `schema.yml` file
- Primary key must have `unique` and `not_null` tests
- Composite keys tested with `dbt_utils.unique_combination_of_columns`

Schema.yml Location:

- One `.yml` file per folder, named for its layer/source (e.g., `stg_salesforce.yml`, `integration.yml`)

Example:
```yaml
version: 2

models:
  - name: stg_salesforce__user
    description: Salesforce user records
    columns:
      - name: user_pk
        description: Unique identifier for user
        tests:
          - unique
          - not_null
      - name: email
        description: User email address
        tests:
          - not_null
```
Additional Tests:
- `relationships` tests for foreign keys
- `accepted_values` for enums/status fields
- `not_null_where` for conditional requirements
- Singular tests in the `tests/` directory for KPI validation

Violations to Flag:
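The additional tests above can be sketched in a `schema.yml` entry (model, column, and value names here are illustrative):

```yaml
version: 2

models:
  - name: user_dim
    columns:
      - name: account_fk
        tests:
          - relationships:
              to: ref('account_dim')
              field: account_pk
      - name: subscription_status
        tests:
          - accepted_values:
              values: ['active', 'trial', 'churned']
```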
Documentation Requirements:
Staging Models:
Warehouse Models:
Integration/Intermediate:
Doc Blocks:
- Use `{% docs %}` blocks for shared documentation
- Keep them in a `models/docs/` directory

Example:
```yaml
version: 2

models:
  - name: user_dim
    description: |
      User dimension containing customer profile information.
      Updated nightly from Salesforce and Stripe sources.
    columns:
      - name: user_pk
        description: "{{ doc('user_pk') }}"
```
Violations to Flag:
Check for sqlfluff:
which sqlfluff
If available:
- Look for a `.sqlfluff` config in the project root
- Run `sqlfluff lint <model_file> --dialect <dialect>`

If not available:
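For projects that lack one, a minimal `.sqlfluff` might look like the following (the dialect, line length, and use of the dbt templater are assumptions to adapt per project):

```ini
[sqlfluff]
dialect = bigquery
templater = dbt
max_line_length = 120

[sqlfluff:templater:dbt]
project_dir = .
```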
Structure your validation feedback as:
## dbt Model Validation Report
**Model:** `<model_name>.sql`
**Type:** <staging/integration/warehouse-dim/warehouse-fct>
**Convention Source:** <project-specific / RA defaults>
### Summary
- ✓ X checks passed
- ⚠️ Y issues found (N critical, M important, P nice-to-have)
### Naming Conventions
[✓/⚠️] **File naming:** <details>
[✓/⚠️] **Field naming:** <details>
### SQL Structure
[✓/⚠️] **CTE structure:** <details>
[✓/⚠️] **Style compliance:** <details>
[✓/⚠️] **Field ordering:** <details>
### Configuration
[✓/⚠️] **Materialization:** <details>
[✓/⚠️] **Performance settings:** <details>
### Testing
[✓/⚠️] **Schema.yml exists:** <details>
[✓/⚠️] **Primary key tests:** <details>
[✓/⚠️] **Foreign key tests:** <details>
### Documentation
[✓/⚠️] **Model description:** <details>
[✓/⚠️] **Column descriptions:** <details>
### sqlfluff
[✓/⚠️/N/A] **Linter results:** <details>
---
## Recommendations
### Critical Issues (must fix)
1. <issue description>
- **Location:** <file:line or section>
- **Current:** `<current code>`
- **Should be:** `<correct pattern>`
- **Reason:** <why this matters>
### Important Issues (should fix)
<same format>
### Nice-to-have Improvements
<same format>
---
## Examples
See `skills/dbt-development/examples/` for reference implementations:
- `staging-model-example.sql` - Compliant staging model
- `integration-model-example.sql` - Compliant integration model
- `warehouse-model-example.sql` - Compliant warehouse model
- `schema-example.yml` - Proper testing setup
This section describes the design pattern for building scalable, multi-source data warehouse frameworks. Use this pattern when integrating data from multiple source systems where the same entities (companies, contacts, products, locations) exist across sources with different IDs and attributes.
The multi-source framework uses a three-layer architecture:
Sources Layer (stg_*) → Integration Layer (int_*) → Warehouse Layer (wh_*)
| Layer | Purpose | Naming | Materialization |
|---|---|---|---|
| Sources | Source-specific transformations, column standardization, ID prefixing | stg_<source>__<object>.sql | view |
| Integration | Cross-source entity resolution, deduplication, merging | int__<object>.sql | view or table |
| Warehouse | Final dimensional models with surrogate keys | <object>_dim.sql, <object>_fct.sql | table |
A key feature is the ability to enable or disable data sources through dbt variables. This allows selective deployment, gradual rollout, and environment-specific configurations.
```yaml
# dbt_project.yml
vars:
  # Source enablement arrays - add/remove sources as needed
  crm_warehouse_company_sources: ['hubspot_crm', 'xero_accounting', 'harvest_projects', 'stripe_payments']
  crm_warehouse_contact_sources: ['hubspot_crm', 'mailchimp_email', 'harvest_projects', 'jira_projects']
  finance_warehouse_invoice_sources: ['xero_accounting', 'harvest_projects']
  projects_warehouse_delivery_sources: ['asana_projects', 'jira_projects']

  # Per-source configuration
  stg_hubspot_crm_id-prefix: 'hubspot-'
  stg_hubspot_crm_etl: 'fivetran'
  stg_hubspot_crm_schema: 'fivetran_hubspot'

  stg_xero_accounting_id-prefix: 'xero-'
  stg_xero_accounting_etl: 'fivetran'
  stg_xero_accounting_schema: 'fivetran_xero'

  stg_harvest_projects_id-prefix: 'harvest-'
  stg_harvest_projects_etl: 'stitch'
  stg_harvest_projects_schema: 'stitch_harvest'

  # Feature flags
  enable_companies_merge_file: true
  enable_ip_geo_enrichment: false
```
Models check if their source is enabled before compiling:
-- models/sources/stg_hubspot_crm/stg_hubspot_crm__company.sql
-- Only compile if hubspot_crm is in the company sources list
{% if var("crm_warehouse_company_sources") %}
{% if 'hubspot_crm' in var("crm_warehouse_company_sources") %}
with source as (
select * from {{ source('hubspot_crm', 'companies') }}
),
renamed as (
select
-- Prefix ID with source identifier to prevent collisions
concat('{{ var("stg_hubspot_crm_id-prefix") }}', cast(companyid as string)) as company_id,
-- Standardize names for matching
trim(regexp_replace(
regexp_replace(properties_name, r'(?i)\s*(Limited|Ltd\.?|Inc\.?|LLC|Corp\.?)$', ''),
r'\s+', ' '
)) as company_name,
lower(properties_website) as company_website,
properties_industry as company_industry,
properties_phone as company_phone,
properties_createdate as company_created_ts,
properties_hs_lastmodifieddate as company_last_modified_ts
from source
)
select * from renamed
{% endif %}
{% else %} {{ config(enabled=false) }} {% endif %}
Support multiple ETL pipelines in the same model:
-- models/sources/stg_hubspot_crm/stg_hubspot_crm__company.sql
{% if var("crm_warehouse_company_sources") %}
{% if 'hubspot_crm' in var("crm_warehouse_company_sources") %}
{% if var("stg_hubspot_crm_etl") == 'stitch' %}
with source as (
select * from {{ source('stitch_hubspot_crm', 'companies') }}
),
-- Stitch-specific transformations...
{% elif var("stg_hubspot_crm_etl") == 'fivetran' %}
with source as (
select * from {{ source('fivetran_hubspot_crm', 'company') }}
),
-- Fivetran-specific transformations...
{% elif var("stg_hubspot_crm_etl") == 'airbyte' %}
with source as (
select * from {{ source('airbyte_hubspot_crm', 'companies') }}
),
-- Airbyte-specific transformations...
{% endif %}
renamed as (
-- Common transformation logic
)
select * from renamed
{% endif %}
{% else %} {{ config(enabled=false) }} {% endif %}
Each source view must prefix all IDs to prevent collisions:
-- Each source uses its own prefix
concat('{{ var("stg_hubspot_crm_id-prefix") }}', cast(id as string)) as company_id -- 'hubspot-12345'
concat('{{ var("stg_xero_accounting_id-prefix") }}', cast(id as string)) as company_id -- 'xero-67890'
concat('{{ var("stg_harvest_projects_id-prefix") }}', cast(id as string)) as company_id -- 'harvest-abc123'
Use the merge_sources macro to dynamically union enabled sources:
-- macros/merge_sources.sql
{% macro merge_sources(sources, model_suffix) %}
(
{% set relations_list = [] %}
{% for source in sources %}
{% do relations_list.append(ref("stg_" ~ source ~ model_suffix)) %}
{% endfor %}
{{ dbt_utils.union_relations(relations=relations_list) }}
)
{% endmacro %}
-- models/integration/int__company_pre_merged.sql
{% if var('crm_warehouse_company_sources') %}
with companies_pre_merged as (
{{ merge_sources(sources=var('crm_warehouse_company_sources'), model_suffix='__company') }}
),
-- Collect all source IDs into arrays grouped by company name
all_company_ids as (
select
company_name,
array_agg(distinct company_id ignore nulls) as all_company_ids
from companies_pre_merged
where company_name is not null and trim(company_name) != ''
group by 1
),
-- Deduplicate attributes by taking best/max values
companies_grouped as (
select
company_name,
max(company_website) as company_website,
max(company_industry) as company_industry,
max(company_phone) as company_phone,
min(company_created_ts) as company_created_ts,
max(company_last_modified_ts) as company_last_modified_ts,
count(distinct _dbt_source_relation) as source_count
from companies_pre_merged
where company_name is not null and trim(company_name) != ''
group by 1
),
final as (
select
g.*,
a.all_company_ids
from companies_grouped g
join all_company_ids a on g.company_name = a.company_name
)
select * from final
{% else %} {{ config(enabled=false) }} {% endif %}
For complex merges where name matching isn't sufficient, use a seed file:
# data/companies_merge_list.csv
company_id,old_company_id
hubspot-12345,xero-67890
hubspot-12345,harvest-abc123
hubspot-99999,xero-88888
-- models/integration/int__company.sql
{% if var('crm_warehouse_company_sources') %}
with companies_pre_merged as (
select * from {{ ref('int__company_pre_merged') }}
),
{% if var('enable_companies_merge_file', false) %}
-- Apply manual merge mappings
merge_list as (
select * from {{ ref('companies_merge_list') }}
),
-- Identify companies to be merged
merged_ids as (
select
c2.company_name,
array_concat_agg(
case
when c1.company_name is not null then c1.all_company_ids
else c2.all_company_ids
end
) as all_company_ids
from companies_pre_merged c2
left join merge_list m on m.company_id in unnest(c2.all_company_ids)
left join companies_pre_merged c1 on m.old_company_id in unnest(c1.all_company_ids)
group by 1
),
-- Exclude companies that were merged INTO another company
excluded_companies as (
select distinct c1.company_name
from merge_list m
join companies_pre_merged c1 on m.old_company_id in unnest(c1.all_company_ids)
),
final as (
select
c.company_name,
c.company_website,
c.company_industry,
c.company_phone,
c.company_created_ts,
c.company_last_modified_ts,
c.source_count,
coalesce(m.all_company_ids, c.all_company_ids) as all_company_ids
from companies_pre_merged c
left join merged_ids m on c.company_name = m.company_name
where c.company_name not in (select company_name from excluded_companies)
)
{% else %}
-- No merge file, use pre-merged directly
final as (
select * from companies_pre_merged
)
{% endif %}
select * from final
{% else %} {{ config(enabled=false) }} {% endif %}
-- models/warehouse/core/company_dim.sql
{% if var("crm_warehouse_company_sources") %}
{{
config(
materialized='table',
unique_key='company_pk'
)
}}
with companies as (
select * from {{ ref('int__company') }}
),
final as (
select
{{ dbt_utils.generate_surrogate_key(['company_name']) }} as company_pk,
company_name,
company_website,
company_industry,
company_phone,
company_created_ts,
company_last_modified_ts,
source_count,
all_company_ids
from companies
)
select * from final
{% else %} {{ config(enabled=false) }} {% endif %}
Join fact tables to dimensions using the array of source IDs:
-- models/warehouse/finance/invoice_fct.sql
{% if var("finance_warehouse_invoice_sources") %}
{{
config(
materialized='table',
unique_key='invoice_pk'
)
}}
with invoices as (
select * from {{ ref('int__invoice') }}
),
companies_dim as (
select * from {{ ref('company_dim') }}
),
final as (
select
{{ dbt_utils.generate_surrogate_key(['i.invoice_number']) }} as invoice_pk,
c.company_pk as company_fk,
-- Invoice attributes
i.invoice_number,
i.invoice_amount,
i.invoice_currency,
i.invoice_status,
i.invoice_created_ts,
i.invoice_due_ts,
i.invoice_paid_ts,
-- Calculated fields
row_number() over (partition by c.company_pk order by i.invoice_created_ts) as invoice_seq,
datediff('day', i.invoice_created_ts, i.invoice_paid_ts) as days_to_pay
from invoices i
-- JOIN using UNNEST to match any source system ID
left join companies_dim c
on i.company_id in unnest(c.all_company_ids)
)
select * from final
{% else %} {{ config(enabled=false) }} {% endif %}
When implementing the multi-source pattern:
Configuration

- Source arrays defined in `dbt_project.yml` for each entity type

Source Layer

Integration Layer

- `merge_sources` macro used for dynamic unions

Warehouse Layer

- Fact-dimension joins use the `IN UNNEST()` pattern

models/
├── sources/
│ ├── stg_hubspot_crm/
│ │ ├── _hubspot_crm__sources.yml
│ │ ├── stg_hubspot_crm__company.sql
│ │ ├── stg_hubspot_crm__contact.sql
│ │ └── stg_hubspot_crm__deal.sql
│ ├── stg_xero_accounting/
│ │ ├── _xero_accounting__sources.yml
│ │ ├── stg_xero_accounting__company.sql
│ │ ├── stg_xero_accounting__contact.sql
│ │ └── stg_xero_accounting__invoice.sql
│ └── stg_harvest_projects/
│ ├── _harvest_projects__sources.yml
│ ├── stg_harvest_projects__company.sql
│ ├── stg_harvest_projects__contact.sql
│ └── stg_harvest_projects__invoice.sql
├── integration/
│ ├── _integration__schema.yml
│ ├── int__company_pre_merged.sql
│ ├── int__company.sql
│ ├── int__contact_pre_merged.sql
│ ├── int__contact.sql
│ └── int__invoice.sql
└── warehouse/
├── core/
│ ├── _core__schema.yml
│ ├── company_dim.sql
│ └── contact_dim.sql
└── finance/
├── _finance__schema.yml
├── invoice_fct.sql
└── payment_fct.sql
macros/
└── merge_sources.sql
data/
├── companies_merge_list.csv
└── contacts_merge_list.csv
To add a new data source:
Add configuration to `dbt_project.yml`:

```yaml
vars:
  crm_warehouse_company_sources: ['hubspot_crm', 'xero_accounting', 'NEW_SOURCE']
  stg_new_source_id-prefix: 'newsource-'
  stg_new_source_etl: 'fivetran'
  stg_new_source_schema: 'fivetran_new_source'
```
Create a source definition (`_new_source__sources.yml`):

```yaml
version: 2

sources:
  - name: new_source
    database: "{{ var('stg_new_source_database', target.database) }}"
    schema: "{{ var('stg_new_source_schema') }}"
    tables:
      - name: companies
      - name: contacts
```
Gate each staging model with conditional compilation:

{% if var("crm_warehouse_company_sources") %}
{% if 'new_source' in var("crm_warehouse_company_sources") %}
-- Model SQL here
{% endif %}
{% else %} {{ config(enabled=false) }} {% endif %}
To remove a source, update `dbt_project.yml`:

```yaml
vars:
  # 'harvest_projects' removed from list
  crm_warehouse_company_sources: ['hubspot_crm', 'xero_accounting']
```
- The source models will automatically disable due to conditional compilation
- Historical source IDs remain in dimension arrays for audit purposes
| Issue | Cause | Solution |
|---|---|---|
| Model not compiling | Source not in enablement array | Add source to *_sources variable |
| Duplicate dimension records | Inconsistent name normalization | Ensure identical cleaning logic in all sources |
| Missing fact-dimension joins | Source ID not in array | Verify ID prefix is consistent |
| Orphaned fact records | Company exists in fact but not dim source | Add source to dimension source list |
| Array contains duplicates | Missing DISTINCT in ARRAY_AGG | Add distinct keyword |
| Wrong ETL source used | ETL variable incorrect | Check stg_*_etl variable value |
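To investigate the duplicate-dimension-records case in the table above, an ad-hoc check might look like this (BigQuery-style SQL; the model and column names follow the examples in this guide):

```sql
-- Find normalized company names that still produce more than one dimension row,
-- which usually indicates inconsistent name-cleaning logic across sources
select
    company_name,
    count(*) as row_count
from {{ ref('company_dim') }}
group by company_name
having count(*) > 1
order by row_count desc
```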
When creating a new dbt model from scratch:
Step-by-step Process:
Determine Model Type
Generate File Structure
Build SQL Structure
Apply Field Conventions
Create/Update schema.yml
Validate Against Conventions
In This Skill Directory:
- `conventions-reference.md` - Quick reference for naming, style, structure
- `testing-reference.md` - Test requirements and transformation layers
- `examples/staging-model-example.sql` - Staging model template
- `examples/integration-model-example.sql` - Integration model template
- `examples/warehouse-model-example.sql` - Warehouse model template
- `examples/schema-example.yml` - Testing and documentation example
- `examples/multi-source-staging-example.sql` - Multi-source staging model with conditional compilation
- `examples/multi-source-integration-example.sql` - Entity deduplication and merging
- `examples/multi-source-dimension-example.sql` - Dimension with source ID arrays
- `examples/multi-source-fact-example.sql` - Fact table with UNNEST joins
- `examples/merge-sources-macro.sql` - Dynamic source union macro
- `examples/multi-source-dbt-project-example.yml` - Configuration-driven source management

Convention Sources (2-tier system):
1. Project-specific: `.dbt-conventions.md` (if it exists in the project)
2. Fallback: `/Users/olivierdupois/dev/PKM/4. 🛠️ Craft/Tools/dbt/dbt-conventions.md` and `dbt-testing.md`

Always Validate When:
Validation Mode (Not Auto-fix):
Project Awareness:
Priority Levels:
Example 1: Creating a Staging Model
User: "Create a staging model for Hubspot contacts"
Actions:
1. Activate dbt Development skill
2. Load convention source (project or RA defaults)
3. Determine: staging model, Hubspot source, contact object
4. Generate: stg_hubspot__contact.sql with proper structure
5. Create schema.yml entry with tests
6. Validate against all conventions
7. Present model for review
Example 2: Reviewing Existing Model
User: "Review this dbt model" [provides file]
Actions:
1. Activate dbt Development skill
2. Load convention source
3. Identify model type from filename/content
4. Run through validation checklist (naming, structure, fields, tests, docs)
5. Check sqlfluff if available
6. Generate validation report with recommendations
Example 3: Refactoring
User: "This integration model needs refactoring to match conventions"
Actions:
1. Activate dbt Development skill
2. Load conventions
3. Analyze current model structure
4. Identify violations
5. Provide detailed refactoring plan with before/after examples
6. Offer to apply changes section by section with user approval
Example 4: Multi-Source Entity Resolution
User: "I need to create a company dimension that combines data from HubSpot, Xero, and Harvest"
Actions:
1. Activate dbt Development skill
2. Load conventions
3. Identify this as a multi-source entity resolution task
4. Review dbt_project.yml for existing source configuration
5. Create/update source arrays in vars section
6. Generate staging models for each source with:
- Conditional compilation checks
- ID prefixing using source-specific prefix
- Standardized column names
7. Create integration models:
- int__company_pre_merged.sql using merge_sources macro
- int__company.sql with optional merge list support
8. Create warehouse model:
- company_dim.sql with surrogate key
- Preserved all_company_ids array
9. Create merge_sources macro if not exists
10. Create schema.yml with appropriate tests
11. Validate all models against conventions
Example 5: Adding a New Source
User: "We just connected Stripe and need to add it to our company dimension"
Actions:
1. Activate dbt Development skill
2. Load conventions
3. Review existing source configuration in dbt_project.yml
4. Add new source configuration:
- Add 'stripe_payments' to crm_warehouse_company_sources array
- Add stg_stripe_payments_id-prefix variable
- Add stg_stripe_payments_schema variable
5. Create source definition file
6. Create staging model stg_stripe_payments__company.sql with:
- Conditional compilation check
- ID prefixing
- Standardized columns matching other company sources
7. Integration models will automatically include via merge_sources macro
8. Validate new model against conventions
9. Recommend testing new source independently before production
When running dbt commands:
- Use the `--quiet` flag for cleaner output — reduces noise from `dbt run` and `dbt build`
- Run `dbt list --select <selector>` before `dbt build` or `dbt run` — avoids accidentally running more models than intended
- Use `dbt show --limit N` instead of writing `SELECT ... LIMIT N` queries — lets you preview model output without running the full model
- Parse `target/run_results.json` after runs to get per-model timing, status, and row counts — useful for performance analysis and debugging
- Prefer `dbt build` over separate `dbt run` + `dbt test` — build runs models and their tests together in dependency order, catching failures earlier
- Use `--warn-error-options` to promote specific warnings to errors — prevents silent issues from accumulating
- Fusion runtime (`dbtf`): if the project uses the Fusion runtime, invoke with `dbtf` or `~/.local/bin/dbt` (not the venv dbt). Fusion is faster and has stricter SQL parsing — see the dbt-fusion skill for migration guidance.
- Add `.md` to any docs.getdbt.com URL to get clean markdown (e.g. https://docs.getdbt.com/reference/commands/run.md). Use https://docs.getdbt.com/llms.txt to find available pages, or https://docs.getdbt.com/llms-full.txt for full-text search.

Do NOT activate this skill when: