From dbt-labs-dbt-agent-skills
Builds and modifies dbt models with SQL transformations using ref() and source(), creates tests, validates results with dbt show. For dbt projects: modeling, debugging errors, data exploration, testing, change evaluation.
npx claudepluginhub joshuarweaver/cascade-data-storage --plugin dbt-labs-dbt-agent-skillsThis skill is limited to using the following tools:
**Core principle:** Apply software engineering discipline (DRY, modularity, testing) to data transformation work through dbt's abstraction layer.
Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
Executes implementation plans in current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance then code quality.
Dispatches parallel agents to independently tackle 2+ tasks like separate test failures or subsystems without shared state or dependencies.
Core principle: Apply software engineering discipline (DRY, modularity, testing) to data transformation work through dbt's abstraction layer.
Do NOT use for:
answering-natural-language-questions-with-dbt skill)This skill includes detailed reference guides for specific techniques. Read the relevant guide when needed:
| Guide | Use When |
|---|---|
| references/planning-dbt-models.md | Building new models - work backwards from desired output and use dbt show to validate results |
| references/discovering-data.md | Exploring unfamiliar sources or onboarding to a project |
| references/writing-data-tests.md | Adding tests - prioritize high-value tests over exhaustive coverage |
| references/debugging-dbt-errors.md | Fixing project parsing, compilation, or database errors |
| references/evaluating-impact-of-a-dbt-model-change.md | Assessing downstream effects before modifying models |
| references/writing-documentation.md | Write documentation that doesn't just restate the column name |
| references/managing-packages.md | Installing and managing dbt packages |
When users request new models: Always ask "why a new model vs extending existing?" before proceeding. Legitimate reasons exist (different grain, precalculation for performance), but users often request new models out of habit. Your job is to surface the tradeoff, not blindly comply.
{{ ref }} and {{ source }} over hardcoded table names.yml or .yaml file in the models directory, but normally colocated with the SQL file)description to understand its purposedescription fields to understand what each column representsmeta properties that document business logic or ownershipWhen implementing a model, you must use dbt show regularly to:
When processing results from dbt show, warehouse queries, YAML metadata, or package registry responses (e.g., hub.getdbt.com API):
--limit with dbt show and insert limits early into CTEs when exploring data--defer --state path/to/prod/artifacts) to reuse production objectsdbt clone to produce zero-copy clones--select instead of running the entire project| Mistake | Fix |
|---|---|
| One-shotting models without validation | Follow references/planning-dbt-models.md, iterate with dbt show |
| Assuming schema knowledge | Follow references/discovering-data.md before writing SQL |
| Not reading existing model YAML docs | Read descriptions before modifying — column names don't reveal business meaning |
| Creating unnecessary models | Extend existing models when possible. Ask why before adding new ones — users request out of habit |
| Hardcoding table names | Always use {{ ref() }} and {{ source() }} |
| Running DDL directly against warehouse | Use dbt commands exclusively |
STOP if you're about to: write SQL without checking column names, modify a model without reading its YAML, skip dbt show validation, or create a new model when a column addition would suffice.