Plugins listed here are tagged for this technology stack and auto-indexed from public GitHub repositories.
Claude Code plugins tagged for dbt development. Browse commands, agents, skills, and more.
Build production-ready data engineering stacks: Airflow DAGs for orchestration, dbt models for transformations, scalable pipelines with Spark on cloud warehouses like BigQuery and Snowflake, Kafka streaming, optimized embeddings for RAG, and vector databases like Pinecone, Weaviate, and pgvector.
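The vector databases named above (Pinecone, Weaviate, pgvector) serve RAG retrieval at scale with approximate-nearest-neighbour indexes; the operation they accelerate is just similarity search over embeddings. A minimal stdlib sketch of that underlying operation (document ids and vectors are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, docs, k=1):
    """Return the ids of the k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine_similarity(query, d["embedding"]),
                    reverse=True)
    return [d["id"] for d in ranked[:k]]

docs = [
    {"id": "a", "embedding": [1.0, 0.0, 0.0]},
    {"id": "b", "embedding": [0.0, 1.0, 0.0]},
    {"id": "c", "embedding": [0.7, 0.7, 0.0]},
]
print(top_k([0.9, 0.1, 0.0], docs, k=2))
```

Real stacks replace the brute-force scan with an ANN index (HNSW, IVF) once the corpus outgrows memory, but the ranking logic is the same.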
Delegate complex data engineering, ML, and AI workflows to specialized sub-agents that design scalable pipelines, build and optimize models, architect LLM systems, tune databases for performance, and deploy production infrastructure across clouds.
Develop full-stack Databricks solutions: create Spark/Lakeflow pipelines, MLflow models and agents, Vector Search indexes, AI/BI dashboards, Genie Spaces, Unity Catalog metrics, Lakebase PostgreSQL, jobs/workflows, apps with Streamlit/FastAPI, and deploy via bundles/CI-CD using Python SDK skills and direct MCP workspace access.
Generate and update project documentation via slash commands: create architecture docs with C4/Mermaid, onboarding/migration/troubleshooting guides, dbt model YAML, Keep a Changelog entries; analyze git changes/GitHub issues for explanations and README updates.
Build and test dbt models using SQL transformations, ref/source, and YAML unit tests; configure semantic layers for metrics, dimensions, and KPI queries; troubleshoot Cloud jobs with logs, API, and git; implement Mesh governance for contracts and cross-project refs; access docs; format CLI commands; generate MCP configs for VS Code integration.
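As a sketch of the model-plus-unit-test workflow this entry describes, assuming dbt 1.8+ (which introduced YAML `unit_tests`) and using illustrative model, source, and column names:

```sql
-- models/stg_orders.sql (hypothetical staging model)
select
    id as order_id,
    customer_id,
    amount_cents / 100.0 as amount
from {{ source('shop', 'raw_orders') }}
```

```yaml
# models/stg_orders.yml -- unit test for the model above (names illustrative)
unit_tests:
  - name: converts_cents_to_dollars
    model: stg_orders
    given:
      - input: source('shop', 'raw_orders')
        rows:
          - {id: 1, customer_id: 10, amount_cents: 250}
    expect:
      rows:
        - {order_id: 1, customer_id: 10, amount: 2.5}
```

`dbt test --select stg_orders` then runs the unit test against the mocked input rows without touching warehouse data.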
Generate Mermaid diagrams visualizing dbt model lineage and dependencies as color-coded DAGs in markdown with legends. Input manifest.json, use MCP tools, or parse code directly to quickly diagram data pipelines and model relationships for documentation and analysis.
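The core manifest-to-Mermaid transformation is small; a hedged sketch, assuming a dict shaped like the `parent_map` section of dbt's manifest.json (node ids simplified, colors and legend omitted):

```python
def manifest_to_mermaid(parent_map):
    """Render a dbt-style parent_map as a Mermaid flowchart.

    parent_map maps each node's unique_id to the list of unique_ids it
    depends on, mirroring the parent_map section of dbt's manifest.json.
    """
    lines = ["flowchart LR"]
    for child, parents in sorted(parent_map.items()):
        for parent in parents:
            # Mermaid node ids cannot contain dots, so replace them.
            lines.append(f"    {parent.replace('.', '_')} --> {child.replace('.', '_')}")
    return "\n".join(lines)

parent_map = {
    "model.shop.orders": ["model.shop.stg_orders"],
    "model.shop.stg_orders": ["source.shop.raw_orders"],
}
print(manifest_to_mermaid(parent_map))
```

Dropping the output into a fenced ```mermaid block in markdown renders the DAG; the plugin additionally color-codes nodes by type and adds a legend.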
Streamline Airflow data engineering workflows using Astro CLI: initialize and manage local/production environments, author/debug/deploy DAGs, profile warehouse schemas with lineage tracing, integrate dbt Cosmos, query tables, and migrate to Airflow 3.x.
Accelerate dbt model workflows in Snowflake by creating, debugging, refactoring, testing, documenting, and migrating SQL to modular models with built-in validation. Identify expensive queries from history, profile them by ID, and rewrite SQL to fix performance bottlenecks like spillage and poor pruning.
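The "expensive queries from history" step can be sketched against Snowflake's `ACCOUNT_USAGE.QUERY_HISTORY` view; the thresholds here are illustrative, not the plugin's actual heuristics:

```sql
-- Hedged sketch: surface recent queries that spilled to disk or scanned
-- nearly all partitions (a sign of poor pruning).
select
    query_id,
    total_elapsed_time / 1000 as elapsed_s,
    bytes_spilled_to_local_storage,
    partitions_scanned,
    partitions_total
from snowflake.account_usage.query_history
where start_time > dateadd('day', -7, current_timestamp())
  and (bytes_spilled_to_local_storage > 0
       or partitions_scanned > 0.9 * partitions_total)
order by total_elapsed_time desc
limit 20;
```

The resulting `query_id` values feed the profile-by-ID step; note `ACCOUNT_USAGE` views lag real time by up to a few hours, so very recent queries may be missing.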
Engineer robust ETL pipelines: clean messy CSVs/Parquet, infer schemas, profile datasets, detect anomalies, validate quality with Pydantic/Pandera/Great Expectations, implement incremental patterns, generate dbt models/SQL migrations/tests, and orchestrate autonomous backfills/pipeline testing via agents and CLI commands.
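The incremental pattern mentioned above reduces to a high-watermark filter; a minimal stdlib sketch (field names illustrative — in generated dbt models the same idea appears as an `is_incremental()` predicate):

```python
def incremental_filter(rows, watermark, ts_key="updated_at"):
    """Keep only rows newer than the last processed watermark and
    return them together with the advanced watermark."""
    fresh = [r for r in rows if r[ts_key] > watermark]
    new_watermark = max((r[ts_key] for r in fresh), default=watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "updated_at": "2024-01-01"},
    {"id": 2, "updated_at": "2024-01-03"},
    {"id": 3, "updated_at": "2024-01-05"},
]
fresh, wm = incremental_filter(rows, "2024-01-02")
print(fresh, wm)
```

Persisting `wm` between runs is what makes backfills resumable: re-running the pipeline picks up only rows added since the stored watermark.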
Build and orchestrate end-to-end GCP data pipelines using natural language — generate Dataform/dbt code, run Spark and BigQuery SQL notebooks, provision infrastructure, and troubleshoot Cloud Composer workflows, all from your coding agent.
Initialize drt Reverse ETL projects with data warehouses like BigQuery, DuckDB, or Postgres; generate YAML sync configs to destinations such as Slack, Discord, Jira, or REST APIs; debug failing syncs from auth errors to config issues; migrate pipelines from Census, Hightouch, or custom scripts to drt YAML.
Build dbt data models with dimensional patterns, staging/marts organization, and tests; deploy and manage Fly.io apps using Docker, fly.toml, volumes, secrets, and multi-region setups for Python/Node/Rails/Django; design services via customer journey maps, blueprints, and touchpoints; apply strategy frameworks like RICE, ICE, and Ansoff for prioritization and growth planning.
Automate Omni Analytics workflows via REST API and embed SDK: build/edit semantic models in YAML, run queries on the semantic layer, embed dashboards with custom themes/filters, administer users/permissions/schedules, optimize models for AI agents, evaluate query accuracy, and export metrics to Snowflake/Databricks using CLI skills and specialized agents.
Initialize drt Reverse ETL projects with BigQuery or PostgreSQL, generate YAML sync configs from dbt models or SQL to destinations like Slack, HubSpot, GitHub Actions, or databases, debug auth timeouts and config errors, and migrate pipelines from Census or Hightouch.
Streamline end-to-end data science and ML workflows: frame business problems into ML tasks, preprocess and validate data with quality checks, perform EDA on diverse formats, design and execute experiments with hyperparameter tuning via Optuna and interpretability via SHAP, audit reproducibility and leakage, evaluate model performance and readiness for deployment, generate model cards, and extract structured learnings into docs.
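To illustrate only the tuning step (the plugin itself uses Optuna), a stdlib random-search stand-in over a toy objective with a known minimum; everything here is a stand-in, not the plugin's implementation:

```python
import random

def random_search(objective, bounds, n_trials=200, seed=0):
    """Minimise a 1-D objective by random sampling -- a stdlib stand-in
    for what Optuna's samplers do more efficiently."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_x, best_val = None, float("inf")
    for _ in range(n_trials):
        x = rng.uniform(lo, hi)
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy "validation loss" with its minimum at x = 2.
best_x, best_val = random_search(lambda x: (x - 2) ** 2, (-5.0, 5.0))
print(best_x, best_val)
```

In the real workflow the lambda would be a train-and-evaluate function and `x` a hyperparameter (learning rate, depth); Optuna adds pruning of bad trials and smarter sampling on top of this loop.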
Design AWS and GCP infrastructure using Terraform and Ansible patterns, build data pipelines with dbt and SQLMesh, generate and manage RFCs plus technical specs in Markdown, and automate local dev setups including direnv, git worktrees, and port allocations for Docker services.