By dlt-hub
Build, extend, debug, validate, productionize, and explore REST API data pipelines using dlt: scaffold from APIs or sources, add endpoints, inspect runs/errors, check schemas/data quality, query loaded datasets, manage workspaces via MCP.
npx claudepluginhub dlt-hub/dlthub-ai-workbench --plugin rest-api-pipeline

Adjust a working dlt pipeline for production: remove dev limits, verify pagination, configure incremental loading, expand date ranges. Use when the user wants to remove .add_limit(), load more data, fix pagination, or set up incremental loading.
Create a dlt REST API pipeline. Use for the rest_api core source, or any generic REST/HTTP API source. Not for sql_database or filesystem sources.
Debug and inspect a dlt pipeline after running it. Use after a pipeline run (success or failure) to inspect traces, load packages, schema, data, and diagnose errors like missing credentials or failed jobs.
Find a dlt source for a given API or data provider. Use when the user asks about a source, wants to find a connector, or asks to implement a pipeline for a specific data source.
Add a new REST API endpoint/resource to an existing dlt pipeline. Use when the user wants to pull additional data from an API that already has a working pipeline.
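Adding a dependent endpoint to an existing rest_api configuration can look like this sketch; the paths, resource names, and field names are assumptions for illustration.

```python
# Hypothetical new resource appended to an existing rest_api "resources"
# list. "resolve" fills the path placeholder from each row of the parent
# resource, so one request is issued per parent user.
new_resource = {
    "name": "user_orders",
    "endpoint": {
        "path": "users/{user_id}/orders",
        "params": {
            "user_id": {"type": "resolve", "resource": "users", "field": "id"},
        },
    },
}
```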
Validate schema and data after a successful dlt pipeline load. Use when the user wants to check if loaded data looks correct, inspect table schemas, fix data types, flatten nested structures, or refine the data shape.
Query, explore, or view data loaded by a dlt pipeline. Use when the user asks to query data, explore loaded tables, check row counts, write Python that reads pipeline data, or asks questions like "show me the data", "what users are there", "how much did we spend". Covers dlt dataset API, ibis expressions, and ReadableRelation.
Share bugs, ideas, or general feedback.
Data engineering and ETL tools. Includes 3 specialized agents, 4 commands, and 19 skills.
Data engineering plugin - warehouse exploration, pipeline authoring, Airflow integration
Skills for drt — Reverse ETL for the code-first data stack
Automated data preprocessing and cleaning pipelines
Data engineering agents providing expertise in ETL pipelines, streaming, and data warehousing