`npx claudepluginhub dlt-hub/dlthub-ai-workbench`

- Data engineering agent that creates, extends, and explores dlt data pipelines
- Build REST API pipelines with dlt: scope, debug, and validate data
- Prepare a Python environment for the dltHub workspace
- Quick insights from dlt pipeline data: connect to a pipeline, profile tables, plan charts, and assemble marimo dashboards
- Deploy dlt workspace and pipelines to the dltHub platform
- Shared rules, secrets handling, and workspace MCP for dlt
- Transform raw dlt pipeline data into a Canonical Data Model: build an ontology, design a CDM with Kimball dimensional modeling, write @dlt.hub.transformation functions, and validate the output
dlt (data load tool) is an open-source Python library for loading data from APIs and databases into a warehouse or lakehouse. dltHub (paid platform) extends dlt with enterprise-grade features tailored to the needs of coding agents: transformations, data quality validation, managed runtime infrastructure, managed data apps, and an AI-powered workspace environment.
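The extract-normalize-load pattern that dlt automates can be pictured with a toy, stdlib-only sketch. Everything here is illustrative: sqlite3 stands in for the warehouse, and real dlt pipelines add schema inference, typing, and incremental state on top of this loop.

```python
import sqlite3

# Toy records, as an API might return them
rows = [
    {"id": 1, "name": "alice", "plan": "free"},
    {"id": 2, "name": "bob", "plan": "pro"},
]

# "Normalize": derive a flat, stable column list from the records
columns = sorted({key for row in rows for key in row})

# "Load": create the table and insert, with sqlite3 as a stand-in destination
con = sqlite3.connect(":memory:")
con.execute(f"CREATE TABLE users ({', '.join(columns)})")
con.executemany(
    f"INSERT INTO users ({', '.join(columns)}) "
    f"VALUES ({', '.join('?' for _ in columns)})",
    [tuple(row.get(col) for col in columns) for row in rows],
)
print(con.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 2
```

With dlt itself, the normalize and load steps collapse into a single `pipeline.run(...)` call against a configured destination.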

The dltHub AI Workbench is a collection of toolkits that give AI coding assistants step-by-step workflows to build data pipelines with dlt. You can use the workbench as-is or fork and customize it for your own stack. The dlt ai CLI installs toolkit components into the right locations for your assistant and runs the workspace MCP server.
Build toolkits cover ingestion (REST API, SQL), transformation, and data quality; Run toolkits handle deployment and exploration. The REST API toolkit is backed by the dltHub context — over 9,700 source definitions the agent queries to find verified connectors before writing code.
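The connector pattern the REST API toolkit scaffolds can be sketched as a generator that walks pages until the source is exhausted. This is a minimal illustration, not the toolkit's generated code: `fake_fetch` stands in for a real HTTP client, and real sources add auth, retries, and incremental cursors.

```python
# In-memory pages standing in for a paginated REST endpoint
PAGES = {
    1: {"items": [{"id": 1}, {"id": 2}], "next_page": 2},
    2: {"items": [{"id": 3}], "next_page": None},
}

def fake_fetch(page: int) -> dict:
    """Stand-in for an HTTP GET against a paginated endpoint."""
    return PAGES[page]

def iter_items():
    """Yield records page by page until the source reports no next page."""
    page = 1
    while page is not None:
        payload = fake_fetch(page)
        yield from payload["items"]
        page = payload["next_page"]

records = list(iter_items())
print(len(records))  # 3
```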
The dltHub AI Workbench is tested with Claude Code, Cursor, and Codex, and may work with other AI coding assistants. When getting started, we recommend working in accept-edits mode (Claude Code) or with --approval-mode (Codex) so you can review changes, and familiarizing yourself with the dltHub AI workflows.
Building data pipelines is iterative and covers two major phases — ingestion and transformations — each following the same inner loop:
- Build (local development)
- Run (production)
The outer loop connects the two phases: insights from the transformation and serving layer feed back into ingestion refinement. The workbench Build toolkits support the local development loop; the Run toolkits handle deployment and data apps.

The workbench gives your coding assistant toolkits that contain a structured, guided workflow for a specific phase. Instead of generating ad-hoc code, the assistant follows a defined sequence of steps from start to finish.
A Toolkit contains skills, commands, rules, and an MCP server — tied together by a workflow that tells the assistant which skill to run at each step and how to leverage the MCP.
All toolkits depend on init for shared rules, secrets handling, and the MCP server. When using the dlt ai CLI, init is installed automatically as a dependency. When using the Claude marketplace, install the init plugin separately.

| Component | What it is | When it runs |
|---|---|---|
| Skill | Step-by-step procedure the assistant follows | Triggered by user intent or explicitly with /skill-name |
| Command | A slash command for a specific action | User invokes with /toolkit:command |
| Rule | Always-on context (conventions, constraints) | Every session, automatically |
| Workflow | Ordered sequence of skills with a fixed entry point | Loaded as a rule — always active |
| MCP server | Exposes pipelines, tables, and secrets as tools | During a session, via MCP protocol |
| dltHub context | 9,700+ REST API source definitions with verified connectors and pipeline patterns | During source discovery, via search_dlthub_sources |
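Conceptually, a workflow is an ordered list of skills with a fixed entry point: the assistant runs each skill in turn and carries shared state forward. A minimal illustration (the skill names besides find-source are hypothetical):

```python
# Hypothetical ordering; a real toolkit defines this in its workflow rule
WORKFLOW = ["find-source", "scaffold-pipeline", "run-and-debug", "validate-data"]

def run_workflow(skills, state=None):
    """Run skills in order, threading a shared state dict through each step."""
    state = state or {}
    for skill in skills:
        state[skill] = "done"  # a real assistant would execute the skill here
    return state

result = run_workflow(WORKFLOW)
print(list(result))  # skills complete in workflow order
```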
Two MCP servers give the agent structured context throughout the workflow to avoid the need for manual copy-pasting.
dlt-workspace-mcp (local, installed by dlt ai init) exposes: data inspection tools (list_tables, preview_table, execute_sql_query, get_row_counts, display_schema, get_local_pipeline_state), secrets tools (secrets_view_redacted, secrets_update_fragment), and toolkit discovery (list_toolkits, toolkit_info).
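MCP tools are invoked as JSON-RPC `tools/call` requests, a shape defined by the MCP specification. A sketch of the message an assistant might send for `preview_table` (the argument names are assumptions, not the server's documented schema):

```python
import json

# JSON-RPC envelope per the MCP spec; the arguments dict is an assumption
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "preview_table",
        "arguments": {"table_name": "users", "limit": 10},
    },
}
wire = json.dumps(request)
print(json.loads(wire)["params"]["name"])  # preview_table
```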
dltHub context (remote) provides search_dlthub_sources — used by the find-source skill to search 9,700+ REST API source definitions and return verified connectors with reference links before writing code.