From rest-api-pipeline
Adjusts dlt pipelines for production by removing `.add_limit()`, verifying pagination, configuring incremental loading, and expanding date ranges. Use when loading more data, fixing pagination, or preparing for full runs.
```
npx claudepluginhub dlt-hub/dlthub-ai-workbench --plugin rest-api-pipeline
```

This skill uses the workspace's default tool permissions.
Parse `$ARGUMENTS`:

- `pipeline-name` (optional): the dlt pipeline name. If omitted, infer from session context. If ambiguous, ask the user and stop.
- `hints` (optional, after `--`): specific adjustments to make.

`.add_limit()` requires verified pagination. `.add_limit(1)` during development masks pagination problems — only one page is fetched, so a broken paginator never loops. Removing it without explicit pagination causes stuck pipelines.
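A minimal, dlt-free sketch of why `.add_limit(1)` masks a broken paginator. The page shape, cursor values, and function names here are hypothetical illustrations, not dlt's actual API:

```python
from itertools import islice

def fetch_pages(get_next_cursor):
    """Simulate a paginated API: yield pages until the cursor runs out."""
    cursor = "start"
    while cursor is not None:
        # The API returns two pages: "start" -> "last" -> done.
        page = {"items": [1, 2, 3],
                "next_page": None if cursor == "last" else "last"}
        yield page["items"]
        cursor = get_next_cursor(page)  # the paginator decides how to advance

# Correct paginator: follows the cursor the API actually returns.
good = lambda page: page["next_page"]

# Broken paginator (e.g. wrong cursor path): always "finds" a cursor.
broken = lambda page: "start"

# With a 1-page limit both behave identically -- the bug is invisible.
assert list(islice(fetch_pages(good), 1)) == list(islice(fetch_pages(broken), 1))

# Without the limit, the good paginator stops after 2 pages...
assert len(list(fetch_pages(good))) == 2

# ...while the broken one never stops; cap it to demonstrate the loop.
assert len(list(islice(fetch_pages(broken), 1000))) == 1000
```

This is the failure mode the skill guards against: under `.add_limit(1)` only one iteration of the pagination loop ever runs, so a paginator that never terminates looks healthy.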
Before removing `.add_limit()`:

- Verify every endpoint has an explicit `"paginator"` config. If any rely on auto-detection, add one first.
- Run debug-pipeline with INFO logging for the first unlimited run to watch pagination progress and catch loops early.

Example: a pipeline worked with `.add_limit(1)`. After removing the limit, it hung forever — dlt's auto-detected paginator looped. Fix: added an explicit `"paginator": {"type": "cursor", "cursor_path": "next_page", "cursor_param": "page"}`. The full load then completed in 5 seconds.
Then use explore-data to chart and analyze the loaded data.