From the `jeremylongshore/claude-code-plugins-plus-skills` plugin collection.
Configures incremental load setups for data pipelines with step-by-step guidance, production-ready code, and configurations for ETL, transformations, orchestration, and streaming.
Install:

```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin langchain-py-pack
```

This skill is limited to using the following tools:
This skill provides automated assistance for incremental load setup tasks within the Data Pipelines domain.
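To make the pattern concrete, here is a minimal sketch of the watermark-based incremental load that such a setup typically revolves around. All names (`updated_at`, `last_loaded_at`, the in-memory `target` dict) are illustrative assumptions, not the skill's actual output:

```python
from datetime import datetime, timezone

def incremental_load(source_rows, target, state):
    """Copy only rows changed since the last recorded watermark.

    Illustrative sketch: `source_rows` stands in for a source query,
    `target` for a destination table, `state` for persisted pipeline state.
    """
    watermark = state.get("last_loaded_at",
                          datetime.min.replace(tzinfo=timezone.utc))
    # Select only rows newer than the watermark (the "incremental" part).
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    for row in new_rows:
        target[row["id"]] = row  # upsert by primary key
    # Advance the watermark so the next run skips already-loaded rows.
    if new_rows:
        state["last_loaded_at"] = max(r["updated_at"] for r in new_rows)
    return len(new_rows)
```

Run twice against the same source: the first run loads everything, the second loads nothing, because the watermark has advanced.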
Designs data pipelines and ETL processes covering extraction, transformation, loading, data quality checks, orchestration, and patterns for batch, streaming, CDC, and ELT. Useful when building pipelines, designing data flows, or syncing and moving data between systems.
Builds ETL/ELT data pipelines with extraction, transformation, loading, error handling, scheduling, and monitoring. Useful for 'build ETL', 'data pipeline', 'move data from X to Y', or 'sync data'.
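As a rough illustration of the extract-transform-load shape (with error handling) that such a pipeline takes, here is a self-contained sketch; every name in it is hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def extract(records):
    # Source stand-in: a real pipeline would query an API or database here.
    yield from records

def transform(raw):
    # Normalize one record; raise on malformed input so it can be quarantined.
    return {"email": raw["email"].strip().lower(),
            "amount": float(raw["amount"])}

def load(rows, sink):
    # Destination stand-in: a real pipeline would write to a table here.
    sink.extend(rows)

def run_pipeline(records, sink):
    good, bad = [], []
    for raw in extract(records):
        try:
            good.append(transform(raw))
        except (KeyError, ValueError) as exc:
            # Quarantine bad records instead of failing the whole run
            # (a stand-in for a dead-letter queue).
            log.warning("quarantined record %r: %s", raw, exc)
            bad.append(raw)
    load(good, sink)
    return len(good), len(bad)
```

A record missing a required field is logged and quarantined while the rest of the batch still loads.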
Creates, configures, and updates Databricks Lakeflow Spark Declarative Pipelines (SDP/LDP) using serverless compute. Handles data ingestion via streaming tables, materialized views, CDC, SCD Type 2, and Auto Loader.
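For the Lakeflow/DLT case, the pipeline definition is declarative. Below is a hedged configuration sketch using the public `dlt` Python API; the landing path, table names, and the `id`/`event_ts` columns are assumptions, and the fragment only runs inside a Databricks pipeline (where `spark` is provided):

```python
import dlt
from pyspark.sql.functions import col

# Streaming ingestion with Auto Loader (path and format are assumptions).
@dlt.table(comment="Raw events ingested incrementally via Auto Loader")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/default/landing/events")
    )

# SCD Type 2 history via CDC, keyed on an assumed `id` column and
# ordered by an assumed `event_ts` column.
dlt.create_streaming_table("events_scd2")
dlt.apply_changes(
    target="events_scd2",
    source="raw_events",
    keys=["id"],
    sequence_by=col("event_ts"),
    stored_as_scd_type=2,
)
```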
This skill activates automatically when you:
**Example: Basic Usage**

- Request: "Help me with incremental load setup"
- Result: Provides step-by-step guidance and generates appropriate configurations
| Error | Cause | Solution |
|---|---|---|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |
Part of the Data Pipelines skill category. Tags: etl, airflow, spark, streaming, data-engineering