From the jeremylongshore/claude-code-plugins-plus-skills repository
Configures monitoring for data pipelines, covering ETL, transformations, orchestration, and streaming processing. Generates production-ready code, configurations, and best-practice guidance.
Install with:

`npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin langchain-py-pack`

This skill is limited to using the following tools:
This skill provides automated assistance for pipeline monitoring setup tasks within the Data Pipelines domain.
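To illustrate the kind of monitoring the skill sets up, here is a minimal sketch of a pipeline health check in plain Python: alert when the last successful run is too stale or the output row count drops below an expected floor. The function name, thresholds, and alert wording are assumptions for illustration, not part of the skill's actual API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical health check of the kind a monitoring setup would run
# after each pipeline execution; thresholds are illustrative defaults.
def check_pipeline_health(last_success, row_count,
                          max_staleness=timedelta(hours=6),
                          min_rows=1000):
    """Return a list of alert messages (an empty list means healthy)."""
    alerts = []
    now = datetime.now(timezone.utc)
    if now - last_success > max_staleness:
        alerts.append(f"stale: last success at {last_success.isoformat()}")
    if row_count < min_rows:
        alerts.append(f"low volume: {row_count} rows < {min_rows}")
    return alerts

# A healthy run: recent success, plenty of rows.
ok = check_pipeline_health(datetime.now(timezone.utc) - timedelta(hours=1), 5000)
# An unhealthy run: stale and under-volume, so two alerts fire.
bad = check_pipeline_health(datetime.now(timezone.utc) - timedelta(hours=12), 10)
```

In practice these checks would be wired into the orchestrator (for example as an Airflow callback or a scheduled task) and routed to an alerting channel.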
- Designs scalable, reliable data pipelines for batch and streaming processing using Airflow, Prefect, dbt, Spark, Delta Lake, and Great Expectations. Guides from ingestion to monitoring.
- Builds ETL/ELT data pipelines with extraction, transformation, loading, error handling, scheduling, and monitoring. Useful for requests such as "build ETL", "data pipeline", "move data from X to Y", or "sync data".
- Designs scalable data pipelines for batch/streaming processing with ETL/ELT/Lambda architectures, Airflow/Prefect orchestration, dbt/Spark transforms, Delta Lake storage, and Great Expectations quality checks.
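The extract-transform-load flow described above can be sketched as a minimal in-memory pipeline with per-record error handling. The sources, sinks, and record shape here are assumptions for illustration; a real pipeline would swap them for database or API connectors and delegate scheduling to an orchestrator such as Airflow or Prefect.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def extract(source: str) -> list:
    """Parse raw JSON records from an assumed string source."""
    return json.loads(source)

def transform(records: list) -> list:
    """Normalize records, skipping malformed ones instead of failing the run."""
    out = []
    for r in records:
        try:
            out.append({"id": int(r["id"]), "amount": round(float(r["amount"]), 2)})
        except (KeyError, ValueError) as e:
            # In production, bad records would go to a dead-letter store.
            log.warning("skipping bad record %r: %s", r, e)
    return out

def load(records: list, sink: list) -> int:
    """Append to an assumed in-memory sink; return the number loaded."""
    sink.extend(records)
    return len(records)

raw = '[{"id": "1", "amount": "9.99"}, {"id": "x", "amount": "??"}]'
sink = []
loaded = load(transform(extract(raw)), sink)
```

The second record fails integer conversion and is logged and skipped, so only one row reaches the sink.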
This skill activates automatically when you request pipeline monitoring setup.

Example: Basic usage
Request: "Help me with pipeline monitoring setup"
Result: Step-by-step guidance and generated configurations
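As an illustration of the kind of configuration such a request might generate, here is a hypothetical monitoring config for an Airflow-orchestrated pipeline, expressed as a Python dict. Every key name and value here is an assumption for illustration, not the skill's actual schema.

```python
# Hypothetical generated monitoring configuration; field names are
# illustrative assumptions, not the skill's real output schema.
monitoring_config = {
    "pipeline": "orders_daily",
    "schedule": "0 2 * * *",      # cron: daily at 02:00 UTC
    "sla_minutes": 90,            # alert if a run exceeds this duration
    "freshness_hours": 6,         # alert if output data is older than this
    "alerts": {
        "channel": "slack",
        "on": ["failure", "sla_miss", "freshness_breach"],
    },
}
```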
| Error | Cause | Solution |
|---|---|---|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |
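The "Configuration invalid" row above can be guarded against with a simple pre-deployment check for required fields. The required-field names below are illustrative assumptions, not a documented schema.

```python
# Sketch of a required-fields check for a monitoring config, matching
# the "Configuration invalid" error in the table; field names assumed.
REQUIRED_FIELDS = ("pipeline", "schedule", "alerts")

def validate_config(config: dict) -> list:
    """Return the list of missing required fields (empty means valid)."""
    return [f for f in REQUIRED_FIELDS if f not in config]

# An incomplete config surfaces exactly which fields to add.
missing = validate_config({"pipeline": "orders_daily"})
```

Running this before deployment turns a vague "Configuration invalid" failure into an actionable list of missing parameters.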
Part of the Data Pipelines skill category. Tags: etl, airflow, spark, streaming, data-engineering