From jeremylongshore-claude-code-plugins-plus-skills
Provides step-by-step guidance, code, and configurations for data partitioner operations in data pipelines covering ETL, transformations, orchestration, and streaming. Activates on 'data partitioner' mentions.
Install:

npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin langchain-py-pack

This skill is limited to using the following tools:
This skill provides automated assistance for data partitioner tasks within the Data Pipelines domain.
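The core operation such a skill automates — routing records to partitions by key — can be sketched in plain Python. The `partition_by_key` helper and the bucket count below are illustrative assumptions, not part of the skill itself:

```python
from collections import defaultdict
import hashlib

def stable_hash(key: str) -> int:
    # Use a stable digest so partition assignment is reproducible across runs
    # (Python's built-in hash() is salted per process).
    return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

def partition_by_key(records, key_fn, num_partitions=4):
    """Group records into num_partitions buckets by hashing a key column."""
    partitions = defaultdict(list)
    for record in records:
        bucket = stable_hash(key_fn(record)) % num_partitions
        partitions[bucket].append(record)
    return partitions

events = [
    {"user": "alice", "amount": 10},
    {"user": "bob", "amount": 7},
    {"user": "alice", "amount": 3},
]
parts = partition_by_key(events, key_fn=lambda r: r["user"], num_partitions=4)
# All records for a given user land in the same partition.
```

Hash partitioning like this is what keeps related records co-located, so downstream per-key aggregations avoid cross-partition shuffles.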
Implements PySpark jobs, DataFrame/RDD pipelines, Spark SQL queries, structured streaming, performance tuning, partitioning, and cluster configuration for big data ETL.
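On the tuning side, a common heuristic is to derive the partition count from input size and a target bytes-per-partition, mirroring the behavior of Spark's `spark.sql.files.maxPartitionBytes` setting (default 128 MB). The helper below is an illustrative sketch, not Spark's actual implementation:

```python
import math

def suggested_partition_count(total_bytes: int,
                              target_partition_bytes: int = 128 * 1024 * 1024,
                              min_partitions: int = 1) -> int:
    """Estimate how many partitions keep each one near the target size.

    128 MB matches Spark's default for spark.sql.files.maxPartitionBytes.
    """
    if total_bytes <= 0:
        return min_partitions
    return max(min_partitions, math.ceil(total_bytes / target_partition_bytes))

# A 10 GiB input at the 128 MiB default suggests 80 partitions.
n = suggested_partition_count(10 * 1024 ** 3)
```

Keeping partitions near this size bounds per-task memory while still giving the cluster enough tasks for parallelism.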
Builds scalable data pipelines, modern data warehouses, and real-time streaming architectures using Spark, dbt, Airflow, Kafka, and cloud platforms such as Snowflake and BigQuery.
Develop Lakeflow Spark Declarative Pipelines (formerly Delta Live Tables) on Databricks. Use when building batch or streaming data pipelines with Python or SQL. Invoke BEFORE starting implementation.
This skill activates automatically when you mention 'data partitioner'.
Example: Basic Usage

Request: "Help me with data partitioner"
Result: Provides step-by-step guidance and generates appropriate configurations
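As a concrete illustration, a generated configuration for a partitioning task might resemble the Spark settings below. The property names are standard Spark configuration keys; the values are placeholders to tune for your workload:

```properties
# Target size of each input partition when reading files (default 128 MB)
spark.sql.files.maxPartitionBytes=134217728
# Number of partitions used for shuffles in joins/aggregations (default 200)
spark.sql.shuffle.partitions=200
# Let adaptive query execution coalesce small shuffle partitions (Spark 3.x)
spark.sql.adaptive.enabled=true
spark.sql.adaptive.coalescePartitions.enabled=true
```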
| Error | Cause | Solution |
|---|---|---|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |
Part of the Data Pipelines skill category. Tags: etl, airflow, spark, streaming, data-engineering