From jeremylongshore/claude-code-plugins-plus-skills
Generates Flink jobs for data pipelines, covering ETL, streaming data processing, and workflow orchestration, with step-by-step guidance and production-ready code.
Install: `npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin langchain-py-pack`
This skill provides automated assistance for flink job creator tasks within the Data Pipelines domain.
Related skills:
- Generates code, configurations, and guidance for Spark job creation in data pipelines, covering ETL, transformations, orchestration, and streaming processing.
- Develops Lakeflow Spark Declarative Pipelines (formerly Delta Live Tables) on Databricks for batch or streaming data pipelines using Python or SQL. Invoke before implementation.
- Guides writing, packaging, and executing Apache Beam pipelines on GCP Dataflow, including Flex Templates for Java, Python, and Go projects.
This skill activates automatically when you ask for help creating or configuring Flink jobs.

Example: Basic Usage
Request: "Help me with flink job creator"
Result: Provides step-by-step guidance and generates appropriate configurations
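As a sketch of the kind of transformation step a generated job might contain, here is a plain-Python map function; the surrounding PyFlink wiring (source, `ds.map(...)`, sink) is an assumption and appears only in comments:

```python
import json

def clean_event(raw: str) -> str:
    """Normalize one raw JSON event for a downstream sink.

    In a PyFlink DataStream job this might be applied as
    `ds.map(clean_event)` (hypothetical pipeline step).
    """
    event = json.loads(raw)
    # Keep only the fields the (assumed) sink schema expects,
    # supplying defaults for anything missing.
    return json.dumps({
        "user_id": event.get("user_id"),
        "action": event.get("action", "unknown"),
        "ts": int(event.get("ts", 0)),
    })
```

Keeping the transformation as a pure function makes it easy to unit-test outside the Flink runtime before wiring it into a pipeline.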
| Error | Cause | Solution |
|---|---|---|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |
Part of the Data Pipelines skill category. Tags: etl, airflow, spark, streaming, data-engineering