From jeremylongshore-claude-code-plugins-plus-skills
Provides step-by-step guidance, code, and configurations for Kafka stream processors in data pipelines, covering both ETL and streaming workloads. Activates on Kafka stream processor mentions.
Install with `npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin langchain-py-pack`.
This skill provides automated assistance for Kafka stream processor tasks within the Data Pipelines domain.
Related skills in this category:

- Guides real-time data processing pipelines, Kafka/Flink framework selection, and event-driven architectures. Covers batch vs streaming, watermarks, windows, and state management.
- Manages Kafka producer and consumer operations with step-by-step guidance, production-ready code, and configurations for Node.js, Python, or Go backends. Activates on Kafka mentions.
- Guides production Spark Structured Streaming pipelines on Databricks: Kafka ingestion, triggers, watermarks, checkpoints, stream joins, multi-sink writes, merges, and performance tuning.
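The windowing concepts the related skills mention (tumbling windows, per-key state) can be sketched in plain Python. This is an illustrative toy, not the API of any of these skills; real pipelines would delegate windowing to Kafka Streams, Flink, or Spark Structured Streaming. The function name and event shape are assumptions for the example:

```python
from collections import defaultdict


def tumbling_window_counts(events, window_ms=60_000):
    """Count events per key within fixed-size (tumbling) time windows.

    `events` is an iterable of (timestamp_ms, key) pairs. Each event is
    assigned to exactly one window based on its timestamp; the window is
    identified by its start time. Returns {(window_start_ms, key): count}.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Align the timestamp down to its window boundary.
        window_start = ts - (ts % window_ms)
        counts[(window_start, key)] += 1
    return dict(counts)
```

For example, with a 60-second window, events at 10 ms and 59,999 ms land in the window starting at 0, while an event at exactly 60,000 ms opens the next window.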
This skill activates automatically when you mention Kafka stream processors in your request.
Example: Basic Usage

- Request: "Help me with kafka stream processor"
- Result: Provides step-by-step guidance and generates appropriate configurations
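A minimal consume-transform-produce loop, the core shape of a Kafka stream processor, might look like the sketch below. It assumes the `kafka-python` client (`pip install kafka-python`); the topic names, group id, and the `amount_cents` enrichment field are all illustrative, not part of the skill:

```python
import json


def transform(raw: bytes) -> bytes:
    """Parse a JSON event, add a derived field, and re-serialize.

    The `amount_cents` field is a hypothetical enrichment for the example.
    """
    event = json.loads(raw)
    event["amount_cents"] = int(round(event.get("amount", 0) * 100))
    return json.dumps(event).encode("utf-8")


def run(bootstrap="localhost:9092", in_topic="orders", out_topic="orders-enriched"):
    """Consume from one topic, transform each record, produce to another.

    Requires a reachable Kafka broker; call this against a live cluster.
    """
    from kafka import KafkaConsumer, KafkaProducer  # assumed client library

    consumer = KafkaConsumer(
        in_topic,
        bootstrap_servers=bootstrap,
        group_id="order-enricher",   # consumer group for scaling and offset tracking
        enable_auto_commit=False,    # commit only after the output is produced
        auto_offset_reset="earliest",
    )
    producer = KafkaProducer(bootstrap_servers=bootstrap, acks="all")

    for msg in consumer:
        producer.send(out_topic, transform(msg.value))
        producer.flush()             # ensure delivery before committing the offset
        consumer.commit()            # gives at-least-once processing semantics
```

Committing offsets only after a successful produce trades throughput for at-least-once delivery; auto-commit is simpler but can drop records on crashes.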
| Error | Cause | Solution |
|---|---|---|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |
Part of the Data Pipelines skill category. Tags: etl, airflow, spark, streaming, data-engineering