From wicked-garden
This skill should be used when designing or reviewing data pipelines: ETL patterns, orchestration, and performance optimization for data workflows. Use when:

- "design a data pipeline"
- "review this ETL"
- "optimize data processing"
- "how should I orchestrate this"
- "pipeline architecture"
```
npx claudepluginhub mikeparcewski/wicked-garden --plugin wicked-garden
```

This skill uses the workspace's default tool permissions.
Design, review, and optimize data pipelines and ETL workflows.
```
/wicked-garden:data:pipeline design \
  --source "postgres://sales_db" \
  --target "s3://data-lake/sales" \
  --frequency daily
```
Generates: an architecture diagram, ETL logic, orchestration config, and a monitoring plan.
```
/wicked-garden:data:pipeline review path/to/pipeline/
```
Analyzes: code quality, error handling, performance, and maintainability.
**Batch**
Use when: regular scheduled loads, historical processing
Pattern: Extract → Transform → Validate → Load
Tools: Airflow, Dagster, Prefect
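The batch stages can be sketched as plain functions that an Airflow, Dagster, or Prefect task would wrap. This is a minimal, orchestrator-agnostic illustration; the function names and the toy sales rows are hypothetical, not part of the skill's output.

```python
def extract(rows):
    # Stand-in for a source read; real code would query the source system.
    return list(rows)

def transform(rows):
    # Normalize: uppercase the region, compute a line total.
    return [
        {"region": r["region"].upper(), "total": r["qty"] * r["price"]}
        for r in rows
    ]

def validate(rows):
    # Fail fast before load so bad data never reaches the target.
    assert all(r["total"] >= 0 for r in rows), "negative totals"
    return rows

def load(rows, target):
    # Stand-in for a warehouse/lake write.
    target.extend(rows)
    return len(rows)

target = []
raw = [{"region": "emea", "qty": 3, "price": 10.0}]
loaded = load(validate(transform(extract(raw))), target)
```

Keeping each stage a pure function makes it easy to test stages in isolation and to swap the orchestrator later.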
**Streaming**
Use when: real-time processing, event-driven workloads
Pattern: Consume → Transform → Sink
Tools: Kafka, Flink, Spark Streaming
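The consume → transform → sink loop can be sketched with an in-memory queue standing in for a broker; a Kafka or Flink job follows the same shape, with the broker client replacing `Queue`. All names here are illustrative assumptions.

```python
from queue import Queue

def run_stream(source: Queue, sink: list, stop=None):
    # Consume events until the sentinel, transform each, write to the sink.
    while True:
        event = source.get()
        if event is stop:  # sentinel ends the loop in this sketch
            break
        # Transform: convert amounts to integer cents.
        sink.append({"user": event["user"], "amount_cents": int(event["amount"] * 100)})

source = Queue()
for e in ({"user": "a", "amount": 1.5}, {"user": "b", "amount": 2.0}):
    source.put(e)
source.put(None)  # sentinel
sink = []
run_stream(source, sink)
```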
**Incremental**
Use when: large datasets where only changes need processing
Pattern: watermark tracking + merge/upsert
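A minimal sketch of watermark tracking plus upsert, assuming a monotonically increasing `updated_at` on source rows and a dict standing in for the target table (function and field names are hypothetical):

```python
def incremental_load(source_rows, target, watermark):
    # Process only rows newer than the watermark, then upsert by key.
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    for r in new_rows:
        target[r["id"]] = r  # merge/upsert keyed by id; last write wins
    # Advance the watermark only as far as data actually seen.
    return max((r["updated_at"] for r in new_rows), default=watermark)

target = {}
src = [
    {"id": 1, "updated_at": 5, "v": "old"},
    {"id": 1, "updated_at": 9, "v": "new"},
    {"id": 2, "updated_at": 3, "v": "x"},  # at or below watermark: skipped
]
wm = incremental_load(src, target, watermark=4)
```

Persisting the returned watermark between runs is what makes reruns cheap and safe.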
| Issue | Symptoms | Solution |
|---|---|---|
| Fails halfway | Partial data, inconsistent state | Staging + commit pattern |
| Duplicates | Same data loaded more than once | Watermarks + idempotent loads |
| Slow processing | Misses its SLA | Profile and optimize bottlenecks |
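The first two fixes in the table above can be combined in one shape: write to a staging area first, then commit atomically, and track batch IDs so reruns are no-ops. A minimal sketch with hypothetical names and in-memory stand-ins for the staging area and target:

```python
def load_with_staging(batch, batch_id, target, committed):
    # Stage first, then commit; reruns of a committed batch load nothing.
    if batch_id in committed:
        return False  # idempotent: duplicate run is a no-op
    staging = list(batch)   # a halfway failure here never touches target
    target.extend(staging)  # the "commit": publish staging into target
    committed.add(batch_id)
    return True

target, committed = [], set()
first = load_with_staging([1, 2], "b1", target, committed)
rerun = load_with_staging([1, 2], "b1", target, committed)
```

In a real warehouse the "commit" is typically a transactional swap or `MERGE`, and the committed-batch set lives in a metadata table.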
```
/wicked-garden:search:code "dag|pipeline"
metadata.event_type="task"
```

Pipeline engineering can leverage available integrations by capability:
| Capability | Discovery Patterns | Provides |
|---|---|---|
| Warehouses | snowflake, databricks, bigquery | Query execution, schema access |
| ETL | airbyte, fivetran, dbt | Pipeline status, model metadata |
| Observability | monte-carlo, datadog | Data quality metrics |
Discover available integrations via capability detection. If none are available, fall back to wicked-garden:data:numbers for local file analysis.
For detailed patterns: