`npx claudepluginhub yonatangross/orchestkit --plugin ork`
Patterns for background task processing with Celery, ARQ, and Redis. Covers task queues, canvas workflows, scheduling, retry strategies, rate limiting, and production monitoring. Each category has individual rule files in `references/` loaded on-demand.
Implements Python background jobs with task queues (Celery, RQ), workers, idempotency, and state machines for async tasks like emails, reports, and media processing.
Provides Celery expertise for distributed task queues: async processing, workflows (chains/groups/chords), Redis/RabbitMQ brokers, Celery Beat scheduling, Flower monitoring, retries, security, and optimization.
Implements durable Temporal workflows using Python SDK: sagas, distributed transactions, async/await, testing strategies, and production deployment.
| Category | Rules | Impact | When to Use |
|---|---|---|---|
| Configuration | celery-config | HIGH | Celery app setup, broker, serialization, worker tuning |
| Task Routing | task-routing | HIGH | Priority queues, multi-queue workers, dynamic routing |
| Canvas Workflows | canvas-workflows | HIGH | Chain, group, chord, nested workflows |
| Retry Strategies | retry-strategies | HIGH | Exponential backoff, idempotency, dead letter queues |
| Scheduling | scheduled-tasks | MEDIUM | Celery Beat, crontab, database-backed schedules |
| Monitoring | monitoring-health | MEDIUM | Flower, custom events, health checks, metrics |
| Result Backends | result-backends | MEDIUM | Redis results, custom states, progress tracking |
| ARQ Patterns | arq-patterns | MEDIUM | Async Redis Queue for FastAPI, lightweight jobs |
| Temporal Workflows | temporal-workflows | HIGH | Durable workflow definitions, sagas, signals, queries |
| Temporal Activities | temporal-activities | HIGH | Activity patterns, workers, heartbeats, testing |
Total: 10 rules across 10 categories
```python
from celery import Celery

app = Celery("payments", broker="redis://localhost:6379/0")

@app.task(bind=True, max_retries=3, default_retry_delay=60)
def process_payment(self, order_id: str):
    try:
        return gateway.charge(order_id)  # `gateway` / `TransientError`: your payment client's API
    except TransientError as exc:
        # Exponential backoff: 60s, 120s, 240s between attempts
        raise self.retry(exc=exc, countdown=2 ** self.request.retries * 60)
```
Load more examples: Read("${CLAUDE_SKILL_DIR}/references/quick-start-examples.md") for Celery retry task and ARQ/FastAPI integration patterns.
Production Celery app configuration with secure defaults and worker tuning.
- `task_serializer="json"` for safety
- `task_acks_late=True` to prevent task loss on crash
- `task_time_limit` (hard) and `task_soft_time_limit` (soft)
- `worker_prefetch_multiplier=1`
- `task_reject_on_worker_lost=True`

| Decision | Recommendation |
|---|---|
| Serializer | JSON (never pickle) |
| Ack mode | Late ack (task_acks_late=True) |
| Prefetch | 1 for fair, 4-8 for throughput |
| Time limit | soft < hard (e.g., 540/600) |
| Timezone | UTC always |
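The table's recommendations can be collected into a settings mapping and applied with `app.conf.update()`. A sketch with illustrative values (the time limits and broker URL are examples, not requirements):

```python
# Production-leaning Celery settings mirroring the decision table above.
CELERY_CONFIG = {
    "task_serializer": "json",        # never pickle
    "accept_content": ["json"],
    "task_acks_late": True,           # re-deliver if a worker dies mid-task
    "task_reject_on_worker_lost": True,
    "worker_prefetch_multiplier": 1,  # fair dispatch; raise to 4-8 for throughput
    "task_soft_time_limit": 540,      # soft limit must be below the hard limit
    "task_time_limit": 600,
    "timezone": "UTC",
    "enable_utc": True,
}

# Applied as: app.conf.update(CELERY_CONFIG)
```

The soft limit raises `SoftTimeLimitExceeded` inside the task so it can clean up; the hard limit kills the worker process.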
Priority queue configuration with multi-queue workers and dynamic routing.
- `queue_order_strategy: "priority"` and 0-9 levels

| Decision | Recommendation |
|---|---|
| Queue count | 3-5 (critical/high/default/low/bulk) |
| Priority levels | 0-9 with Redis x-max-priority |
| Worker assignment | Dedicated workers per queue |
| Prefetch | 1 for critical, 4-8 for bulk |
| Routing | Router class for 5+ routing rules |
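A minimal routing sketch following the table above — queue names and task-name patterns are illustrative, not prescribed:

```python
# Route tasks by name pattern to dedicated queues.
task_routes = {
    "payments.*": {"queue": "critical", "priority": 9},
    "emails.*":   {"queue": "default"},
    "reports.*":  {"queue": "bulk", "priority": 0},
}

# Redis broker: consume higher-priority messages first.
broker_transport_options = {"queue_order_strategy": "priority"}
```

Applied via `app.conf.update(...)`; dedicated workers are then started per queue, e.g. `celery -A app worker -Q critical --prefetch-multiplier=1`.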
Celery canvas primitives for sequential, parallel, and fan-in/fan-out workflows.
- Immutable signatures (`si()`) for steps that ignore input

| Decision | Recommendation |
|---|---|
| Sequential | Chain with s() |
| Parallel | Group for independent tasks |
| Fan-in | Chord (all must succeed for callback) |
| Ignore input | Use si() immutable signature |
| Error in chain | Reject stops chain, retry continues |
| Partial failures | Return error dict in chord tasks |
Retry patterns with exponential backoff, idempotency, and dead letter queues.
- `retry_backoff=True` and `retry_backoff_max`
- `retry_jitter=True` to prevent thundering herd

| Decision | Recommendation |
|---|---|
| Retry delay | Exponential backoff with jitter |
| Max retries | 3-5 for transient, 0 for permanent |
| Idempotency | Redis key with TTL |
| Failed tasks | DLQ for manual review |
| Singleton | Redis lock with TTL |
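The backoff-with-jitter recommendation works out to roughly the following math — a plain-Python approximation of what `retry_backoff` / `retry_jitter` compute, useful when you need the same schedule outside Celery:

```python
import random

def retry_delay(retries: int, backoff: int = 60, backoff_max: int = 600,
                jitter: bool = True) -> float:
    """Approximate exponential backoff: 60s, 120s, 240s... capped at backoff_max."""
    delay = min(backoff * (2 ** retries), backoff_max)
    if jitter:
        # Full jitter spreads retries out to avoid a thundering herd
        delay = random.uniform(0, delay)
    return delay
```

With jitter enabled, each retry picks a random delay between 0 and the exponential cap, so simultaneous failures don't all retry at the same instant.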
Celery Beat periodic task configuration with crontab, database-backed schedules, and overlap prevention.
- `django-celery-beat` for dynamic schedules

| Decision | Recommendation |
|---|---|
| Schedule type | Crontab for time-based, interval for frequency |
| Dynamic | Database scheduler (django-celery-beat) |
| Overlap | Redis lock with timeout |
| Beat process | Separate process (not embedded) |
| Timezone | UTC always |
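A beat schedule following the table — task names are hypothetical, and the fragment assumes Celery is installed:

```python
from celery.schedules import crontab

# Illustrative schedule; applied via app.conf.beat_schedule = beat_schedule
beat_schedule = {
    "nightly-report": {
        "task": "reports.generate_daily",
        "schedule": crontab(hour=2, minute=0),  # 02:00 UTC every day
    },
    "queue-heartbeat": {
        "task": "ops.heartbeat",
        "schedule": 30.0,                       # interval: every 30 seconds
    },
}
timezone = "UTC"
```

Run beat as its own process (`celery -A app beat`), never embedded in a worker, so a worker restart can't drop the schedule.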
Production monitoring with Flower, custom signals, health checks, and Prometheus metrics.
- Celery signals (`task_prerun`, `task_postrun`, `task_failure`) for metrics

| Decision | Recommendation |
|---|---|
| Dashboard | Flower with persistent storage |
| Metrics | Prometheus via celery signals |
| Health | Broker + worker + queue depth |
| Alerting | Signal on task_failure |
| Autoscale | Queue depth > threshold |
Task result storage, custom states, and progress tracking patterns.
- `update_state()` for real-time progress reporting

| Decision | Recommendation |
|---|---|
| Status storage | Redis result backend |
| Large results | S3 or database (never Redis) |
| Progress | Custom states with update_state() |
| Result query | AsyncResult with state checks |
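The `PROGRESS` custom state usually carries a small meta dict. A sketch of the payload shape — the field names here are conventional, not mandated by Celery:

```python
def progress_meta(current: int, total: int) -> dict:
    """Payload for self.update_state(state="PROGRESS", meta=...)."""
    return {
        "current": current,
        "total": total,
        "percent": round(100 * current / total, 1),
    }

# Inside a bound task (bind=True):
#   for i, item in enumerate(items, 1):
#       handle(item)
#       self.update_state(state="PROGRESS", meta=progress_meta(i, len(items)))
```

Clients then poll `AsyncResult(task_id)` and read `result.info` while `result.state == "PROGRESS"`.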
Lightweight async Redis Queue for FastAPI and simple background tasks.
- `arq` for FastAPI integration
- startup/shutdown hooks for resource management
- `enqueue_job()` to submit jobs
- `Job.status()` and `Job.result()` to track them
- `_defer_by=timedelta()` for deferred execution

| Decision | Recommendation |
|---|---|
| Simple async | ARQ (native async) |
| Complex workflows | Celery (chains, chords) |
| In-process quick | FastAPI BackgroundTasks |
| LLM workflows | LangGraph (not Celery) |
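An ARQ worker sketch, assuming `arq` is installed; the task name and the resources opened in the hooks are placeholders:

```python
from datetime import timedelta

from arq import create_pool
from arq.connections import RedisSettings

async def send_report(ctx, user_id: int):
    ...  # ctx carries shared resources created in on_startup

async def on_startup(ctx):
    ctx["http"] = ...  # e.g. open an httpx.AsyncClient here

async def on_shutdown(ctx):
    ...  # close whatever on_startup opened

class WorkerSettings:
    functions = [send_report]
    on_startup = on_startup
    on_shutdown = on_shutdown
    redis_settings = RedisSettings()

# Enqueue from FastAPI:
#   redis = await create_pool(RedisSettings())
#   job = await redis.enqueue_job("send_report", 42, _defer_by=timedelta(minutes=5))
#   status = await job.status(); result = await job.result()
```

The worker is started with `arq module.WorkerSettings`; unlike Celery, everything is native `async`, so FastAPI code and tasks can share async clients.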
Load: Read("${CLAUDE_SKILL_DIR}/references/quick-start-examples.md") for the full tool comparison table (ARQ, Celery, RQ, Dramatiq, FastAPI BackgroundTasks).
Load details: Read("${CLAUDE_SKILL_DIR}/references/anti-patterns.md") for full list.
Key rules: never run long tasks in request handlers, never block on results inside tasks, never store large results in Redis, always use idempotency for retried tasks.
Durable execution engine for reliable distributed applications with Temporal.io.
- `@workflow.defn` and deterministic code
- `workflow.wait_condition()` for human-in-the-loop
- `asyncio.gather` inside workflows

| Decision | Recommendation |
|---|---|
| Workflow ID | Business-meaningful, idempotent |
| Determinism | Use workflow.random(), workflow.now() |
| I/O | Always via activities, never directly |
Activity and worker patterns for Temporal.io I/O operations.
- `@activity.defn` for all I/O
- `ApplicationError(non_retryable=True)` for business errors
- `WorkflowEnvironment.start_local()` for testing

| Decision | Recommendation |
|---|---|
| Activity timeout | start_to_close for most cases |
| Error handling | Non-retryable for business errors |
| Testing | WorkflowEnvironment for integration tests |
- `ork:python-backend` - FastAPI, asyncio, SQLAlchemy patterns
- `ork:langgraph` - LangGraph workflow patterns (use for LLM workflows, not Celery)
- `ork:distributed-systems` - Resilience patterns, circuit breakers
- `ork:monitoring-observability` - Metrics and alerting

Load details: Read("${CLAUDE_SKILL_DIR}/references/capability-details.md") for full keyword index and problem-solution mapping across all 8 capabilities.