This skill should be used when the user asks about "worker concurrency", "task queue", "worker performance", "worker scaling", "MaxConcurrentActivityExecutionSize", "MaxConcurrentWorkflowTaskExecutionSize", "poller configuration", or needs guidance on optimizing Temporal worker performance.
Guidance for configuring and optimizing Temporal worker performance.
Key configuration parameters:
| Option | Default | Purpose |
|---|---|---|
| MaxConcurrentActivityExecutionSize | 1000 | Max parallel activities |
| MaxConcurrentWorkflowTaskExecutionSize | 1000 | Max parallel workflow tasks |
| MaxConcurrentLocalActivityExecutionSize | 1000 | Max parallel local activities |
| WorkerActivitiesPerSecond | 100000 | Activity rate limit |
| MaxConcurrentActivityTaskPollers | 2 | Activity poll goroutines |
| MaxConcurrentWorkflowTaskPollers | 2 | Workflow poll goroutines |
```go
import (
	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func createWorker(c client.Client) worker.Worker {
	options := worker.Options{
		// Concurrency limits
		MaxConcurrentActivityExecutionSize:      100,
		MaxConcurrentWorkflowTaskExecutionSize:  100,
		MaxConcurrentLocalActivityExecutionSize: 100,

		// Rate limiting
		WorkerActivitiesPerSecond: 1000,

		// Pollers
		MaxConcurrentActivityTaskPollers: 5,
		MaxConcurrentWorkflowTaskPollers: 5,
	}
	return worker.New(c, "my-task-queue", options)
}
```
Factors to consider: available CPU cores, external service rate limits, database connection pool size, and memory per activity.

Calculation:

```
MaxConcurrentActivityExecutionSize = min(
    Available CPU cores × activities per core,
    External service rate limit,
    Database connection pool size,
    Available memory / memory per activity
)
```
Example configurations:
| Activity Type | Recommended Concurrency |
|---|---|
| CPU-bound | 2 × CPU cores |
| I/O-bound (fast) | 50-200 |
| I/O-bound (slow) | 10-50 |
| External API calls | API rate limit / workers |
Workflow tasks are typically fast (< 1s), so higher concurrency is usually safe:

```go
MaxConcurrentWorkflowTaskExecutionSize: 1000, // Default is good
```

Reduce this if workflows have expensive queries or local activities.
Pollers per task queue:

```go
// For high-throughput queues
MaxConcurrentActivityTaskPollers: 10,
MaxConcurrentWorkflowTaskPollers: 10,

// For low-throughput queues
MaxConcurrentActivityTaskPollers: 2,
MaxConcurrentWorkflowTaskPollers: 2,
```

Rule of thumb: more pollers help with latency-sensitive workloads but consume more connections.
Separate task queues for different activity types:

```go
// CPU-intensive activities
cpuWorker := worker.New(c, "cpu-intensive", worker.Options{
	MaxConcurrentActivityExecutionSize: 4, // Limited to CPU cores
})
cpuWorker.RegisterActivity(RenderVideo)
cpuWorker.RegisterActivity(ProcessImage)

// I/O-bound activities
ioWorker := worker.New(c, "io-bound", worker.Options{
	MaxConcurrentActivityExecutionSize: 100, // Higher concurrency
})
ioWorker.RegisterActivity(FetchData)
ioWorker.RegisterActivity(SendEmail)

// High priority - more capacity
highPriorityWorker := worker.New(c, "orders-high-priority", worker.Options{
	MaxConcurrentActivityExecutionSize: 50,
})

// Normal priority
normalWorker := worker.New(c, "orders-normal", worker.Options{
	MaxConcurrentActivityExecutionSize: 20,
})
```
Workflows prefer returning to the same worker (sticky execution). Configure the timeout:

```go
worker.Options{
	StickyScheduleToStartTimeout: 5 * time.Second, // How long to wait for the sticky worker
}
```
```go
// Limit concurrent executions to control memory
worker.Options{
	MaxConcurrentActivityExecutionSize:     50,  // Each activity uses memory
	MaxConcurrentWorkflowTaskExecutionSize: 100, // Workflow state in memory
}
```
Estimate memory as roughly: peak usage ≈ concurrent activities × memory per activity, plus headroom for cached workflow state.
```go
// For CPU-bound activities
runtime.GOMAXPROCS(4) // Limit the Go runtime to 4 cores

worker.Options{
	MaxConcurrentActivityExecutionSize: 4, // Match GOMAXPROCS
}
```
Each poller maintains connections. Balance pollers against connection count:

```go
// Total connections ≈ pollers × 2
worker.Options{
	MaxConcurrentActivityTaskPollers: 5, // ~10 connections
	MaxConcurrentWorkflowTaskPollers: 5, // ~10 connections
}
```
Scale workers across nodes:

```yaml
# Kubernetes deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: temporal-worker
spec:
  replicas: 10 # Scale horizontally
  template:
    spec:
      containers:
        - name: worker
          resources:
            requests:
              cpu: "2"
              memory: "2Gi"
            limits:
              cpu: "4"
              memory: "4Gi"
```
Scale based on task queue backlog:

```yaml
# HPA based on custom metrics
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: temporal-worker
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: temporal_task_queue_depth
          selector:
            matchLabels:
              task_queue: "my-queue"
        target:
          type: Value
          value: "100"
```
Increase resources and concurrency:

```go
// Larger instance, higher concurrency
worker.Options{
	MaxConcurrentActivityExecutionSize:     200,
	MaxConcurrentWorkflowTaskExecutionSize: 500,
	MaxConcurrentActivityTaskPollers:       20,
	MaxConcurrentWorkflowTaskPollers:       20,
}
```
```promql
# Activity execution rate
rate(temporal_worker_activity_task_execution_total[5m])

# Schedule-to-start latency (waiting for a worker)
histogram_quantile(0.99, rate(temporal_activity_schedule_to_start_latency_bucket[5m]))

# Execution latency
histogram_quantile(0.99, rate(temporal_activity_execution_latency_bucket[5m]))

# Task queue backlog
temporal_task_queue_depth{task_queue="my-queue"}
```
| Metric | Healthy | Action if unhealthy |
|---|---|---|
| Schedule-to-start p99 | < 100ms | Add workers or pollers |
| Execution latency p99 | < timeout | Optimize activity |
| Task queue depth | Near 0 | Add workers |
| Worker CPU | < 80% | Add workers or reduce per-worker concurrency |
Common bottlenecks and fixes: high schedule-to-start latency usually means too few pollers for the task volume (increase `MaxConcurrentActivityTaskPollers`); tasks that are picked up but wait to run usually mean the concurrency cap was hit (increase `MaxConcurrentActivityExecutionSize`).
Example configurations for common scenarios:

```go
// High-throughput worker
worker.Options{
	MaxConcurrentActivityExecutionSize:     200,
	MaxConcurrentWorkflowTaskExecutionSize: 500,
	MaxConcurrentActivityTaskPollers:       20,
	MaxConcurrentWorkflowTaskPollers:       20,
}

// Resource-constrained worker
worker.Options{
	MaxConcurrentActivityExecutionSize:     10,
	MaxConcurrentWorkflowTaskExecutionSize: 50,
	MaxConcurrentActivityTaskPollers:       2,
	MaxConcurrentWorkflowTaskPollers:       2,
}

// Worker calling a rate-limited external service
worker.Options{
	MaxConcurrentActivityExecutionSize: 50,
	WorkerActivitiesPerSecond:          100, // Rate limit
	MaxConcurrentActivityTaskPollers:   5,
}
```
For detailed tuning patterns, consult:
- `references/scaling-patterns.md` - Advanced scaling strategies
- `references/performance-testing.md` - Load testing approaches