From elixir-dev
Guides Elixir OTP design: avoids GenServer bottlenecks with ETS reads, uses Task.Supervisor over async, DynamicSupervisor+Registry for dynamic processes, :pg for distribution, Broadway vs Oban for queues.
`npx claudepluginhub gsmlg-dev/code-agent --plugin elixir-dev`

This skill uses the workspace's default tool permissions.
Paradigm shifts for OTP design. These insights challenge typical concurrency and state management patterns.
GENSERVER IS A BOTTLENECK BY DESIGN
A GenServer processes ONE message at a time. Before creating one, ask whether you actually need serialized access to state.
The ETS pattern: GenServer owns ETS table, writes serialize through GenServer, reads bypass it entirely with :read_concurrency.
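A minimal sketch of the ETS pattern, assuming a hypothetical `MyApp.Cache` module: writes serialize through the owning GenServer, while reads hit the table directly and never touch the process mailbox.

```elixir
defmodule MyApp.Cache do
  use GenServer

  @table :my_app_cache

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Reads bypass the GenServer entirely -- no message, no bottleneck.
  def get(key) do
    case :ets.lookup(@table, key) do
      [{^key, value}] -> {:ok, value}
      [] -> :error
    end
  end

  # Writes serialize through the owning process.
  def put(key, value), do: GenServer.call(__MODULE__, {:put, key, value})

  @impl true
  def init(_opts) do
    table = :ets.new(@table, [:named_table, :set, :protected, read_concurrency: true])
    {:ok, table}
  end

  @impl true
  def handle_call({:put, key, value}, _from, table) do
    :ets.insert(table, {key, value})
    {:reply, :ok, table}
  end
end
```

Because the GenServer owns the table, the table is cleaned up automatically if the process dies, and `:protected` still lets any process read it.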
No exceptions: don't wrap stateless functions in a GenServer, and don't create a GenServer "for organization".
| Function | Use For |
|---|---|
| `call/3` | Synchronous requests expecting replies |
| `cast/2` | Fire-and-forget messages |
When in doubt, use call to ensure back-pressure. Set appropriate timeouts for call/3.
Use handle_continue/2 for post-init work—keeps init/1 fast and non-blocking.
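A sketch of the `handle_continue/2` pattern with a hypothetical `MyApp.Loader`: `init/1` returns immediately so the supervisor isn't blocked, and the slow work runs as the process's first message, before anything else in its mailbox.

```elixir
defmodule MyApp.Loader do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(opts) do
    # Return fast; defer the expensive part via {:continue, ...}.
    {:ok, %{data: nil, opts: opts}, {:continue, :load}}
  end

  @impl true
  def handle_continue(:load, state) do
    # Runs right after init/1, guaranteed before any other message.
    {:noreply, %{state | data: expensive_load(state.opts)}}
  end

  # Stand-in for a slow startup task (file read, DB warm-up, ...).
  defp expensive_load(_opts), do: :crypto.strong_rand_bytes(16)
end
```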
Task.async spawns a linked process: if the task crashes, the caller crashes too.
| Pattern | On task crash |
|---|---|
| `Task.async/1` | Caller crashes (linked, unsupervised) |
| `Task.Supervisor.async/2` | Caller crashes (linked, supervised) |
| `Task.Supervisor.async_nolink/2` | Caller survives, can handle the error |
Use Task.Supervisor for: Production code, graceful shutdown, observability, async_nolink.
Use Task.async for: Quick experiments, scripts, when crash-together is acceptable.
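A sketch of the `async_nolink` flow, assuming a `Task.Supervisor` named `MyApp.TaskSupervisor` is already in the supervision tree (`{Task.Supervisor, name: MyApp.TaskSupervisor}`):

```elixir
require Logger

task =
  Task.Supervisor.async_nolink(MyApp.TaskSupervisor, fn ->
    # Work that may crash without taking the caller down.
    :rand.uniform(100)
  end)

# yield/2 waits up to the timeout; shutdown/1 reaps the task if it's still running.
case Task.yield(task, 5_000) || Task.shutdown(task) do
  {:ok, result} -> result
  {:exit, reason} -> Logger.warning("task failed: #{inspect(reason)}")
  nil -> Logger.warning("task did not finish in time")
end
```

Because the task is not linked, a crash arrives as `{:exit, reason}` from `Task.yield/2` instead of killing the caller.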
DynamicSupervisor only supports :one_for_one (dynamic children have no ordering). Use Registry for names—never create atoms dynamically:
```elixir
defp via_tuple(id), do: {:via, Registry, {MyApp.Registry, id}}
```
PartitionSupervisor scales DynamicSupervisor for millions of children.
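Putting the pieces together, a sketch with hypothetical `MyApp.Registry` and `MyApp.DynamicSup` names, both assumed to be started in the application tree (`{Registry, keys: :unique, name: MyApp.Registry}` and `{DynamicSupervisor, strategy: :one_for_one, name: MyApp.DynamicSup}`):

```elixir
defmodule MyApp.Worker do
  use GenServer

  # Register under a via tuple -- the id can be any term, no atoms created.
  def start_link(id), do: GenServer.start_link(__MODULE__, id, name: via_tuple(id))

  def start_child(id) do
    DynamicSupervisor.start_child(MyApp.DynamicSup, {__MODULE__, id})
  end

  def whereis(id), do: GenServer.whereis(via_tuple(id))

  defp via_tuple(id), do: {:via, Registry, {MyApp.Registry, id}}

  @impl true
  def init(id), do: {:ok, %{id: id}}
end
```

`MyApp.Worker.start_child("user:42")` then starts a supervised, Registry-named process that `whereis/1` can look up by the same term.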
| Tool | Scope | Use Case |
|---|---|---|
| Registry | Single node | Named dynamic processes |
| :pg | Cluster-wide | Process groups, pub/sub |
:pg replaced deprecated :pg2. Horde provides distributed supervisor/registry with CRDTs.
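A minimal `:pg` sketch (OTP 23+); `:pg` must be started first, e.g. with the child spec `%{id: :pg, start: {:pg, :start_link, []}}` in the supervision tree. The group name `:notifications` is illustrative.

```elixir
# Join the current process to a cluster-wide group.
:ok = :pg.join(:notifications, self())

# Broadcast to every member, on every connected node.
for pid <- :pg.get_members(:notifications) do
  send(pid, {:notify, :deploy_finished})
end
```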
| Tool | Use For |
|---|---|
| Broadway | External queues (SQS, Kafka, RabbitMQ) — data ingestion with batching |
| Oban | Background jobs with database persistence |
Broadway is NOT a job queue.
Processors are for runtime, not code organization. Dispatch to modules in handle_message, don't add processors for different message types.
one_for_all is for Broadway bugs, not your code. Your handle_message errors are caught and result in failed messages, not supervisor restarts.
Handle expected failures in the producer (connection loss, rate limits). Reserve max_restarts for unexpected bugs.
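A sketch of dispatching inside `handle_message/3` instead of adding processors per message type; `MyApp.Orders`, `MyApp.Refunds`, and the use of `Jason` for decoding are assumptions, and `start_link/1` with the producer config is omitted.

```elixir
defmodule MyApp.Pipeline do
  use Broadway

  @impl true
  def handle_message(_processor, %Broadway.Message{} = message, _context) do
    # Route on payload shape; a raise here fails this message, not the pipeline.
    case Jason.decode!(message.data) do
      %{"type" => "order"} = event -> MyApp.Orders.handle(event)
      %{"type" => "refund"} = event -> MyApp.Refunds.handle(event)
    end

    message
  end
end
```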
| Strategy | Children Relationship |
|---|---|
| :one_for_one | Independent |
| :one_for_all | Interdependent (all restart) |
| :rest_for_one | Sequential dependency |
Use :max_restarts and :max_seconds to prevent restart loops.
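A supervision-tree sketch with restart limits; the children and the limit values are illustrative, not prescriptive.

```elixir
defmodule MyApp.Supervisor do
  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    children = [
      MyApp.Cache,
      {Task.Supervisor, name: MyApp.TaskSupervisor}
    ]

    # At most 3 restarts within 5 seconds, then this supervisor itself exits
    # and escalates the failure to its parent.
    Supervisor.init(children, strategy: :one_for_one, max_restarts: 3, max_seconds: 5)
  end
end
```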
Think about failure cascades BEFORE coding.
```
Need state?
├── No → Plain function
└── Yes → Complex behavior?
    ├── No → Agent
    └── Yes → Supervision?
        ├── No → spawn_link
        └── Yes → Request/response?
            ├── No → Task.Supervisor
            └── Yes → Explicit states?
                ├── No → GenServer
                └── Yes → GenStateMachine
```
| Need | Use |
|---|---|
| Memory cache | ETS (:read_concurrency for reads) |
| Static config | :persistent_term (faster than ETS) |
| Disk persistence | DETS (2GB limit) |
| Transactions/Distribution | Mnesia |
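A sketch of the `:persistent_term` row, with a hypothetical feature-flags map: writes trigger a global GC scan of every process, so set values once at boot and only read them on the hot path.

```elixir
# Write once, at application start. The {module, key} tuple avoids collisions.
:persistent_term.put({MyApp, :feature_flags}, %{new_checkout: true})

# Reads are constant-time and copy-free -- faster than an ETS lookup.
flags = :persistent_term.get({MyApp, :feature_flags})
flags.new_checkout
```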
```elixir
:sys.get_state(pid)    # Current state
:sys.trace(pid, true)  # Trace events (TURN OFF when done!)
```
Phoenix, Ecto, and most libraries emit telemetry events. Attach handlers:
```elixir
:telemetry.attach("my-handler", [:phoenix, :endpoint, :stop], &handle/4, nil)
```
Use Telemetry.Metrics + reporters (StatsD, Prometheus, LiveDashboard).
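A metric-definition sketch, assuming the `telemetry_metrics` package plus a reporter dependency such as `telemetry_metrics_statsd`; the event names are the standard ones emitted by Phoenix and Ecto.

```elixir
import Telemetry.Metrics

metrics = [
  # Request duration from Phoenix endpoint stop events.
  summary("phoenix.endpoint.stop.duration", unit: {:native, :millisecond}),
  # Count of Ecto queries (repo event prefix follows the app name).
  counter("my_app.repo.query.count")
]

# Hand the definitions to a reporter in the supervision tree, e.g.:
# {TelemetryMetricsStatsd, metrics: metrics}
```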
Seeing any of these red flags? Re-read The Iron Law and work through the decision tree above.