Use when code is reported slow, before suggesting any optimization, when choosing between cprof/eprof/fprof/tprof, or when creating Benchee benchmarks to compare approaches
Analyzes Elixir performance issues by selecting the correct profiler and creating benchmarks before suggesting any optimizations.
```
/plugin marketplace add jeffweiss/elixir-production
/plugin install elixir-production@jeffweiss
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Reference files: beam-efficiency.md, beam-gc.md, benchmarking.md, latency.md, profiling.md

**NO OPTIMIZATION WITHOUT PROFILING DATA.** Code review cannot tell you where bottlenecks are — only measurement can. Refuse to suggest optimizations without profiling results.
| What You Need | Tool | Command |
|---|---|---|
| Function call frequency | cprof | `mix profile.cprof -e "Code.here()"` |
| Time per function | eprof | `mix profile.eprof -e "Code.here()"` |
| Detailed call tree | fprof | `mix profile.fprof -e "Code.here()"` |
| Memory allocations (OTP 27+) | tprof | `mix profile.tprof -e "Code.here()" --type memory` |
Start with eprof (lower overhead). Use fprof only when you need call trees.
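For example, to time a hypothetical report-generation call with eprof (the module and function names are placeholders for your own hot path):

```shell
# Wall-clock time per function, sorted by time spent (run inside the Mix project)
mix profile.eprof -e "MyApp.Reports.generate(:monthly)"

# Escalate to fprof only when you need the full call tree (much higher overhead)
mix profile.fprof -e "MyApp.Reports.generate(:monthly)"
```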
Do you have a number for "how slow"?
- NO → L0: Measure first (Benchee baseline)
- YES → Know WHERE time is spent?
  - NO → L1: Profile (eprof/fprof)
  - YES → Algorithmic problem (O(n²)+)?
    - YES → L2: Algorithm/data structure fix
    - NO → CPU-bound?
      - YES → L3: BEAM opts (Task.async_stream, ETS)
      - NO → Database/I/O?
        - YES → L4: DB optimization (preload, indexes)
        - NO → L5: System tuning (last resort)
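An L0 baseline can be sketched with Benchee, assuming `{:benchee, "~> 1.3"}` is in your deps; the two compared functions here are placeholder workloads, not a recommendation:

```elixir
# Minimal Benchee baseline: measure before optimizing anything.
list = Enum.to_list(1..10_000)

Benchee.run(
  %{
    "map |> sum" => fn -> list |> Enum.map(&(&1 * 2)) |> Enum.sum() end,
    "reduce" => fn -> Enum.reduce(list, 0, fn x, acc -> acc + x * 2 end) end
  },
  time: 5,        # seconds of measurement per scenario
  memory_time: 2  # also collect memory statistics
)
```

Save the output; it is the number you compare against after every change.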
Read the file that matches your current problem:
- **profiling.md** — When: About to profile code or choosing a profiler. Iron Law enforcement, profiler usage, escalation ladder L0-L5, common patterns
- **benchmarking.md** — When: Comparing implementation alternatives with Benchee. Benchee templates, complexity analysis, validation workflow
- **latency.md** — When: Investigating tail latency or fan-out amplification. Tail latency reduction (hedged requests), fan-out amplification, measurement pitfalls, pool sizing
- **beam-gc.md** — When: Suspecting garbage collection pressure or large heaps. Per-process GC, ETS for heap reduction, 4 mitigation techniques
- **beam-efficiency.md** — When: Hitting BEAM-specific performance pitfalls (binaries, maps, lists). Seven Myths, binary handling (IO lists, append vs prepend), map efficiency (32-key threshold, key sharing), list O(n) traps, process data copying, atom table, NIFs

Commands:
- `/benchmark` — Create and run Benchee benchmarks for comparison
- `/review [file]` — Review code including performance analysis
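As a quick illustration of the IO-list guidance covered in beam-efficiency.md: building output as nested iodata avoids copying the accumulated binary on every append. A minimal sketch:

```elixir
# Appending with <> copies the whole accumulator each iteration (O(n²) total work);
# an IO list just nests the old accumulator and defers the single copy to the end.
iodata = Enum.reduce(1..3, [], fn i, acc -> [acc, Integer.to_string(i), ","] end)
IO.iodata_to_binary(iodata)
# => "1,2,3,"
```

Most BEAM I/O functions (`IO.write/2`, `:gen_tcp.send/2`) accept iodata directly, so the final flatten is often unnecessary.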