This skill should be used when the user asks about "Ruby performance", "optimization", "profiling", "benchmarking", "memory", "garbage collection", "GC", "benchmark-ips", "stackprof", "memory_profiler", "slow code", "speed up Ruby", or needs guidance on making Ruby code faster.
Provides guidance on profiling, benchmarking, and optimizing Ruby code for performance.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
references/profiling-guide.md - Guide to profiling, benchmarking, and optimizing Ruby code.
Always measure before optimizing. Identify bottlenecks with profiling tools.
Compare implementations with statistical significance:
require "benchmark/ips"
Benchmark.ips do |x|
  x.report("map + flatten") do
    [[1, 2], [3, 4]].map { |a| a * 2 }.flatten
  end

  x.report("flat_map") do
    [[1, 2], [3, 4]].flat_map { |a| a * 2 }
  end

  x.compare!
end
# Output:
# flat_map: 1234567.8 i/s
# map + flatten: 987654.3 i/s - 1.25x slower
require "stackprof"
StackProf.run(mode: :cpu, out: "tmp/stackprof.dump") do
  # Code to profile
  1000.times { expensive_operation }
end
# View results
# $ stackprof tmp/stackprof.dump --text
# $ stackprof tmp/stackprof.dump --method 'YourClass#method'
require "memory_profiler"
report = MemoryProfiler.report do
  # Code to analyze
  data = process_large_dataset
end
report.pretty_print
# Shows allocated objects, retained objects, memory by gem/file/location
# Bad: Creates many intermediate objects
# (Note: both hand-rolled versions leave a trailing ", ", unlike join below.)
def bad_join(items)
  result = ""
  items.each do |item|
    result = result + item.to_s + ", " # Creates new strings each time
  end
  result
end

# Good: Modify in place
def good_join(items)
  result = +"" # Unary + gives an unfrozen (mutable) string
  items.each do |item|
    result << item.to_s << ", "
  end
  result
end

# Best: Use built-in
items.join(", ")
# frozen_string_literal: true
# All string literals are now frozen (immutable)
# Reduces memory by reusing string objects
name = "Alice" # Frozen, shared across uses
# Symbols are interned (shared in memory)
# Good for hash keys, identifiers
hash = { name: "Alice", age: 30 } # Symbol keys
# Strings are mutable, not shared
# Good for user data, content
hash = { "user_input" => value }
# Bad: Loads entire file into memory
File.readlines("large.txt").select { |l| l.include?("ERROR") }.first(10)
# Good: Processes line by line, stops early
File.foreach("large.txt")
    .lazy
    .select { |l| l.include?("ERROR") }
    .first(10)
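The same lazy early-exit behavior can be seen without a file. This small sketch (an illustration, not from the guide) counts how many elements the filter actually examines:

```ruby
# Lazy chains stop pulling elements as soon as first(3) is satisfied.
checked = 0
result = (1..Float::INFINITY).lazy
                             .select { |n| checked += 1; n % 7 == 0 }
                             .first(3)

result  # => [7, 14, 21]
checked # => 21 -- only the elements needed, not the whole (infinite) range
```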
class ConnectionPool
  def initialize(size:)
    @available = Array.new(size) { create_connection }
    @mutex = Mutex.new
  end

  def with_connection
    conn = checkout
    yield conn
  ensure
    checkin(conn)
  end

  private

  def checkout
    @mutex.synchronize { @available.pop }
  end

  def checkin(conn)
    @mutex.synchronize { @available.push(conn) }
  end

  def create_connection
    # Expensive connection creation
  end
end
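A usage sketch for the pool pattern above. Since create_connection is application-specific, this self-contained copy uses Object.new as a stand-in connection:

```ruby
# Minimal pool with a stub connection factory, for illustration only.
class StubPool
  def initialize(size:)
    @available = Array.new(size) { Object.new } # stand-in connections
    @mutex = Mutex.new
  end

  def with_connection
    conn = @mutex.synchronize { @available.pop }
    yield conn
  ensure
    @mutex.synchronize { @available.push(conn) }
  end
end

pool = StubPool.new(size: 2)
results = 10.times.map do |i|
  pool.with_connection { |conn| [i, conn.object_id] }
end

# At most 2 distinct connections ever serviced the 10 requests.
distinct = results.map { |(_, id)| id }.uniq.size
```

The point of the pattern: expensive setup happens once per slot at boot, and `ensure` guarantees the connection returns to the pool even if the block raises.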
require "set"
# O(n) lookup
array = [1, 2, 3, 4, 5]
array.include?(3) # Slow for large arrays
# O(1) lookup
set = Set[1, 2, 3, 4, 5]
set.include?(3) # Fast
# O(1) lookup with value
hash = { 1 => true, 2 => true, 3 => true }
hash.key?(3) # Fast
# Bad: O(n*m) - nested iteration
users.each do |user|
  user.orders.each do |order|
    # O(n*m) iterations
  end
end

# Better: Pre-group data
orders_by_user = orders.group_by(&:user_id)
users.each do |user|
  user_orders = orders_by_user[user.id] || []
  # O(n) + O(m) iterations
end
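A concrete, runnable sketch of the pre-grouping pattern using plain Structs (User, Order, and the field names are illustrative stand-ins for real models):

```ruby
User  = Struct.new(:id, :name)
Order = Struct.new(:user_id, :total)

users  = [User.new(1, "Alice"), User.new(2, "Bob")]
orders = [Order.new(1, 10), Order.new(1, 5), Order.new(2, 7)]

# One O(m) pass builds the index; each user then gets an O(1) lookup.
orders_by_user = orders.group_by(&:user_id)

totals = users.to_h do |user|
  user_orders = orders_by_user[user.id] || []
  [user.name, user_orders.sum(&:total)]
end
# totals => { "Alice" => 15, "Bob" => 7 }
```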
class ExpensiveCalculator
  def result
    @result ||= compute_expensive_result
  end

  # For methods with arguments
  def calculate(n)
    @cache ||= {}
    @cache[n] ||= expensive_computation(n)
  end

  # Clear cache when needed
  def clear_cache!
    @result = nil
    @cache = nil
  end
end
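A quick check that memoization actually short-circuits repeat calls. The computation counter here is illustrative, added only to make the effect observable:

```ruby
class Squarer
  attr_reader :computations

  def initialize
    @computations = 0
    @cache = {}
  end

  def square(n)
    @cache[n] ||= begin
      @computations += 1 # counts real computations, not cache hits
      n * n
    end
  end
end

s = Squarer.new
3.times { s.square(4) } # computed once, served from cache twice
s.square(5)             # computed once

s.square(4)    # => 16
s.computations # => 2
```

One caveat: `||=` recomputes whenever the cached value is nil or false, so for computations that can legitimately return those, prefer `@cache.fetch(n) { @cache[n] = ... }` or a Hash#key? check.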
# Check GC stats
GC.stat
# => { count: 42, heap_allocated_pages: 100, ... }
# Manual GC (usually not needed)
GC.start
# Disable during benchmarks (not in production)
GC.disable
# ... run benchmark ...
GC.enable
# Bad: Many short-lived objects
def bad_process(items)
  items.map { |i| i.to_s }
       .map { |s| s.upcase }
       .map { |s| s.strip }
end

# Good: Chain operations, fewer intermediates
def good_process(items)
  items.map { |i| i.to_s.upcase.strip }
end

# Best: Modify in place when possible
def best_process(items)
  # map! reuses the receiver array instead of allocating a new one
  # (note: this mutates the caller's array)
  items.map! { |i| i.to_s.upcase.strip }
end
# Increase heap slots (reduce GC frequency)
# Note: These values are examples. Profile your application first
# and adjust based on actual memory usage patterns.
RUBY_GC_HEAP_INIT_SLOTS=600000
# Increase malloc limit before GC
RUBY_GC_MALLOC_LIMIT=64000000
# Growth factor for heap
RUBY_GC_HEAP_GROWTH_FACTOR=1.25
Warning: GC tuning values should be determined through profiling your specific application. Avoid cargo-cult optimization by copying values without understanding your application's memory patterns. Always measure before and after tuning.
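One way to follow that advice before touching any knobs: take deltas of GC.stat's cumulative allocation counter around the code paths you care about. The two workloads below are illustrative stand-ins:

```ruby
def allocated_objects
  GC.stat(:total_allocated_objects) # cumulative counter, so take deltas
end

before = allocated_objects
10_000.times { "a" + "b" } # allocates fresh strings every iteration
string_allocs = allocated_objects - before

before = allocated_objects
10_000.times { :reused }   # symbols are interned; no per-iteration object
symbol_allocs = allocated_objects - before

# string_allocs will be at least one object per iteration;
# symbol_allocs stays near zero.
```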
require "concurrent"
# Thread pool for I/O-bound work
pool = Concurrent::ThreadPoolExecutor.new(
  min_threads: 5,
  max_threads: 10,
  max_queue: 100
)

urls.each do |url|
  pool.post do
    fetch_url(url)
  end
end

pool.shutdown
pool.wait_for_termination
# True parallelism for CPU-bound work
slice_size = (data.size / 4.0).ceil # avoid a zero slice size for small inputs
ractors = data.each_slice(slice_size).map do |chunk|
  Ractor.new(chunk) do |items|
    items.map { |item| expensive_computation(item) }
  end
end

results = ractors.flat_map(&:take)
require "async"
Async do
  results = urls.map do |url|
    Async do
      fetch_url(url)
    end
  end.map(&:wait)
end
# Bad
result = ""
items.each { |i| result += i.to_s }
# Good
result = items.map(&:to_s).join
# Also good for large strings
require "stringio"

io = StringIO.new
items.each { |i| io << i.to_s }
result = io.string
# Use appropriate methods
array.any? { |x| x > 5 } # Stops at first match
array.all? { |x| x > 5 } # Stops at first failure
array.find { |x| x > 5 } # Returns first match
# Avoid repeated operations
# Bad
array.count > 0 # Counts all elements
# Good
array.any? # Stops immediately
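One caveat not covered above: a blockless any? tests the truthiness of the elements, so it is not strictly interchangeable with count > 0 when nil or false can appear in the array; !array.empty? is the exact equivalent:

```ruby
flags = [nil, false]

flags.count > 0 # => true  (there are two elements)
flags.any?      # => false (no truthy element)
!flags.empty?   # => true  (exact emptiness test)
```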
# Bad
array.select { ... }.first
# Good
array.find { ... }
# Use fetch with default
hash.fetch(:key, default_value)
hash.fetch(:key) { compute_default }
# Transform keys/values efficiently
hash.transform_keys(&:to_sym)
hash.transform_values(&:to_s)
# Merge in place
hash.merge!(other_hash) # Modifies hash
references/profiling-guide.md - Detailed profiling workflows and tool usage