From python-development
Profile and optimize Python code using cProfile, memory profilers, and performance best practices. TRIGGER WHEN: debugging slow Python code, optimizing bottlenecks, or improving application performance. DO NOT TRIGGER WHEN: the task is outside the specific scope of this component.
npx claudepluginhub acaprino/alfio-claude-plugins --plugin python-development
This skill uses the workspace's default tool permissions.
Profile, analyze, and optimize Python code for better performance - CPU profiling, memory optimization, and implementation best practices.
import time
import timeit
# Simple timing
start = time.time()
result = sum(range(1000000))
print(f"Execution time: {time.time() - start:.4f} seconds")
# Accurate benchmarking with timeit
execution_time = timeit.timeit("sum(range(1000000))", number=100)
print(f"Average time: {execution_time/100:.6f} seconds")
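For noisy environments, timeit.repeat runs several independent trials; taking the minimum is a common way to estimate the least interference-affected time. A sketch using the same workload as above:

```python
import timeit

# Run 5 trials of 10 executions each; the fastest trial is the
# best estimate of the statement's intrinsic cost.
trials = timeit.repeat("sum(range(1000000))", repeat=5, number=10)
best = min(trials) / 10
print(f"Best average time: {best:.6f} seconds")
```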
python -m cProfile -o output.prof script.py
python -m pstats output.prof
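The same data can be collected programmatically without leaving the process. A minimal sketch (the work function is a placeholder) that profiles a call and prints the top entries sorted by cumulative time:

```python
import cProfile
import io
import pstats

def work():
    """Placeholder workload to profile."""
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Print the 5 most expensive entries by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```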
pip install line-profiler
kernprof -l -v script.py
pip install memory-profiler
python -m memory_profiler script.py
pip install py-spy
py-spy record -o profile.svg -- python script.py
py-spy top --pid 12345
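py-spy samples an external process; for in-process memory diagnostics, the standard library's tracemalloc can compare snapshots to spot growth. A sketch with a deliberately retained allocation, no third-party packages assumed:

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Allocate something measurable between the two snapshots.
retained = [bytes(1000) for _ in range(1000)]

after = tracemalloc.take_snapshot()
top = after.compare_to(before, "lineno")
for stat in top[:3]:
    print(stat)  # file:line entries with the largest net allocation growth
```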
Optimization patterns:
- str.join() over += concatenation
- functools.lru_cache for expensive pure functions
- weakref.WeakValueDictionary for GC-friendly caches
- tracemalloc for detecting memory leaks (snapshot comparison)
- weakref caches to allow garbage collection
- Batch database writes with executemany() and a single commit
- Select only the columns you need (avoid SELECT *)
- EXPLAIN QUERY PLAN for query analysis

from functools import wraps
import time

def benchmark(func):
    """Decorator to benchmark function execution."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.6f} seconds")
        return result
    return wrapper
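As a sketch of one pattern from the list above, functools.lru_cache memoizes a pure function so repeated calls with the same arguments skip recomputation entirely:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Naive recursion becomes linear-time once results are cached."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # instant; the uncached exponential recursion would never finish
info = fib.cache_info()
print(info.hits, info.misses)
```

cache_info() reports hits and misses, which is a quick way to verify the cache is actually being used before reaching for a profiler.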
For pytest-based benchmarking: pip install pytest-benchmark
references/optimization-patterns.md - detailed code examples for all profiling tools, optimization patterns (list comprehensions, generators, string concat, dict lookups, local vars, function call overhead), advanced optimization (NumPy, lru_cache, slots, multiprocessing, async I/O), database optimization, memory leak detection, and benchmarking tools