From nickcrew-claude-ctx-plugin
Optimizes Python code via profiling, algorithms, data structures, caching, generators, and acceleration like NumPy/Numba/Cython. Use for slow execution, high memory/CPU, or latency issues.
Install: `npx claudepluginhub nickcrew/claude-cortex`

This skill uses the workspace's default tool permissions.
Expert guidance for profiling, optimizing, and accelerating Python applications through systematic analysis, algorithmic improvements, efficient data structures, and acceleration techniques.
The Golden Rule: Never optimize without profiling first. 80% of execution time is spent in 20% of code.
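Following the rule, profiling comes first. A minimal sketch using the stdlib `cProfile` and `pstats`; `slow_sum` is an illustrative stand-in for your own hot function:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately quadratic work so it dominates the profile
    total = 0
    for i in range(n):
        total += sum(range(i))
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(300)
profiler.disable()

# Print the five hottest entries, sorted by cumulative time
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

The report names the functions where time actually goes, which is what decides where to optimize.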
Optimization Hierarchy (in priority order):

1. Algorithms and data structures
2. Caching
3. Memory (generators, `__slots__`)
4. Acceleration (NumPy, Numba, Cython, multiprocessing)
Key Principle: Algorithmic improvements beat micro-optimizations every time.
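To illustrate the principle, here is a hypothetical duplicate check rewritten from O(n²) pair comparisons to a single O(n) pass; both function names are made up for this example:

```python
def has_duplicates_quadratic(items):
    # O(n^2): compares every pair of elements
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): a single pass, remembering what we've seen in a set
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

No amount of micro-tuning inside the nested loops competes with removing the inner loop entirely.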
Load detailed guides for specific optimization areas:
| Task | Load reference |
|---|---|
| Profile code and find bottlenecks | skills/python-performance-optimization/references/profiling.md |
| Algorithm and data structure optimization | skills/python-performance-optimization/references/algorithms.md |
| Memory optimization and generators | skills/python-performance-optimization/references/memory.md |
| String concatenation and file I/O | skills/python-performance-optimization/references/string-io.md |
| NumPy, Numba, Cython, multiprocessing | skills/python-performance-optimization/references/acceleration.md |
Quick wins: use the right data structure for lookups, and `@lru_cache` for expensive functions.

```python
# Slow: O(n) lookup
if item in large_list:  # Bad
    ...

# Fast: O(1) lookup
if item in large_set:  # Good
    ...
```
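The gap is easy to measure with the stdlib `timeit`; sizes here are chosen to keep the run fast, and the item is placed at the end to show the list's worst case:

```python
import timeit

large_list = list(range(50_000))
large_set = set(large_list)
item = 49_999  # worst case for the list: a full scan

list_time = timeit.timeit(lambda: item in large_list, number=100)
set_time = timeit.timeit(lambda: item in large_set, number=100)
print(f"list: {list_time:.5f}s  set: {set_time:.5f}s")
```

The set lookup is a hash probe, so its cost stays flat as the collection grows.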
```python
n = 1_000_000

# Slower: repeated method calls in an explicit loop
result = []
for i in range(n):
    result.append(i * 2)

# Faster (~35% speedup): list comprehension
result = [i * 2 for i in range(n)]
```
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_function(n):
    # Result cached automatically after the first call
    return complex_calculation(n)
```
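As a concrete use, memoizing a recursive function; `fib` here is a stand-in for any expensive pure function, and `cache_info()` shows the cache doing its job:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Memoization turns the naive exponential recursion into O(n)
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))           # 832040
print(fib.cache_info())  # non-zero hits: repeated subproblems came from the cache
```

Caching only pays off when the function is pure (same input, same output) and called repeatedly with the same arguments.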
```python
# Memory inefficient: loads the entire file into a list
def read_file(path):
    with open(path) as f:
        return [line for line in f]

# Memory efficient: streams one line at a time
def read_file(path):
    with open(path) as f:
        for line in f:
            yield line.strip()
```
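A quick demonstration of the streaming version against a throwaway temp file; the function is renamed `read_lines` here only to keep the example self-contained:

```python
import os
import tempfile

def read_lines(path):
    # Same streaming pattern: yield one stripped line at a time
    with open(path) as f:
        for line in f:
            yield line.strip()

# Write a small demo file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("alpha\nbeta\ngamma\n")
    path = tmp.name

lines = read_lines(path)
first = next(lines)   # only one line has been read so far
rest = list(lines)    # the remainder, on demand
os.remove(path)
print(first, rest)
```

Because the generator is lazy, peak memory stays constant no matter how large the file is.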
```python
# Pure Python: ~500ms
result = sum(i**2 for i in range(1_000_000))

# NumPy: ~5ms (~100x faster)
import numpy as np
result = np.sum(np.arange(1_000_000) ** 2)
```
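When the hot loop cannot be vectorized, Numba (a third-party package) can JIT-compile it instead. A sketch of the same sum of squares, guarded so it degrades to plain Python if `numba` is not installed:

```python
try:
    from numba import njit
except ImportError:
    # numba is optional: fall back to an identity decorator
    def njit(func):
        return func

@njit
def sum_of_squares(n):
    # Plain loop; njit compiles it to machine code on first call
    total = 0
    for i in range(n):
        total += i * i
    return total

print(sum_of_squares(1_000))
```

The first call pays a compilation cost; subsequent calls run at near-C speed, which is why Numba suits functions invoked many times.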
"".join() or StringIOStart here: Profile with cProfile to find bottlenecks
Hot path is algorithm?
Hot path is computation?
Hot path is memory?
__slots__, object pooling@lru_cache or custom cacheHot path is I/O?
@lru_cache or custom caching
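The string-building quick win mentioned above, sketched three ways; the `+=` version is quadratic because each iteration copies the whole string built so far:

```python
import io

parts = [str(i) for i in range(1000)]

# Slow: O(n^2) — every += copies the accumulated string
s1 = ""
for p in parts:
    s1 += p

# Fast: one allocation at the end
s2 = "".join(parts)

# StringIO: good for incremental building inside loops with logic
buf = io.StringIO()
for p in parts:
    buf.write(p)
s3 = buf.getvalue()

assert s1 == s2 == s3
```

Prefer `"".join()` when the pieces already exist in a list; reach for `StringIO` when they are produced piecemeal.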