Performance optimization analysis and improvements
Analyzes performance bottlenecks and implements systematic optimizations across code, database, and infrastructure.
```
/plugin marketplace add Benny9193/devflow
/plugin install benny9193-devflow@Benny9193/devflow
```

A systematic approach to identifying and fixing performance issues.
Before optimizing, measure current performance:

```bash
# Node.js
node --prof app.js
# Then analyze the generated log:
node --prof-process isolate-*.log

# Python
python -m cProfile -o profile.stats app.py
# Visualize the stats:
snakeviz profile.stats

# Web
lighthouse https://yoursite.com --output json
```
```javascript
// BAD: N+1 queries (one extra query per user)
const users = await User.findAll();
for (const user of users) {
  user.posts = await Post.findAll({ where: { userId: user.id } });
}

// GOOD: single query with a join
const users = await User.findAll({
  include: [{ model: Post }]
});
```
```sql
-- Add indexes for frequent query patterns
CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_posts_created ON posts(created_at DESC);
```
```typescript
// In-memory cache with TTL
const cache = new Map<string, { value: unknown; expiry: number }>();

async function getCached<T>(key: string, ttlMs: number, fn: () => Promise<T>): Promise<T> {
  const cached = cache.get(key);
  if (cached && Date.now() < cached.expiry) {
    return cached.value as T;
  }
  const value = await fn();
  cache.set(key, { value, expiry: Date.now() + ttlMs });
  return value;
}
```

```typescript
// Redis cache for distributed systems (1-hour TTL)
await redis.setex(`user:${id}`, 3600, JSON.stringify(user));
```
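The in-memory cache above keeps entries in the Map even after they expire, so a long-lived process can leak memory. A minimal sketch of a periodic sweep (the `pruneExpired` helper name and interval are illustrative, not part of any library):

```javascript
// Same entry shape as the TTL cache above: { value, expiry }.
// pruneExpired is an illustrative helper, not a library API.
const cache = new Map();

function pruneExpired() {
  const now = Date.now();
  for (const [key, entry] of cache) {
    if (now >= entry.expiry) cache.delete(key);
  }
}

// In a long-lived process, run the sweep on a timer; unref()
// keeps the interval from holding the process open:
// setInterval(pruneExpired, 60_000).unref();
```

For caches with many keys, a size-bounded LRU policy is the more common production choice.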
```javascript
// Dynamic imports for code splitting
const HeavyComponent = lazy(() => import('./HeavyComponent'));

// Intersection Observer for lazy-loading images
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;
      observer.unobserve(img);
    }
  });
});
```
```javascript
// React
const MemoizedComponent = React.memo(Component);
const memoizedValue = useMemo(() => expensiveCalc(a, b), [a, b]);
const memoizedCallback = useCallback(() => doSomething(a), [a]);
```

```typescript
// Generic memoization
function memoize<T extends (...args: any[]) => any>(fn: T): T {
  const cache = new Map();
  return ((...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  }) as T;
}
```
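To see the effect, here is a self-contained plain-JS run of the same memoize pattern, with a call counter showing that repeated arguments skip the underlying function:

```javascript
// Plain-JS version of the memoize above, plus a counter to
// demonstrate that cached calls skip the wrapped function.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

let calls = 0;
const slowSquare = (n) => { calls++; return n * n; };
const fastSquare = memoize(slowSquare);

fastSquare(4); // computes: calls === 1
fastSquare(4); // cached:   calls is still 1
```

Note the `JSON.stringify` key: arguments that serialize identically (or not at all, like functions) will collide, so this suits plain-data arguments only.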
```javascript
// BAD: sequential; total time is the sum of all three calls
const a = await fetchA();
const b = await fetchB();
const c = await fetchC();

// GOOD: parallel; total time is the slowest single call
const [a, b, c] = await Promise.all([
  fetchA(),
  fetchB(),
  fetchC()
]);
```
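`Promise.all` rejects as soon as any input rejects. When partial results are acceptable, `Promise.allSettled` (standard since ES2020) collects every outcome instead; the `fetchAllSettled` wrapper below is an illustrative sketch, not a library function:

```javascript
// Promise.allSettled never rejects; each result is either
// { status: 'fulfilled', value } or { status: 'rejected', reason }.
async function fetchAllSettled(tasks) {
  const results = await Promise.allSettled(tasks.map((t) => t()));
  const values = results
    .filter((r) => r.status === 'fulfilled')
    .map((r) => r.value);
  const errors = results
    .filter((r) => r.status === 'rejected')
    .map((r) => r.reason);
  return { values, errors };
}
```

This keeps the parallelism win while letting one slow or failing dependency degrade gracefully instead of failing the whole request.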
```javascript
// webpack.config.js
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          chunks: 'all',
        },
      },
    },
  },
};
```
```javascript
// Enable gzip/brotli response compression (Express middleware)
app.use(compression());
```

Compress images: serve WebP and AVIF formats, and implement responsive images with fallbacks:

```html
<picture>
  <source srcset="image.avif" type="image/avif">
  <source srcset="image.webp" type="image/webp">
  <img src="image.jpg" alt="Fallback">
</picture>
```
| Problem | Before | After |
|---|---|---|
| Search in array | O(n) linear | O(log n) binary search |
| Lookup by key | O(n) array scan | O(1) Map/object |
| Deduplication | O(n²) nested loop | O(n) Set |
| Sorting | O(n²) bubble | O(n log n) built-in |
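The O(log n) binary search from the table, as a sketch; it requires the array to already be sorted in ascending order:

```javascript
// Binary search over a sorted array; returns the index of
// `target`, or -1 if absent. Requires ascending sort order.
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}
```

The sort itself costs O(n log n), so this pays off when one sorted array serves many lookups; for one-off lookups, the Map approach below is simpler.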
```javascript
// BAD: O(n) scan on every lookup
const user = users.find(u => u.id === targetId);

// GOOD: build the Map once, then O(1) lookups
const userMap = new Map(users.map(u => [u.id, u]));
const user = userMap.get(targetId);
```
After each optimization, re-measure against the baseline and record the results:

```markdown
# Performance Optimization Report

## Baseline
- Average response time: 450ms
- p95 response time: 1200ms
- Memory usage: 512MB

## Changes Made
1. Added database indexes (+40% query speed)
2. Implemented Redis caching (-60% response time)
3. Fixed N+1 queries (-80% DB calls)

## Results
- Average response time: 120ms (-73%)
- p95 response time: 300ms (-75%)
- Memory usage: 380MB (-26%)

## Remaining Opportunities
1. ...
2. ...
```