Analyzes and optimizes application performance across frontend, backend, and database layers. Use when diagnosing slowness, improving load times, optimizing queries, reducing bundle size, or when asked about performance issues.
/plugin marketplace add CloudAI-X/claude-workflow
/plugin install project-starter@claude-workflow
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Copy this checklist and track progress:
Performance Optimization Progress:
- [ ] Step 1: Measure baseline performance
- [ ] Step 2: Identify bottlenecks
- [ ] Step 3: Apply targeted optimizations
- [ ] Step 4: Measure again and compare
- [ ] Step 5: Repeat if targets not met
Critical Rule: Never optimize without data. Always profile before and after changes.
# Node.js profiling
node --prof app.js
node --prof-process isolate*.log > profile.txt
# Python profiling
python -m cProfile -o profile.stats app.py
python -m pstats profile.stats
# Web performance
lighthouse https://example.com --output=json
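Beyond one-off profiler runs, it helps to capture a repeatable baseline number for the specific code path you suspect. A minimal sketch using Node's built-in perf_hooks (handleRequest is a placeholder for the operation under test):

```js
const { performance, PerformanceObserver } = require('node:perf_hooks');

// Log every completed measurement
const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${entry.duration.toFixed(1)}ms`);
  }
});
obs.observe({ entryTypes: ['measure'] });

async function measureBaseline() {
  performance.mark('start');
  await handleRequest(); // placeholder: the operation being measured
  performance.mark('end');
  performance.measure('handleRequest', 'start', 'end');
}

measureBaseline();
```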
| Category | Symptoms | Tools |
|---|---|---|
| CPU | High CPU usage, slow computation | Profiler, flame graphs |
| Memory | High RAM, GC pauses, OOM | Heap snapshots, memory profiler |
| I/O | Slow disk/network, waiting | strace, network inspector |
| Database | Slow queries, lock contention | Query analyzer, EXPLAIN |
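For memory-class problems, one quick option (assuming Node.js) is to write a heap snapshot from inside the running process and open it in Chrome DevTools under Memory:

```js
const v8 = require('node:v8');

// Blocks the event loop while writing, so avoid calling this on a hot path
const file = v8.writeHeapSnapshot();
console.log(`Heap snapshot written to ${file}`);
```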
Bundle Size:
// ❌ Import entire library
import _ from 'lodash';
// ✅ Import only needed functions
import debounce from 'lodash/debounce';
// ✅ Use dynamic imports for code splitting
const HeavyComponent = lazy(() => import('./HeavyComponent'));
Rendering:
// ❌ Render on every parent update
function Child({ data }) {
  return <ExpensiveComponent data={data} />;
}
// ✅ Memoize when props don't change
const Child = memo(function Child({ data }) {
  return <ExpensiveComponent data={data} />;
});
// ✅ Use useMemo for expensive computations
const processed = useMemo(() => expensiveCalc(data), [data]);
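To confirm a memoization change actually reduced render time, React's built-in <Profiler> reports per-commit durations; a small sketch wrapping the Child component from above:

```jsx
import { Profiler } from 'react';

// Logs how long each commit of the wrapped subtree took
function onRender(id, phase, actualDuration) {
  console.log(`${id} (${phase}): ${actualDuration.toFixed(1)}ms`);
}

function Parent({ data }) {
  return (
    <Profiler id="Child" onRender={onRender}>
      <Child data={data} />
    </Profiler>
  );
}
```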
Images:
<!-- ❌ Unoptimized -->
<img src="large-image.jpg" />
<!-- ✅ Optimized -->
<img
  src="image.webp"
  srcset="image-300.webp 300w, image-600.webp 600w"
  sizes="(max-width: 600px) 300px, 600px"
  loading="lazy"
  decoding="async"
/>
Database Queries:
-- ❌ N+1 Query Problem
SELECT * FROM users;
-- Then for each user:
SELECT * FROM orders WHERE user_id = ?;
-- ✅ Single query with JOIN
SELECT u.*, o.*
FROM users u
LEFT JOIN orders o ON u.id = o.user_id;
-- ✅ Or use pagination
SELECT * FROM users LIMIT 100 OFFSET 0;
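When a JOIN is awkward (very different row shapes, separate services), the N+1 pattern can also be collapsed at the application layer into a single IN/ANY query. A sketch, assuming a generic db.query(sql, params) helper and Postgres-style ANY($1) parameters:

```js
async function loadUsersWithOrders() {
  const users = await db.query('SELECT * FROM users LIMIT 100');
  const userIds = users.map((u) => u.id);

  // One query for all orders instead of one query per user
  const orders = await db.query(
    'SELECT * FROM orders WHERE user_id = ANY($1)',
    [userIds]
  );

  // Group orders by user in memory
  const ordersByUser = new Map();
  for (const order of orders) {
    const list = ordersByUser.get(order.user_id) ?? [];
    list.push(order);
    ordersByUser.set(order.user_id, list);
  }

  return users.map((u) => ({ ...u, orders: ordersByUser.get(u.id) ?? [] }));
}
```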
Caching Strategy:
// Multi-layer caching
const getUser = async (id) => {
  // L1: In-memory cache (fastest)
  let user = memoryCache.get(`user:${id}`);
  if (user) return user;

  // L2: Redis cache (fast)
  const cached = await redis.get(`user:${id}`);
  if (cached) {
    user = JSON.parse(cached);
    memoryCache.set(`user:${id}`, user, 60);
    return user;
  }

  // L3: Database (slow)
  user = await db.users.findById(id);
  await redis.setex(`user:${id}`, 3600, JSON.stringify(user));
  memoryCache.set(`user:${id}`, user, 60);
  return user;
};
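The layers also need to be invalidated when the underlying record changes, or reads will serve stale data. A sketch, assuming the same memoryCache and redis clients plus a db.users.update helper (names illustrative):

```js
const updateUser = async (id, fields) => {
  const user = await db.users.update(id, fields);
  // Drop both cache layers so the next read repopulates them with fresh data
  memoryCache.del(`user:${id}`);
  await redis.del(`user:${id}`);
  return user;
};
```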
Async Processing:
// ❌ Blocking operation
app.post('/upload', async (req, res) => {
  await processVideo(req.file); // Takes 5 minutes
  res.send('Done');
});
// ✅ Queue for background processing
app.post('/upload', async (req, res) => {
  const jobId = await queue.add('processVideo', { file: req.file });
  res.send({ jobId, status: 'processing' });
});
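The queued job still has to run somewhere; a sketch of the worker side, assuming a Bull-style API where queue.add is paired with queue.process:

```js
queue.process('processVideo', async (job) => {
  await processVideo(job.data.file);
  // Persist or publish the result so the client can check the job status later
});
```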
// ❌ O(n²) - nested loops
function findDuplicates(arr) {
  const duplicates = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) duplicates.push(arr[i]);
    }
  }
  return duplicates;
}
// ✅ O(n) - hash map
function findDuplicates(arr) {
  const seen = new Set();
  const duplicates = new Set();
  for (const item of arr) {
    if (seen.has(item)) duplicates.add(item);
    seen.add(item);
  }
  return [...duplicates];
}
After applying optimizations, re-run profiling and compare:
Comparison Checklist:
- [ ] Run same profiling tools as baseline
- [ ] Compare metrics before vs after
- [ ] Verify no regressions in other areas
- [ ] Document improvement percentages
| Metric | Good | Needs Work | Poor |
|---|---|---|---|
| LCP | < 2.5s | 2.5-4s | > 4s |
| FID | < 100ms | 100-300ms | > 300ms |
| CLS | < 0.1 | 0.1-0.25 | > 0.25 |
| TTFB | < 800ms | 800ms-1.8s | > 1.8s |
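These thresholds are easiest to track with real-user data. A sketch using the web-vitals package (v3+ function names; the /analytics endpoint is illustrative):

```js
import { onLCP, onCLS, onTTFB } from 'web-vitals';

function report(metric) {
  // metric.name, metric.value, and metric.rating ('good' | 'needs-improvement' | 'poor')
  navigator.sendBeacon('/analytics', JSON.stringify(metric));
}

onLCP(report);
onCLS(report);
onTTFB(report);
```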
| Metric | Target |
|---|---|
| P50 Latency | < 100ms |
| P95 Latency | < 500ms |
| P99 Latency | < 1s |
| Error Rate | < 0.1% |
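P50/P95/P99 are percentiles over observed request latencies; most APM tools report them directly, but a nearest-rank sketch makes the definition concrete (sample values illustrative):

```js
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(sorted.length - 1, Math.max(0, rank - 1))];
}

const latenciesMs = [42, 51, 38, 95, 120, 47, 300, 61];
console.log('P50:', percentile(latenciesMs, 50), 'ms'); // 51
console.log('P95:', percentile(latenciesMs, 95), 'ms'); // 300
```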
After optimization, validate results:
Performance Validation:
- [ ] Metrics improved from baseline
- [ ] No functionality regressions
- [ ] No new errors introduced
- [ ] Changes are sustainable (not one-time fixes)
- [ ] Performance gains documented
If targets not met, return to Step 2 and identify remaining bottlenecks.