Optimizes application performance using profiling-driven methodology: CPU/memory profiling, caching strategies, query optimization, indexing, lazy loading, connection pooling, and load testing.
Install with `npx claudepluginhub krzysztofsurdy/code-virtuoso --plugin playbooks-virtuoso`.
Performance work follows one rule above all others: measure before you change anything. Intuition about bottlenecks is wrong more often than it is right. Every optimization should start with profiling, produce a hypothesis, apply a targeted fix, and verify with another measurement.
| Principle | Meaning |
|---|---|
| Measure first | Never optimize without profiling data -- gut feelings about bottlenecks are unreliable |
| Optimize the critical path | Focus on the code that runs most frequently or blocks user-visible latency |
| Set budgets | Define acceptable latency, throughput, and resource usage before you start |
| Avoid premature optimization | Readable, correct code first -- optimize only when measurements show a real problem |
| Know your tradeoffs | Every optimization trades something (memory for speed, complexity for throughput, freshness for latency) |
Profiling identifies where time and resources are spent. Without it, you are guessing.
| Type | What It Reveals | When to Use |
|---|---|---|
| CPU profiling | Hot functions, call frequency, execution time distribution | Slow request handling, high CPU usage |
| Memory profiling | Allocation rates, heap size, object retention, leaks | Growing memory usage, OOM errors, GC pressure |
| I/O profiling | Disk reads/writes, network calls, blocking waits | Slow file operations, external service latency |
| Database profiling | Query execution time, query count per request, slow queries | High DB load, N+1 patterns, missing indexes |
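As a minimal sketch of CPU profiling in Python, the standard library's `cProfile` and `pstats` modules can reveal which functions dominate execution time. The `slow_sum` function here is a made-up stand-in for a hot code path:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive: per-iteration list construction dominates the profile
    total = 0
    for i in range(n):
        total += sum([i] * 100)
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(5_000)
profiler.disable()

# Report the hottest functions sorted by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

The report names `slow_sum` and its call counts, pointing directly at the hot function instead of leaving you to guess.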
Define limits that trigger action when exceeded, for example:
- p95/p99 response latency per endpoint
- Maximum database queries per request
- Heap size and allocation-rate ceilings
- Minimum throughput under expected load
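A budget check can be as simple as a lookup table compared against observed metrics. The budget values and metric names below are hypothetical; real values come from your own SLOs:

```python
# Hypothetical budgets -- real numbers come from your service's SLOs
BUDGETS = {
    "p95_latency_ms": 200,
    "max_heap_mb": 512,
    "max_queries_per_request": 10,
}

def within_budget(metric, observed):
    """Return True if the observed value stays within its budget."""
    return observed <= BUDGETS[metric]

observed = [("p95_latency_ms", 350), ("max_queries_per_request", 4)]
violations = [m for m, v in observed if not within_budget(m, v)]
```

Wiring a check like this into CI or a deploy gate turns the budget from a document into an enforced limit.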
See Profiling Patterns Reference for detailed profiling workflows, bottleneck signatures, and load testing strategies.
Caching eliminates redundant computation and data fetching by storing results closer to where they are needed.
| Layer | Location | Latency | Use Case |
|---|---|---|---|
| L1 -- In-process | Application memory (object cache, memoization) | Nanoseconds | Hot data accessed many times per request |
| L2 -- Distributed | Redis, Memcached, shared cache | Sub-millisecond to low milliseconds | Data shared across application instances |
| HTTP cache | Browser, reverse proxy (Varnish, Nginx) | Zero network round-trip for client cache | Static assets, cacheable API responses |
| CDN | Edge servers worldwide | Low latency from geographic proximity | Static files, pre-rendered pages, media |
| Database cache | Query result cache, buffer pool | Varies | Repeated identical queries |
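At the L1 layer, in-process memoization is often a one-line change. This sketch uses Python's built-in `functools.lru_cache`; the `expensive_lookup` function is a stand-in for any costly computation or remote fetch:

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=256)
def expensive_lookup(key):
    # Stand-in for a costly computation or remote fetch
    calls["count"] += 1
    return key.upper()

expensive_lookup("user:42")
expensive_lookup("user:42")  # second call served from the in-process cache
```

The second call never executes the function body, which is the entire point of L1 caching: repeated hot-key access costs nanoseconds, not a recomputation.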
| Strategy | How It Works | Best For |
|---|---|---|
| TTL-based | Cache entries expire after a fixed duration | Data that tolerates bounded staleness |
| Event-based | Cache is cleared when the source data changes | Data that must stay fresh after writes |
| Write-through | Writes update both the cache and the backing store simultaneously | Read-heavy workloads needing strong consistency |
| Write-behind | Writes update the cache immediately; backing store is updated asynchronously | High write throughput where eventual consistency is acceptable |
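A TTL-based cache can be sketched in a few lines. This minimal in-memory version (distributed caches like Redis implement the same semantics via `EXPIRE`) evicts lazily on read:

```python
import time

class TTLCache:
    """Minimal TTL cache: entries expire ttl seconds after being set."""

    def __init__(self, ttl):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict the stale entry
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl=0.05)
cache.set("greeting", "hello")
fresh = cache.get("greeting")   # within TTL: hit
time.sleep(0.06)
stale = cache.get("greeting")   # past TTL: miss
```

The tradeoff is exactly the one in the table above: readers may see data up to `ttl` seconds stale in exchange for never querying the source within that window.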
When a popular cache key expires, many concurrent requests may all try to regenerate it at once, overwhelming the backend (a cache stampede). Common approaches prevent this:
- Request coalescing: a lock ensures only one caller regenerates the value while the others wait for the result
- Probabilistic early expiration: entries refresh slightly before their hard TTL, spreading regeneration over time
- Stale-while-revalidate: serve the expired value immediately while a background task refreshes the entry
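The locking approach (sometimes called single-flight) can be sketched with a plain `threading.Lock`. This is an illustrative in-process version; distributed caches need a distributed lock or a library that provides coalescing:

```python
import threading

class SingleFlight:
    """Coalesce concurrent regenerations: only one caller computes per key."""

    def __init__(self):
        self._lock = threading.Lock()
        self._cache = {}
        self.computes = 0  # exposed for illustration only

    def get(self, key, compute):
        value = self._cache.get(key)
        if value is not None:
            return value
        with self._lock:
            # Re-check after acquiring the lock: another thread may have
            # filled the cache while we were waiting.
            value = self._cache.get(key)
            if value is None:
                self.computes += 1
                value = compute(key)
                self._cache[key] = value
            return value

sf = SingleFlight()
threads = [threading.Thread(target=sf.get, args=("k", lambda k: k * 2))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Eight concurrent requests for the same missing key result in exactly one backend computation; the other seven wait briefly and read the cached result.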
See Caching Strategies Reference for implementation patterns with multi-language examples.
Database queries are the most common performance bottleneck in web applications.
The N+1 problem occurs when code fetches a list of N records, then issues one additional query per record to load related data. Instead of 1 query, you execute N+1.
Detection signals:
- Query logs show many near-identical queries differing only in an ID value
- The number of queries per request grows with the size of the result set
- A page rendering a list is far slower than one rendering a single record
Prevention strategies:
- Eager-load related data with a JOIN or your ORM's prefetch/include mechanism
- Batch the lookups into a single `IN (...)` query keyed by the collected IDs
- Use a batching layer (e.g. a dataloader) that coalesces lookups within a request
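The two patterns can be contrasted with an in-memory SQLite database (the schema and data here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bo');
    INSERT INTO books VALUES (1, 1, 'A1'), (2, 1, 'A2'), (3, 2, 'B1');
""")

# N+1 pattern: one query for the list, then one query per author
authors = conn.execute("SELECT id, name FROM authors").fetchall()
n_plus_1_queries = 1
for author_id, _name in authors:
    conn.execute("SELECT title FROM books WHERE author_id = ?",
                 (author_id,)).fetchall()
    n_plus_1_queries += 1

# Batched alternative: one IN (...) query for all related rows
ids = [a[0] for a in authors]
placeholders = ",".join("?" * len(ids))
books = conn.execute(
    f"SELECT author_id, title FROM books WHERE author_id IN ({placeholders})",
    ids,
).fetchall()
batched_queries = 2  # the authors query plus one IN query
```

With 2 authors the difference is 3 queries versus 2; with 1,000 authors it is 1,001 versus 2, which is why N+1 problems often only surface as data volumes grow.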
Opening a database connection is expensive (TCP handshake, authentication, TLS negotiation). Connection pools maintain a set of reusable connections:
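A pool reduces to a fixed set of pre-opened connections behind a blocking queue. This sketch uses SQLite as a stand-in for a networked database (real pools also handle health checks, timeouts, and reconnection):

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal fixed-size pool: borrow a connection, return it when done."""

    def __init__(self, size):
        self._pool = queue.Queue()
        for _ in range(size):
            # A real pool pays the TCP/auth/TLS cost here, once per connection
            self._pool.put(sqlite3.connect(":memory:", check_same_thread=False))

    def acquire(self):
        return self._pool.get()  # blocks if every connection is in use

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(size=2)
conn = pool.acquire()
value = conn.execute("SELECT 1 + 1").fetchone()[0]
pool.release(conn)
```

Because the pool size bounds concurrency, an exhausted pool shows up as callers blocking in `acquire` -- the "intermittent slowness under load" symptom in the table below.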
See Database Optimization Reference for query patterns, explain plan analysis, and multi-language examples.
| Pattern | Description |
|---|---|
| Object pooling | Reuse expensive objects instead of allocating and discarding them |
| Streaming | Process large datasets as streams instead of loading everything into memory |
| Lazy initialization | Defer creation of expensive objects until they are actually needed |
| Weak references | Hold references that do not prevent garbage collection |
| Buffer reuse | Allocate buffers once and reuse them across operations |
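The streaming pattern, for example, keeps memory flat by processing one item at a time instead of materializing the whole dataset. A generator stands in here for a large file or network stream:

```python
def line_lengths_streaming(lines):
    """Yield one result per line without materializing the input."""
    for line in lines:
        yield len(line.rstrip("\n"))

# A generator stands in for a large file: memory use stays constant
# no matter how many rows the source produces.
source = (f"row-{i}\n" for i in range(100_000))
longest = max(line_lengths_streaming(source))
```

The equivalent list-based version (`[len(l) for l in all_lines]`) would hold every row in memory at once; the streaming form holds exactly one.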
Lazy loading defers work until the result is actually needed. It reduces startup time and memory usage but adds complexity and can cause unexpected latency later.
Where lazy loading helps:
- Startup paths that reference expensive resources most runs never use
- Large object graphs where only a few branches are typically accessed
- Optional features behind rarely-taken code paths
Where lazy loading hurts:
- Hot paths, where deferred initialization adds latency at the worst possible moment
- ORM relations accessed inside loops (a classic source of N+1 queries)
- Anything where predictable latency matters more than startup time
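The tradeoff above can be sketched with a small lazy-initialization wrapper (the factory and resource here are hypothetical placeholders for an expensive object such as a client or parser):

```python
class LazyResource:
    """Defer expensive construction until first use, then cache it."""

    def __init__(self, factory):
        self._factory = factory
        self._value = None
        self.initialized = False

    def get(self):
        if not self.initialized:
            self._value = self._factory()  # the expensive cost is paid here
            self.initialized = True
        return self._value

builds = []
resource = LazyResource(lambda: builds.append("built") or {"conn": "ready"})
before = resource.initialized   # nothing built yet: startup stays cheap
first = resource.get()          # construction happens now, on first use
second = resource.get()         # cached; the factory is not called again
```

Note the cost has moved, not disappeared: the first `get()` call absorbs the construction latency, which is exactly why lazy loading hurts on hot paths.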
Replace individual operations with batch alternatives wherever possible:
- One multi-row insert (`executemany`, bulk insert) instead of a statement per row
- A single `IN (...)` lookup instead of a fetch per key
- Pipelined or bulk API calls instead of one network round-trip per item
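The per-row versus batched contrast can be shown with SQLite's `executemany` (the table and row data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
rows = [(i, f"event-{i}") for i in range(5_000)]

# Per-row pattern: one statement execution per row
for row in rows:
    conn.execute("INSERT INTO events VALUES (?, ?)", row)
conn.execute("DELETE FROM events")

# Batched alternative: one executemany call for all rows
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```

Against a networked database the gap is much larger than in-process SQLite suggests, because batching also collapses thousands of round-trips into one.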
| Symptom | Likely Cause | First Investigation Step |
|---|---|---|
| Slow response times, low CPU | I/O waits (database, network, disk) | Profile I/O and check query logs |
| High CPU, normal response times | Inefficient algorithms or excessive computation | CPU profile to find hot functions |
| Growing memory over time | Memory leak (unreleased references, unbounded caches) | Heap dump comparison over time |
| Intermittent slowness under load | Resource contention (locks, connection pool exhaustion) | Check pool sizes and lock wait times |
| Fast locally, slow in production | Network latency, missing caches, different data volumes | Compare profiling data between environments |
| Reference | Contents |
|---|---|
| Caching Strategies | Cache layers, invalidation patterns, stampede prevention with multi-language examples |
| Database Optimization | Query optimization, N+1 prevention, connection pooling, batch operations with multi-language examples |
| Profiling Patterns | Profiling workflows, bottleneck signatures, performance budgets, load testing strategies |
| Situation | Recommended Skill |
|---|---|
| Performance issues caused by poor architecture | Install knowledge-virtuoso from krzysztofsurdy/code-virtuoso for clean architecture guidance |
| Need to refactor slow code paths | Install knowledge-virtuoso from krzysztofsurdy/code-virtuoso for refactoring techniques |
| API response time optimization | Install knowledge-virtuoso from krzysztofsurdy/code-virtuoso for API design principles |
| Database schema and query design | Install knowledge-virtuoso from krzysztofsurdy/code-virtuoso for database design guidance |