Query optimization, indexing strategies, and database performance tuning for PostgreSQL and MySQL
Optimizes database queries and performance for PostgreSQL and MySQL using indexing strategies and query analysis.
/plugin marketplace add https://www.claudepluginhub.com/api/plugins/rohitg00-claude-code-toolkit/marketplace.json
/plugin install rohitg00-claude-code-toolkit@cpd-rohitg00-claude-code-toolkit
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Always run EXPLAIN ANALYZE before optimizing. Read the output bottom-up.
-- PostgreSQL
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT) SELECT ...;
-- MySQL
EXPLAIN ANALYZE SELECT ...;
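The same plan text can be scanned programmatically. A minimal sketch, assuming PostgreSQL's text-format output; `explain_query` and `find_red_flags` are hypothetical helper names, not part of any driver API:

```python
# Hypothetical helpers: wrap a query in EXPLAIN ANALYZE and flag common
# problems in the resulting text-format plan.
def explain_query(sql: str) -> str:
    """Prefix a SELECT with PostgreSQL's EXPLAIN options."""
    return f"EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT) {sql}"

def find_red_flags(plan_text: str) -> list[str]:
    """Scan plan output for patterns that usually warrant attention."""
    flags = []
    for line in plan_text.splitlines():
        if "Seq Scan" in line:
            flags.append(f"sequential scan: {line.strip()}")
        if "Rows Removed by Filter" in line:
            flags.append(f"filter discarding rows: {line.strip()}")
    return flags
```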
Key metrics to watch: estimated vs. actual row counts (a large gap means stale statistics; run ANALYZE), sequential scans on large tables, rows removed by filters, and buffers read from disk.
CREATE INDEX idx_users_email ON users (email);
CREATE INDEX idx_orders_user_date ON orders (user_id, created_at DESC);
Use for: equality, range queries, sorting. Column order matters in composite indexes: put equality columns first, then range/sort columns.
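The equality-first rule falls out of how a B-tree orders its keys. A sketch that models a composite index on (user_id, created_at) as a sorted list of tuples, with illustrative data; equality on the leading column narrows the scan to a contiguous range, and the range predicate then walks only that slice:

```python
import bisect

# A composite index modeled as a sorted list of (user_id, created_at) keys.
index = sorted([
    (1, "2025-01-05"), (2, "2025-01-02"), (1, "2025-02-10"),
    (3, "2025-01-01"), (1, "2025-03-01"), (2, "2025-02-20"),
])

def lookup(user_id, date_from):
    # Equality on user_id + range on created_at = one contiguous slice.
    lo = bisect.bisect_left(index, (user_id, date_from))
    hi = bisect.bisect_left(index, (user_id + 1, ""))
    return index[lo:hi]
```

Reversing the column order would scatter each user's rows across the whole structure, forcing a scan of every date in the range.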
CREATE INDEX idx_orders_pending ON orders (created_at)
WHERE status = 'pending';
Use when queries always filter on a specific condition. Dramatically smaller than full indexes.
CREATE INDEX idx_products_tags ON products USING GIN (tags);
CREATE INDEX idx_docs_search ON documents USING GIN (to_tsvector('english', content));
CREATE INDEX idx_locations_point ON locations USING GiST (coordinates);
CREATE INDEX idx_events_period ON events USING GiST (tsrange(start_at, end_at));
-- PostgreSQL
CREATE INDEX idx_users_email_name ON users (email) INCLUDE (name);
-- MySQL
CREATE INDEX idx_users_email_name ON users (email, name);
Symptom: one query fetches the parent rows, then one additional query runs per row to fetch its children.
# BAD: N+1
users = db.query(User).all()
for user in users:
    print(user.orders)  # triggers one query per user

# GOOD: eager load
users = db.query(User).options(joinedload(User.orders)).all()
// BAD: N+1
const users = await User.findAll();
for (const user of users) {
  const orders = await Order.findAll({ where: { userId: user.id } });
}
// GOOD: batch load
const users = await User.findAll({ include: [Order] });
Detection: enable query logging, count queries per request. More than 10 queries for a single endpoint is a red flag.
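The counting approach can be sketched with a plain counter; `QueryCounter` and the budget of 10 are illustrative, not any framework's API:

```python
# Count queries issued while serving one request and flag endpoints
# that exceed a budget (10 per the rule of thumb above).
QUERY_BUDGET = 10

class QueryCounter:
    def __init__(self):
        self.count = 0
    def record(self, sql: str):
        self.count += 1
    def over_budget(self) -> bool:
        return self.count > QUERY_BUDGET

counter = QueryCounter()
# An N+1 pattern: one parent query plus one query per parent row.
counter.record("SELECT * FROM users")
for user_id in range(15):
    counter.record(f"SELECT * FROM orders WHERE user_id = {user_id}")
```

In a real app, the `record` hook would hang off the driver's or ORM's query-logging event rather than being called by hand.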
Rule of thumb: pool_size = (core_count * 2) + disk_count
Typical web app: 10-20 connections per app instance
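The rule of thumb as a function; clamping the result to the typical 10-20 range is an assumption layered on the guidance above, not part of the formula:

```python
# pool_size = (core_count * 2) + disk_count, clamped to the 10-20
# range typical for a web app instance (the clamp is an assumption).
def suggested_pool_size(core_count: int, disk_count: int = 1) -> int:
    raw = core_count * 2 + disk_count
    return max(10, min(raw, 20))
```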
PostgreSQL:
Set idle_in_transaction_session_timeout = '30s' to reclaim connections stuck in open transactions, and monitor pg_stat_activity.
MySQL:
Size max_connections based on available RAM (each connection uses ~10MB), and monitor SHOW PROCESSLIST.
Route read-only SELECT queries to replicas:
# SQLAlchemy read replica routing
from sqlalchemy.orm import Session

# Assumes engines = {"primary": ..., "replica": ...} built with create_engine()
class RoutingSession(Session):
    def get_bind(self, mapper=None, clause=None, **kw):
        if self._flushing:  # pending writes must go to the primary
            return engines["primary"]
        return engines["replica"]
-- PostgreSQL
CREATE TABLE events (
    id bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL,
    data jsonb
) PARTITION BY RANGE (created_at);
CREATE TABLE events_2025_q1 PARTITION OF events
FOR VALUES FROM ('2025-01-01') TO ('2025-04-01');
CREATE TABLE events_2025_q2 PARTITION OF events
FOR VALUES FROM ('2025-04-01') TO ('2025-07-01');
CREATE TABLE sessions (
    id uuid PRIMARY KEY,
    user_id bigint NOT NULL
) PARTITION BY HASH (user_id);
CREATE TABLE sessions_0 PARTITION OF sessions FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE sessions_1 PARTITION OF sessions FOR VALUES WITH (MODULUS 4, REMAINDER 1);
Partition when tables exceed 50-100GB or when you need to drop old data quickly.
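Creating range partitions ahead of time is easy to script. A sketch that emits DDL matching the quarterly events example above; `quarterly_partition_ddl` is a hypothetical helper:

```python
from datetime import date

# Generate CREATE TABLE ... PARTITION OF DDL for one calendar quarter.
def quarterly_partition_ddl(table: str, year: int, quarter: int) -> str:
    starts = {1: (1, 4), 2: (4, 7), 3: (7, 10), 4: (10, 1)}
    from_month, to_month = starts[quarter]
    to_year = year + 1 if quarter == 4 else year  # Q4 rolls into January
    frm = date(year, from_month, 1).isoformat()
    to = date(to_year, to_month, 1).isoformat()
    return (
        f"CREATE TABLE {table}_{year}_q{quarter} PARTITION OF {table}\n"
        f"    FOR VALUES FROM ('{frm}') TO ('{to}');"
    )
```

Run it from a scheduled job so next quarter's partition exists before rows arrive.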
Always:
- Run EXPLAIN ANALYZE and read the plan
- Drop unused indexes (check idx_scan in pg_stat_user_indexes)
- Replace SELECT * with specific columns
- Add LIMIT to queries that only need a subset
- Use EXISTS instead of COUNT(*) > 0
- Batch INSERT/UPDATE operations (500-1000 rows per batch)
- Index columns used in WHERE clauses
- Log slow queries (log_min_duration_statement = 100)
Never:
- LIKE '%term%' on unindexed columns (use full-text search instead)
- ORDER BY RANDOM() (use TABLESAMPLE or application-level randomization)
- SELECT DISTINCT masking a join problem
- UPDATE/DELETE without a WHERE clause (always verify with SELECT first)
- OFFSET for deep pagination (use keyset/cursor pagination instead)
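Keyset pagination can be illustrated in miniature over an in-memory table; in SQL this corresponds to WHERE id > :last_seen ORDER BY id LIMIT :n, which skips past the cursor via the index instead of scanning and discarding OFFSET rows:

```python
# Illustrative in-memory table of 100 rows, ordered by id.
rows = [{"id": i, "name": f"user{i}"} for i in range(1, 101)]

def keyset_page(last_seen_id: int, page_size: int = 10):
    # Seek directly past the cursor; cost stays flat however deep you page.
    page = [r for r in rows if r["id"] > last_seen_id][:page_size]
    next_cursor = page[-1]["id"] if page else None
    return page, next_cursor
```

The client passes the returned cursor back on the next request; no page number or offset is ever computed.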