From claude-code-toolkit
Optimizes PostgreSQL and MySQL queries via EXPLAIN analysis, indexes (B-tree, GIN, partial), N+1 fixes, connection pooling, and read replicas.
npx claudepluginhub rohitg00/awesome-claude-code-toolkit

This skill uses the workspace's default tool permissions.
Always run `EXPLAIN ANALYZE` before optimizing. Read the output bottom-up.
-- PostgreSQL
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT) SELECT ...;
-- MySQL
EXPLAIN ANALYZE SELECT ...;
Key metrics to watch: actual time per node, estimated vs. actual row counts (large gaps mean stale statistics; run ANALYZE), sequential scans on large tables, and buffer hits vs. reads.

CREATE INDEX idx_users_email ON users (email);
CREATE INDEX idx_orders_user_date ON orders (user_id, created_at DESC);
Use for: equality, range queries, sorting. Column order matters in composite indexes: put equality columns first, then range/sort columns.
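The column-order rule can be observed directly in a planner. A minimal sketch, using SQLite's EXPLAIN QUERY PLAN as a stand-in for the PostgreSQL/MySQL planners (same principle, different output format); the schema mirrors the orders example above:

```python
import sqlite3

# Equality column (user_id) first, range/sort column (created_at) second,
# matching the composite-index rule above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, created_at TEXT);
    CREATE INDEX idx_orders_user_date ON orders (user_id, created_at DESC);
""")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM orders WHERE user_id = ? ORDER BY created_at DESC",
    (42,),
).fetchall()
detail = " ".join(row[-1] for row in plan)
print(detail)  # one index SEARCH covers both the WHERE and the ORDER BY
```

With the columns reversed, the planner would still need a separate sort step for this query.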
CREATE INDEX idx_orders_pending ON orders (created_at)
WHERE status = 'pending';
Use when queries always filter on a specific condition. Dramatically smaller than full indexes.
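SQLite also supports partial indexes, so the effect is easy to demo: a query that implies the index predicate can use the index, while one that doesn't falls back to a scan. A sketch with a hypothetical schema matching the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, created_at TEXT);
    CREATE INDEX idx_orders_pending ON orders (created_at) WHERE status = 'pending';
""")

# This query repeats the index predicate, so the partial index applies...
hit = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders "
    "WHERE status = 'pending' AND created_at > '2025-01-01'"
).fetchall()
# ...while a different status cannot use it.
miss = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders "
    "WHERE status = 'shipped' AND created_at > '2025-01-01'"
).fetchall()
hit_detail = " ".join(r[-1] for r in hit)
miss_detail = " ".join(r[-1] for r in miss)
print(hit_detail)
print(miss_detail)
```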
CREATE INDEX idx_products_tags ON products USING GIN (tags);
CREATE INDEX idx_docs_search ON documents USING GIN (to_tsvector('english', content));
CREATE INDEX idx_locations_point ON locations USING GiST (coordinates);
CREATE INDEX idx_events_period ON events USING GiST (tsrange(start_at, end_at));
-- PostgreSQL
CREATE INDEX idx_users_email_name ON users (email) INCLUDE (name);
-- MySQL
CREATE INDEX idx_users_email_name ON users (email, name);
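When every selected column lives in the index, the planner can skip the table entirely. SQLite labels this explicitly as a covering index, which makes it a convenient stand-in to illustrate the idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT);
    CREATE INDEX idx_users_email_name ON users (email, name);
""")

# Both selected columns are in the index, so no table lookup is needed.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT email, name FROM users WHERE email = ?",
    ("a@example.com",),
).fetchall()
detail = " ".join(r[-1] for r in plan)
print(detail)  # the plan reports a COVERING INDEX search
```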
Symptom: 1 query to fetch parent + N queries for each child.
# BAD: N+1
users = db.query(User).all()
for user in users:
print(user.orders) # triggers query per user
# GOOD: eager load
users = db.query(User).options(joinedload(User.orders)).all()
// BAD: N+1
const users = await User.findAll();
for (const user of users) {
  const orders = await Order.findAll({ where: { userId: user.id } });
}

// GOOD: batch load
const users = await User.findAll({ include: [Order] });
Detection: enable query logging, count queries per request. More than 10 queries for a single endpoint is a red flag.
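A minimal detection sketch, using SQLite's per-statement trace hook as a stand-in for ORM query logging (the schema is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO users (id) VALUES (1), (2), (3);
    INSERT INTO orders (user_id) VALUES (1), (2), (3);
""")

queries = []
conn.set_trace_callback(queries.append)  # fires once per executed statement

# N+1 pattern: one query for the parents, plus one per child fetch.
for (user_id,) in conn.execute("SELECT id FROM users").fetchall():
    conn.execute("SELECT id FROM orders WHERE user_id = ?", (user_id,)).fetchall()
n_plus_1 = len(queries)

# Batched: a single JOIN replaces all N child queries.
queries.clear()
conn.execute(
    "SELECT u.id, o.id FROM users u LEFT JOIN orders o ON o.user_id = u.id"
).fetchall()
batched = len(queries)
print(n_plus_1, batched)  # query count grows with row count vs. stays constant
```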
Rule of thumb: pool_size = (core_count * 2) + disk_count
Typical web app: 10-20 connections per app instance
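The rule of thumb above as a tiny helper (suggested_pool_size is a made-up name for illustration, not a library API):

```python
import os

def suggested_pool_size(core_count: int, disk_count: int = 1) -> int:
    # Rule of thumb from above: pool_size = (core_count * 2) + disk_count.
    return core_count * 2 + disk_count

print(suggested_pool_size(8, disk_count=2))    # 18 connections for 8 cores, 2 disks
print(suggested_pool_size(os.cpu_count() or 4))
```

Treat the result as a starting point for load testing, not a fixed answer.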
PostgreSQL:
- Set idle_in_transaction_session_timeout = '30s' to close abandoned transactions
- Monitor connections with pg_stat_activity

MySQL:
- Set max_connections based on available RAM (each connection uses ~10MB)
- Monitor connections with SHOW PROCESSLIST

Route read-only SELECT queries to replicas:

# SQLAlchemy read replica routing
class RoutingSession(Session):
    def get_bind(self, mapper=None, clause=None):
        # Writes and pending changes go to the primary; reads go to a replica.
        if self._flushing or self.dirty or self.new or self.deleted:
            return engines["primary"]
        return engines["replica"]
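The same routing idea as a framework-free sketch (names are illustrative, not a SQLAlchemy API): writes, and any reads after a write, go to the primary; everything else can go to a replica.

```python
# Statement verbs that mark the session as having written.
WRITE_VERBS = ("insert", "update", "delete", "create", "alter", "drop")

class Router:
    def __init__(self) -> None:
        self.dirty = False  # becomes True once this "session" has written

    def pick_engine(self, sql: str) -> str:
        if sql.strip().lower().startswith(WRITE_VERBS):
            self.dirty = True
        # Keep post-write reads on the primary to avoid replication lag.
        return "primary" if self.dirty else "replica"

router = Router()
print(router.pick_engine("SELECT * FROM users"))          # replica
print(router.pick_engine("UPDATE users SET name = 'x'"))  # primary
print(router.pick_engine("SELECT * FROM users"))          # primary (read-your-writes)
```

Routing reads back to the replica safely requires knowing the replication lag, which is why simple routers pin a session to the primary after its first write.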
-- PostgreSQL
CREATE TABLE events (
    id bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL,
    data jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2025_q1 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2025-04-01');
CREATE TABLE events_2025_q2 PARTITION OF events
    FOR VALUES FROM ('2025-04-01') TO ('2025-07-01');

CREATE TABLE sessions (
    id uuid NOT NULL,
    user_id bigint NOT NULL,
    -- a unique constraint on a partitioned table must include the partition key
    PRIMARY KEY (id, user_id)
) PARTITION BY HASH (user_id);

CREATE TABLE sessions_0 PARTITION OF sessions FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE sessions_1 PARTITION OF sessions FOR VALUES WITH (MODULUS 4, REMAINDER 1);
Partition when tables exceed 50-100GB or when you need to drop old data quickly.
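Quarterly range partitions follow a mechanical pattern, so teams often generate the DDL. A sketch (quarterly_partitions is a hypothetical helper) that emits statements matching the events_2025_q1/q2 examples above:

```python
from datetime import date

def quarterly_partitions(table: str, year: int) -> list[str]:
    # Quarter boundaries: Jan/Apr/Jul/Oct 1, with the next year's Jan 1
    # closing the final quarter (range partitions are upper-exclusive).
    starts = [date(year, m, 1) for m in (1, 4, 7, 10)]
    ends = starts[1:] + [date(year + 1, 1, 1)]
    return [
        f"CREATE TABLE {table}_{year}_q{i + 1} PARTITION OF {table} "
        f"FOR VALUES FROM ('{s}') TO ('{e}');"
        for i, (s, e) in enumerate(zip(starts, ends))
    ]

for ddl in quarterly_partitions("events", 2025):
    print(ddl)
```

Creating next year's partitions ahead of time (e.g. from a scheduled job) avoids insert failures when the calendar rolls over.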
Do:
- Run EXPLAIN ANALYZE and read the plan before changing anything
- Drop unused indexes (check idx_scan in pg_stat_user_indexes)
- Replace SELECT * with specific columns
- Add LIMIT to queries that only need a subset
- Use EXISTS instead of COUNT(*) > 0
- Batch INSERT/UPDATE operations (500-1000 rows per batch)
- Index columns used in WHERE clauses
- Log slow queries (log_min_duration_statement = 100)

Don't:
- LIKE '%term%' on unindexed columns (use full-text search instead)
- ORDER BY RANDOM() (use TABLESAMPLE or application-level randomization)
- SELECT DISTINCT masking a join problem
- Missing WHERE on UPDATE/DELETE (always verify with SELECT first)
- OFFSET for deep pagination (use keyset/cursor pagination instead)
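Keyset (cursor) pagination replaces OFFSET with a seek past the last-seen key, so the index does the skipping and cost stays flat at any depth. A runnable sketch on SQLite with a hypothetical items table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO items (id) VALUES (?)", [(i,) for i in range(1, 101)])

def page(after_id: int, size: int = 10) -> list[int]:
    # Seek predicate instead of OFFSET: WHERE id > last_seen ORDER BY id.
    rows = conn.execute(
        "SELECT id FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, size),
    ).fetchall()
    return [r[0] for r in rows]

first = page(0)
second = page(first[-1])  # the client passes back the last id it saw
print(first)   # ids 1..10
print(second)  # ids 11..20
```

The trade-off: keyset pagination needs a stable, indexed sort key and cannot jump to an arbitrary page number.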