Analyze an existing PostgreSQL database to identify tables that would benefit from conversion to TimescaleDB hypertables
/plugin marketplace add timescale/pg-aiguide
/plugin install pg@aiguide
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Identify tables that would benefit from TimescaleDB hypertable conversion. After identification, use the companion "migrate-postgres-tables-to-hypertables" skill for configuration and migration.
Performance gains: 90%+ compression, fast time-based queries, improved insert performance, efficient aggregations, continuous aggregates for materialization (dashboards, reports, analytics), automatic data management (retention, compression).
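For example, a continuous aggregate can pre-materialize a dashboard rollup (a sketch; it assumes a metrics hypertable, and the table and column names are illustrative):
-- Continuous aggregate: incrementally maintained daily rollup
CREATE MATERIALIZED VIEW metrics_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', timestamp) AS day,
       device_id,
       AVG(value) AS avg_value
FROM metrics
GROUP BY day, device_id;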
Best for insert-heavy patterns.
Requirements: large volumes (1M+ rows), time-based queries, infrequent updates to historical rows
-- Get all tables with row counts and insert/update patterns
WITH table_stats AS (
    SELECT
        schemaname,
        relname AS tablename,
        n_tup_ins AS total_inserts,
        n_tup_upd AS total_updates,
        n_tup_del AS total_deletes,
        n_live_tup AS live_rows,
        n_dead_tup AS dead_rows
    FROM pg_stat_user_tables
),
table_sizes AS (
    SELECT
        schemaname,
        tablename,
        pg_size_pretty(pg_total_relation_size(format('%I.%I', schemaname, tablename)::regclass)) AS total_size,
        pg_total_relation_size(format('%I.%I', schemaname, tablename)::regclass) AS total_size_bytes
    FROM pg_tables
    WHERE schemaname NOT IN ('information_schema', 'pg_catalog')
)
SELECT
    ts.schemaname, ts.tablename, ts.live_rows,
    tsize.total_size, tsize.total_size_bytes,
    ts.total_inserts, ts.total_updates, ts.total_deletes,
    ROUND(CASE WHEN ts.live_rows > 0
          THEN (ts.total_inserts::numeric / ts.live_rows) * 100
          ELSE 0 END, 2) AS insert_ratio_pct
FROM table_stats ts
JOIN table_sizes tsize
  ON ts.schemaname = tsize.schemaname AND ts.tablename = tsize.tablename
ORDER BY tsize.total_size_bytes DESC;
Look for: high insert counts relative to updates and deletes (a high insert_ratio_pct), large total size, and few dead rows, which together indicate append-only growth.
-- Identify common query dimensions
SELECT schemaname, tablename, indexname, indexdef
FROM pg_indexes
WHERE schemaname NOT IN ('information_schema', 'pg_catalog')
ORDER BY tablename, indexname;
Look for: indexes on time columns (created_at, event_time) and composite indexes pairing an entity column with a time column; both indicate time-oriented access, as in the examples below.
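Illustrative index definitions of the kind to watch for (table and index names are hypothetical):
-- Pure time index and entity-plus-time composite index
CREATE INDEX idx_events_time ON user_events (event_time DESC);
CREATE INDEX idx_events_user_time ON user_events (user_id, event_time DESC);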
-- Check availability
SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'pg_stat_statements');
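If it is not installed, it can usually be enabled like this (requires pg_stat_statements in shared_preload_libraries, a server restart, and superuser privileges):
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;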
-- Analyze expensive queries for candidate tables
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
WHERE query ILIKE '%your_table_name%'
ORDER BY total_exec_time DESC LIMIT 20;
✅ Good patterns: time-based WHERE clauses, entity filtering combined with time-based qualifiers, GROUP BY time_bucket, range queries over time
❌ Poor patterns: non-time lookups with no time-based qualifier in the same query (WHERE email = ...)
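A query matching the good patterns might look like this (table and column names are illustrative; time_bucket is TimescaleDB's bucketing function, so before conversion you would see date_trunc instead):
-- Entity filter plus time qualifier, aggregated into hourly buckets
SELECT device_id,
       time_bucket('1 hour', timestamp) AS bucket,
       AVG(value) AS avg_value
FROM metrics
WHERE timestamp >= NOW() - INTERVAL '7 days'
GROUP BY device_id, bucket
ORDER BY bucket;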
-- Check migration compatibility
SELECT conname, contype, pg_get_constraintdef(oid) as definition
FROM pg_constraint
WHERE conrelid = 'your_table_name'::regclass;
Compatibility: unique constraints and primary keys on a hypertable must include the partitioning column, and foreign keys from other tables that reference the candidate table need review before conversion.
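For example, a primary key on id alone would need to be widened to include the partition column (a hypothetical sketch, assuming the orders table is partitioned on created_at):
-- Hypertable unique constraints must include the partitioning column
ALTER TABLE orders DROP CONSTRAINT orders_pkey;
ALTER TABLE orders ADD PRIMARY KEY (id, created_at);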
✅ GOOD:
-- Append-only logging
INSERT INTO events (user_id, event_time, data) VALUES (...);
-- Time-series collection
INSERT INTO metrics (device_id, timestamp, value) VALUES (...);
-- Time-based queries
SELECT * FROM metrics WHERE timestamp >= NOW() - INTERVAL '24 hours';
-- Time aggregations
SELECT DATE_TRUNC('day', timestamp), COUNT(*) FROM metrics GROUP BY 1;
❌ POOR:
-- Frequent updates to historical records
UPDATE users SET email = ..., updated_at = NOW() WHERE id = ...;
-- Non-time lookups
SELECT * FROM users WHERE email = ...;
-- Small reference tables
SELECT * FROM countries ORDER BY name;
Sequential ID tables can be candidates if inserts are append-only and IDs grow monotonically over time:
CREATE TABLE orders (
id BIGSERIAL PRIMARY KEY, -- Can partition by ID
user_id BIGINT,
created_at TIMESTAMPTZ DEFAULT NOW() -- For sparse indexes
);
Note: For ID-based tables where there is also a time column (created_at, ordered_at, etc.),
you can partition by ID and use sparse indexes on the time column.
See the migrate-postgres-tables-to-hypertables skill for details.
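A minimal sketch of such a conversion, assuming TimescaleDB is installed (the 1M-id chunk interval is illustrative; add migrate_data => true if the table already holds data):
-- Partition orders by its BIGSERIAL id, roughly 1M ids per chunk
SELECT create_hypertable('orders', 'id', chunk_time_interval => 1000000);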
✅ Event/Log Tables (user_events, audit_logs)
CREATE TABLE user_events (
id BIGSERIAL PRIMARY KEY,
user_id BIGINT,
event_type TEXT,
event_time TIMESTAMPTZ DEFAULT NOW(),
metadata JSONB
);
-- Partition by id, segment by user_id, enable minmax sparse_index on event_time
✅ Sensor/IoT Data (sensor_readings, telemetry)
CREATE TABLE sensor_readings (
device_id TEXT,
timestamp TIMESTAMPTZ,
temperature DOUBLE PRECISION,
humidity DOUBLE PRECISION
);
-- Partition by timestamp, segment by device_id, minmax sparse indexes on temperature and humidity
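As a sketch of what that recommendation can translate to in TimescaleDB (policy tuning and sparse-index setup are covered by the companion skill; the 7-day compression threshold is illustrative):
SELECT create_hypertable('sensor_readings', 'timestamp');
ALTER TABLE sensor_readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby = 'timestamp DESC'
);
SELECT add_compression_policy('sensor_readings', INTERVAL '7 days');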
✅ Financial/Trading (stock_prices, transactions)
CREATE TABLE stock_prices (
symbol VARCHAR(10),
price_time TIMESTAMPTZ,
open_price DECIMAL,
close_price DECIMAL,
volume BIGINT
);
-- Partition by price_time, segment by symbol, minmax sparse indexes on open_price and close_price and volume
✅ System Metrics (monitoring_data)
CREATE TABLE system_metrics (
hostname TEXT,
metric_time TIMESTAMPTZ,
cpu_usage DOUBLE PRECISION,
memory_usage BIGINT
);
-- Partition by metric_time, segment by hostname, minmax sparse indexes on cpu_usage and memory_usage
❌ Reference Tables (countries, categories)
CREATE TABLE countries (
id SERIAL PRIMARY KEY,
name VARCHAR(100),
code CHAR(2)
);
-- Static data, no time component
❌ User Profiles (users, accounts)
CREATE TABLE users (
id BIGSERIAL PRIMARY KEY,
email VARCHAR(255),
created_at TIMESTAMPTZ,
updated_at TIMESTAMPTZ
);
-- Accessed by ID and frequently updated; timestamps exist, but the primary query dimension is id or email
❌ Settings/Config (user_settings)
CREATE TABLE user_settings (
user_id BIGINT PRIMARY KEY,
theme VARCHAR(20), -- Changes: light -> dark -> auto
language VARCHAR(10), -- Changes: en -> es -> fr
notifications JSONB, -- Frequent preference updates
updated_at TIMESTAMPTZ
);
-- Accessed by user_id and frequently updated; the timestamp is not the primary query dimension
For each candidate table, provide: schema and table name, live row count and total size, insert/update/delete ratios, the time or sequence column, the dominant query patterns observed, and any constraint-compatibility concerns.
Focus on insert-heavy patterns with time-based or sequential access. Tables scoring 8+ points are strong candidates for conversion.