Auto-activate for AlloyDB in GCP configs or docs. Google AlloyDB expertise: PostgreSQL-compatible managed database on GCP. Produces AlloyDB cluster configurations, connection patterns, and columnar engine setups on GCP. Use when: provisioning AlloyDB clusters, configuring read pools, using columnar engine, Private Service Access networking, or migrating from Cloud SQL. Not for AlloyDB Omni (see alloydb-omni) or vanilla PostgreSQL without AlloyDB features.
AlloyDB is a fully managed, PostgreSQL-compatible database service on Google Cloud. It combines the familiarity of PostgreSQL with Google's storage and compute innovations for high performance and availability.
| Feature | AlloyDB | Cloud SQL for PostgreSQL |
|---|---|---|
| Storage | Disaggregated, log-based | Attached disk |
| Columnar engine | Built-in adaptive columnar cache | Not available |
| ML embeddings | Native Vertex AI integration | Manual setup |
| Read scaling | Read pool (auto-managed replicas) | Manual read replicas |
| Availability | 99.99% SLA (regional) | 99.95% SLA |
| Networking | Private IP only (PSA required) | Public or private IP |
| Action | Command |
|---|---|
| Create cluster | `gcloud alloydb clusters create NAME --region=REGION --network=NETWORK --password=PASS` |
| Create primary | `gcloud alloydb instances create NAME --cluster=CLUSTER --region=REGION --instance-type=PRIMARY --cpu-count=N` |
| Create read pool | `gcloud alloydb instances create NAME --cluster=CLUSTER --region=REGION --instance-type=READ_POOL --read-pool-node-count=N` |
| Connect via proxy | `./alloydb-auth-proxy "projects/P/locations/R/clusters/C/instances/I" --port=5432` |
| Enable columnar engine | `SELECT google_columnar_engine_add('table_name');` |
```bash
# From a GCE VM in the same VPC (private IP)
psql "host=ALLOYDB_IP dbname=postgres user=postgres sslmode=require"

# Via AlloyDB Auth Proxy (recommended for external access)
./alloydb-auth-proxy \
  "projects/PROJECT/locations/REGION/clusters/CLUSTER/instances/INSTANCE" \
  --port=5432
psql "host=127.0.0.1 port=5432 dbname=postgres user=postgres"
```
<workflow>
AlloyDB requires Private Service Access (PSA) before any cluster can be created. Allocate an IP range and create the VPC peering connection.
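The PSA steps can be sketched with gcloud; the range name `alloydb-psa-range` and the `/16` prefix length are illustrative, not required values:

```bash
# Allocate an IP range for Private Service Access (name and size are illustrative)
gcloud compute addresses create alloydb-psa-range \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=16 \
  --network=NETWORK

# Peer the VPC with Google's service network so AlloyDB can use the range
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=alloydb-psa-range \
  --network=NETWORK
```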
Create the cluster with gcloud alloydb clusters create, then add a primary instance. Choose CPU count based on workload (start with 4 vCPUs for small workloads).
For read-heavy workloads, add a read pool with --instance-type=READ_POOL. AlloyDB automatically manages the read replicas within the pool.
For analytical query patterns, enable the columnar engine on tables with SELECT google_columnar_engine_add('table'). Check g_columnar_recommended_columns for automatic recommendations.
Use the AlloyDB Auth Proxy for connections from outside the VPC. For applications within GCE/GKE on the same VPC, connect directly via private IP.
</workflow>

<guardrails>
- Store the initial `--password` from cluster creation in Secret Manager; there is no way to recover it later.
- Use `sslmode=require` in connection strings.
- Grant `roles/monitoring.viewer` and set alerts on CPU, connections, and replication lag before going to production.
- Validate queries with `EXPLAIN (ANALYZE, BUFFERS)` on a representative dataset before production deployment.

Before delivering configurations, verify:
- The cluster does not use the `default` network in production.
</guardrails>

<example>
Columnar engine setup for an analytics workload:
```sql
-- Enable the columnar engine on the orders table
SELECT google_columnar_engine_add('orders');

-- Check which columns AlloyDB recommends for columnar caching
SELECT table_name, column_name, estimated_benefit
FROM g_columnar_recommended_columns
ORDER BY estimated_benefit DESC;

-- Verify a query uses the columnar engine
EXPLAIN (ANALYZE) SELECT region, SUM(amount)
FROM orders
WHERE order_date >= '2025-01-01'
GROUP BY region;
-- Look for "Columnar Scan" in the plan output
```
Connection string for a Python application using the Auth Proxy:

```python
# Auth Proxy runs locally, forwarding to the AlloyDB private IP
DATABASE_URL = "postgresql+asyncpg://postgres:password@127.0.0.1:5432/mydb"
```
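As a minimal sketch, the DSN above can be assembled from configuration values rather than hard-coded; `alloydb_dsn` is a hypothetical helper, not part of any AlloyDB library:

```python
def alloydb_dsn(user: str, password: str, db: str,
                host: str = "127.0.0.1", port: int = 5432) -> str:
    """Build a SQLAlchemy+asyncpg DSN for a connection through the local Auth Proxy."""
    return f"postgresql+asyncpg://{user}:{password}@{host}:{port}/{db}"

# With the proxy listening on 127.0.0.1:5432:
DATABASE_URL = alloydb_dsn("postgres", "password", "mydb")
```

In production the password argument would come from Secret Manager rather than source code.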
</example>
AlloyDB metrics are available under alloydb.googleapis.com/database/postgresql/* in Cloud Monitoring. Enable Cloud Monitoring before production launch.
Key metrics to watch:
- CPU utilization
- Active connections relative to `max_connections` (200 on PostgreSQL 18)
- Replication lag

PromQL patterns (Cloud Monitoring / Google Managed Prometheus):
```promql
# CPU utilization
avg_over_time(alloydb_googleapis_com:database_postgresql_cpu_utilization[5m])

# Active connections vs capacity
alloydb_googleapis_com:database_postgresql_network_connections

# Replication lag
max by (instance_id)(alloydb_googleapis_com:database_postgresql_replication_replica_lag_seconds)
```
Required role: roles/monitoring.viewer. See Observability Reference for full PromQL patterns, alert policy examples, and dashboard recommendations.
Before promoting any query to production, validate with EXPLAIN (ANALYZE, BUFFERS). Monitor live workload via pg_stat_activity.
Quick patterns:
```sql
-- Active queries with duration
SELECT pid, now() - query_start AS duration, state, query
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY duration DESC;

-- Tables with bloat
SELECT relname, n_dead_tup, n_live_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC;
```
See Operations Reference for EXPLAIN ANALYZE interpretation, bloat detection, autovacuum tuning, invalid index detection, and security hardening.
Run alloydb-auth-proxy as a sidecar container alongside the application pod. The sidecar uses the pod's workload identity (requires roles/alloydb.client) and refreshes IAM tokens automatically. The application connects to 127.0.0.1:5432 with no credential management in the app layer.
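A minimal sidecar container entry might look like the following; the image tag, instance URI, and port are illustrative and should match your cluster:

```yaml
# Sidecar sketch for the application pod spec (values are illustrative)
- name: alloydb-auth-proxy
  image: gcr.io/alloydb-connectors/alloydb-auth-proxy:latest
  args:
    - "projects/PROJECT/locations/REGION/clusters/CLUSTER/instances/INSTANCE"
    - "--port=5432"
  securityContext:
    runAsNonRoot: true
```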
Store the database password in Secret Manager. On rotation: add a new secret version, update the AlloyDB user password via gcloud alloydb users set-password, perform a rolling restart with exponential backoff, then disable the old secret version.
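The rotation sequence can be sketched with gcloud and kubectl; the secret, user, and deployment names are illustrative:

```bash
# 1. Add the new password as a new secret version
gcloud secrets versions add alloydb-password --data-file=new_password.txt

# 2. Update the AlloyDB user to the new password
gcloud alloydb users set-password app_user \
  --cluster=CLUSTER --region=REGION \
  --password="$(cat new_password.txt)"

# 3. Rolling restart so pods pick up the new secret version
kubectl rollout restart deployment/APP_DEPLOYMENT

# 4. Disable the old secret version once all pods are healthy
gcloud secrets versions disable OLD_VERSION --secret=alloydb-password
```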
Set max_connections=200 for PostgreSQL 18 instances as the production baseline. For workloads exceeding 200 concurrent connections, add PgBouncer in transaction mode rather than raising the limit further.
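A minimal PgBouncer configuration in transaction mode might look like this; pool sizes and addresses are illustrative and should be tuned to the workload:

```ini
[databases]
; Forward to the local Auth Proxy
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 50
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
```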
See Operations Reference for the full Kubernetes sidecar spec, rotation runbook, and connection pooling guidance.
AlloyDB continuous backup enables PITR to any second within the retention window (default 14 days). Restoration creates a new cluster — the original cluster is unaffected.
```bash
gcloud alloydb clusters restore RESTORED_CLUSTER_ID \
  --region=REGION \
  --network=NETWORK \
  --source-cluster=projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID \
  --point-in-time="2025-06-15T14:30:00Z"
```
When the primary region is unavailable, promote the secondary cluster with:
```bash
gcloud alloydb clusters promote SECONDARY_CLUSTER_ID --region=SECONDARY_REGION
```
After promotion, update connection strings (via Secret Manager or environment config) to the promoted cluster endpoint. See Operations Reference for the full failover runbook and checklist.
For AlloyDB tooling in Gemini CLI (34 tools: cluster management, query analysis, IAM, backups, and more):
```bash
gemini extensions install https://github.com/gemini-cli-extensions/alloydb
```
For PromQL metric templates, alert policy generators, and dashboard scaffolding:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/alloydb-observability
```
For detailed guides and code examples, refer to the following documents in references/:
- references/features.md
- references/migration.md
- references/observability.md
- references/operations.md
- references/setup.md