Load PROACTIVELY when task involves database design, schemas, or data access. Use when user says "set up the database", "create a schema", "add a migration", "write a query", or "set up Prisma". Covers schema design and normalization, ORM setup (Prisma, Drizzle), migration workflows, connection pooling, query optimization, indexing strategies, seeding, and transaction patterns for PostgreSQL, MySQL, SQLite, and MongoDB.
Implements database schemas, ORM setup, migrations, and query optimization for PostgreSQL, MySQL, SQLite, and MongoDB.
/plugin marketplace add mgd34msu/goodvibes-plugin
/plugin install goodvibes@goodvibes-market

This skill inherits all available tools. When active, it can use any tool Claude has access to.

scripts/database-checklist.sh
references/orm-comparison.md
This skill guides you through implementing database layers in applications, from initial schema design to query optimization. It leverages GoodVibes precision tools and project analysis tools for type-safe, production-ready database implementations.
Use this skill when you need to:
Follow this sequence for database layer implementation:
Before implementing any database changes, understand the current state using the detect_stack analysis tool:
detect_stack:
project_root: "."
categories: ["database", "orm"]
This identifies:
Check project memory for database decisions:
precision_read:
files:
- path: ".goodvibes/memory/decisions.json"
- path: ".goodvibes/memory/patterns.json"
verbosity: minimal
Look for:
If database already exists, map the current schema:
get_database_schema:
project_root: "."
include_relations: true
include_indexes: true
This returns:
If starting fresh, consult the ORM comparison reference to select the appropriate technology stack.
See: references/orm-comparison.md for decision trees.
Key decision factors:
| Factor | Recommendation |
|---|---|
| Type safety priority | Prisma or Drizzle |
| Maximum SQL control | Kysely or Drizzle |
| Document database | Mongoose (MongoDB) |
| Serverless/edge | Drizzle with libSQL/Turso |
| Existing PostgreSQL | Prisma or Drizzle |
| Learning curve | Prisma (best DX) |
Record your decision in memory: after choosing an ORM, document the choice in .goodvibes/memory/decisions.json for future reference.
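A minimal entry might look like the following (the exact shape of decisions.json is an assumption; adapt the keys to whatever structure the memory file already uses):

```yaml
precision_write:
  files:
    - path: ".goodvibes/memory/decisions.json"
      content: |
        {
          "database": {
            "orm": "prisma",
            "provider": "postgresql",
            "rationale": "Type safety priority; team familiar with Prisma DX"
          }
        }
  verbosity: minimal
```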
Identify entities and relationships first:
Entities: User, Post, Comment, Category
Relationships:
- User 1:N Post (author)
- Post N:M Category (through PostCategory)
- Post 1:N Comment
- User 1:N Comment (author)
Create schema files using precision_write:
For Prisma:
precision_write:
files:
- path: "prisma/schema.prisma"
content: |
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
model User {
id String @id @default(cuid())
email String @unique
name String?
posts Post[]
comments Comment[]
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model Post {
id String @id @default(cuid())
title String
content String
published Boolean @default(false)
author User @relation(fields: [authorId], references: [id])
authorId String
categories Category[]
comments Comment[]
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
@@index([authorId])
@@index([published, createdAt])
}
model Category {
id String @id @default(cuid())
name String @unique
posts Post[]
}
model Comment {
id String @id @default(cuid())
content String
post Post @relation(fields: [postId], references: [id])
postId String
author User @relation(fields: [authorId], references: [id])
authorId String
createdAt DateTime @default(now())
@@index([postId])
@@index([authorId])
}
verbosity: minimal
For Drizzle:
precision_write:
files:
- path: "src/db/schema.ts"
content: |
import { pgTable, text, timestamp, boolean, index } from 'drizzle-orm/pg-core';
import { relations } from 'drizzle-orm';
export const users = pgTable('users', {
id: text('id').primaryKey().$defaultFn(() => crypto.randomUUID()),
email: text('email').notNull().unique(),
name: text('name'),
createdAt: timestamp('created_at').defaultNow().notNull(),
updatedAt: timestamp('updated_at').defaultNow().notNull(),
});
export const posts = pgTable('posts', {
id: text('id').primaryKey().$defaultFn(() => crypto.randomUUID()),
title: text('title').notNull(),
content: text('content').notNull(),
published: boolean('published').default(false).notNull(),
authorId: text('author_id').notNull().references(() => users.id),
createdAt: timestamp('created_at').defaultNow().notNull(),
updatedAt: timestamp('updated_at').defaultNow().notNull(),
}, (table) => ({
authorIdx: index('author_idx').on(table.authorId),
publishedCreatedIdx: index('published_created_idx').on(table.published, table.createdAt),
}));
export const usersRelations = relations(users, ({ many }) => ({
posts: many(posts),
}));
export const postsRelations = relations(posts, ({ one }) => ({
author: one(users, {
fields: [posts.authorId],
references: [users.id],
}),
}));
verbosity: minimal
Schema best practices:
- Use an appropriate ID strategy
- Add timestamps: createdAt and updatedAt for mutable entities, deletedAt for soft deletes
- Index strategically
- Plan for scale
Create environment configuration:
precision_write:
files:
- path: ".env.example"
content: |
# Database
DATABASE_URL="postgresql://user:password@localhost:5432/dbname"
# For Prisma with connection pooling
# DATABASE_URL="postgresql://user:password@localhost:5432/dbname?pgbouncer=true"
# DIRECT_URL="postgresql://user:password@localhost:5432/dbname"
mode: overwrite
verbosity: minimal
Create database client module:
For Prisma:
precision_write:
files:
- path: "src/lib/db.ts"
content: |
import { PrismaClient } from '@prisma/client';
const globalForPrisma = globalThis as unknown as {
prisma: PrismaClient | undefined;
};
export const db =
globalForPrisma.prisma ??
new PrismaClient({
log:
process.env.NODE_ENV === 'development'
? ['query', 'error', 'warn']
: ['error'],
});
if (process.env.NODE_ENV !== 'production') {
globalForPrisma.prisma = db;
}
verbosity: minimal
For Drizzle:
precision_write:
files:
- path: "src/lib/db.ts"
content: |
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';
import * as schema from '@/db/schema';
const connectionString = process.env.DATABASE_URL!;
const client = postgres(connectionString, {
max: process.env.NODE_ENV === 'production' ? 10 : 1,
});
export const db = drizzle(client, { schema });
verbosity: minimal
Use precision_exec to run migration commands with expectations:
For Prisma:
precision_exec:
commands:
- cmd: "npx prisma migrate dev --name init"
timeout_ms: 60000
expect:
exit_code: 0
# Note: Prisma outputs progress to stderr; this is expected behavior
- cmd: "npx prisma generate"
expect:
exit_code: 0
verbosity: standard
For Drizzle:
precision_exec:
commands:
- cmd: "npx drizzle-kit generate"
expect:
exit_code: 0
- cmd: "npx drizzle-kit push"
timeout_ms: 60000
expect:
exit_code: 0
verbosity: standard
Migration best practices:
Use the generate_types project tool to generate TypeScript types from your schema:
generate_types:
project_root: "."
source: "database"
output_path: "src/types/db.ts"
This creates type definitions for:
Verify type generation:
precision_exec:
commands:
- cmd: "npm run typecheck"
expect:
exit_code: 0
verbosity: minimal
Start with basic CRUD operations:
precision_write:
files:
- path: "src/db/queries/users.ts"
content: |
import { db } from '@/lib/db';
export async function createUser(data: { email: string; name?: string }) {
return db.user.create({
data,
});
}
export async function getUserById(id: string) {
return db.user.findUnique({
where: { id },
include: {
posts: true,
},
});
}
export async function updateUser(
id: string,
data: { email?: string; name?: string }
) {
return db.user.update({
where: { id },
data,
});
}
export async function deleteUser(id: string) {
return db.user.delete({
where: { id },
});
}
verbosity: minimal
Check for N+1 query patterns using project tools:
get_prisma_operations:
project_root: "."
analyze_performance: true
This identifies N+1 patterns and queries missing eager loading (include or select).

Optimize queries:
Use select to limit fields:
db.user.findMany({
select: { id: true, email: true }, // Don't fetch unused fields
});
Eager load relationships:
db.post.findMany({
include: { author: true }, // Prevents N+1
});
Use pagination:
db.post.findMany({
take: 20,
skip: (page - 1) * 20,
});
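The take/skip arithmetic above misbehaves for page 0 or negative input. A small helper (hypothetical, not part of any ORM API) can clamp the page number before it reaches the query:

```typescript
// Convert a 1-based page number into Prisma-style take/skip arguments.
// Pages below 1 are clamped to the first page.
function pageToRange(page: number, pageSize = 20) {
  const p = Math.max(1, Math.floor(page));
  return { take: pageSize, skip: (p - 1) * pageSize };
}
```

Usage: `db.post.findMany({ ...pageToRange(page) })`.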
Add database-level indexes and constraints:
@@index([userId, createdAt(sort: Desc)])
For multi-step operations, use transactions:
Prisma:
export async function createPostWithCategories(
postData: { title: string; content: string; authorId: string },
categoryIds: string[]
) {
return db.$transaction(async (tx) => {
const post = await tx.post.create({
data: {
...postData,
categories: {
connect: categoryIds.map((id) => ({ id })),
},
},
});
await tx.user.update({
where: { id: postData.authorId },
data: { updatedAt: new Date() },
});
return post;
});
}
Drizzle:
export async function createPostWithCategories(
postData: { title: string; content: string; authorId: string },
categoryIds: string[]
) {
return db.transaction(async (tx) => {
const [post] = await tx.insert(posts).values(postData).returning();
await tx.insert(postCategories).values(
categoryIds.map((categoryId) => ({
postId: post.id,
categoryId,
}))
);
return post;
});
}
Create seed script for local development:
precision_write:
files:
- path: "prisma/seed.ts"
content: |
import { PrismaClient } from '@prisma/client';
const prisma = new PrismaClient();
async function main() {
// Clear existing data
await prisma.comment.deleteMany();
await prisma.post.deleteMany();
await prisma.user.deleteMany();
await prisma.category.deleteMany();
// Create users
const alice = await prisma.user.create({
data: {
email: 'alice@example.com',
name: 'Alice',
},
});
const bob = await prisma.user.create({
data: {
email: 'bob@example.com',
name: 'Bob',
},
});
// Create categories
const tech = await prisma.category.create({
data: { name: 'Technology' },
});
const news = await prisma.category.create({
data: { name: 'News' },
});
// Create posts
await prisma.post.create({
data: {
title: 'First Post',
content: 'This is the first post',
published: true,
authorId: alice.id,
categories: {
connect: [{ id: tech.id }],
},
},
});
console.log('Database seeded successfully');
}
main()
.catch((e) => {
console.error(e);
process.exit(1);
})
.finally(async () => {
await prisma.$disconnect();
});
verbosity: minimal
Update package.json:
precision_edit:
edits:
- path: "package.json"
find: '"scripts": {'
hints:
near_line: 2
replace: |
"prisma": {
"seed": "tsx prisma/seed.ts"
},
"scripts": {
verbosity: minimal
Run the database checklist script:
./plugins/goodvibes/skills/outcome/database-layer/scripts/database-checklist.sh .
This validates:
Run type checking and tests:
precision_exec:
commands:
- cmd: "npm run typecheck"
expect:
exit_code: 0
- cmd: "npm run test -- db"
expect:
exit_code: 0
verbosity: minimal
Use query_database to verify data integrity:
query_database:
project_root: "."
query: "SELECT COUNT(*) FROM users;"
Add a deletedAt field and filter on it in queries:
model Post {
id String @id
deletedAt DateTime?
}
// Soft delete
await db.post.update({
where: { id },
data: { deletedAt: new Date() },
});
// Query only active records
await db.post.findMany({
where: { deletedAt: null },
});
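The `deletedAt: null` guard is easy to forget in individual queries. One option is a tiny helper (a hypothetical convenience, not an ORM feature) that merges the guard into any filter:

```typescript
// Merge a caller-supplied Prisma-style where filter with the soft-delete
// guard, so deleted rows are always excluded from results.
type Where = Record<string, unknown>;

function activeOnly(where: Where = {}): Where {
  return { ...where, deletedAt: null };
}
```

Then `db.post.findMany({ where: activeOnly({ published: true }) })` never returns soft-deleted rows.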
Use a version field to detect conflicting concurrent updates (optimistic locking):
model Post {
id String @id
version Int @default(0)
}
await db.post.update({
where: {
id: postId,
version: currentVersion,
},
data: {
title: newTitle,
version: { increment: 1 },
},
});
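When the version check fails, the update throws because no record matches, and callers typically retry after a fresh read. A generic retry wrapper is one way to sketch that pattern (the helper below is an illustration, not a Prisma API):

```typescript
// Retry an async operation a fixed number of times, rethrowing the last
// error if every attempt fails. Useful around optimistic-lock updates,
// where each retry should re-read the current version before writing.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```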
For serverless environments, use connection pooling:
# PgBouncer
DATABASE_URL="postgresql://user:password@localhost:6543/db?pgbouncer=true"
DIRECT_URL="postgresql://user:password@localhost:5432/db"
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
directUrl = env("DIRECT_URL")
}
PostgreSQL:
@@index([content(ops: raw("gin_trgm_ops"))], type: Gin)
await db.$queryRaw`
SELECT * FROM posts
WHERE to_tsvector('english', content) @@ to_tsquery('search terms')
`;
Troubleshooting common issues:
- N+1 queries: use get_prisma_operations to identify the location, then eager load with include or select; consider dataloader for complex cases
- Connection pool exhaustion: tune pool settings (e.g. ?pool_timeout=10) and check for a missing await (connections not released)
- Invalid schema: run npx prisma validate

After implementing the database layer:
For additional reference material and decision trees, see references/orm-comparison.md.