From architect
Generate ORM schemas (Prisma, SQLAlchemy, Mongoose, Drizzle) from blueprint shared types and database definitions
npx claudepluginhub navraj007in/architecture-cowork-plugin --plugin architect

# /architect:generate-data-model

## Trigger
/architect:generate-data-model — generate schemas for all databases detected in the SDL (default).
/architect:generate-data-model [databases:X] — generate schema for one database only.
/architect:generate-data-model [databases:X,Y] — generate schemas for a named subset.
### Database names

Use the database type as the name:
| Name | What it generates |
|---|---|
| `postgresql` / `mysql` / `sqlite` | Relational ORM schema (Prisma, Drizzle, SQLAlchemy, etc.) + migrations |
| `mongodb` | Mongoose / Motor document schemas |
| `redis` | Key schema definitions + TTL patterns |
| `elasticsearch` / `typesense` | Index mappings and search schema |
Examples:
/architect:generate-data-model [databases:postgresql]
/architect:generate-data-model [databases:postgresql,redis]
/architect:generate-data-model [databases:mongodb] [non_interactive:true]
When a [databases:...] tag is present, generate schemas only for the named databases and skip all others. When absent, generate for all databases found in the SDL.
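The tag handling above can be sketched as a small parser. This is a minimal sketch; `parseDatabasesTag` is a hypothetical helper name, not part of the plugin's actual implementation.

```typescript
// Sketch: parse an optional [databases:X,Y] tag from the command invocation.
// Returns null when the tag is absent (meaning: generate for all databases).
function parseDatabasesTag(invocation: string): string[] | null {
  const match = invocation.match(/\[databases:([^\]]+)\]/);
  if (!match) return null; // tag absent: use every database found in the SDL
  return match[1].split(",").map((name) => name.trim().toLowerCase());
}

console.log(parseDatabasesTag("/architect:generate-data-model [databases:postgresql,redis]"));
// → ["postgresql", "redis"]
console.log(parseDatabasesTag("/architect:generate-data-model"));
// → null
```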
After generating a blueprint with /architect:blueprint, this command takes the shared types (deliverable 4d) and database definitions from the manifest and generates real, working ORM schemas. Turns type stubs into Prisma schemas, SQLAlchemy models, Mongoose schemas, or Drizzle tables — with relationships, indexes, enums, and seed data.
| Phase | Steps |
|---|---|
| Setup | Step 1 · Step 2 · Step 3 |
| Generation | Step 3.5 · Step 3.6 · Step 3.7 |
| Completion | Step 4 · Step 5 · Step 5.5 |
First, check for architecture-output/_state.json. If it exists, read it in full and extract:
- `project.name` → product name for display
- `tech_stack` → ORM, database type, and framework — use to pre-fill the Step 2 ORM question and skip asking if already obvious (e.g. `tech_stack.backend` includes "Prisma" → pre-select Prisma)
- `entities` → if present, these are already-generated entity summaries; do NOT regenerate ORM schemas that duplicate them — but if the user is running this command, they want fresh schema files regardless
- `components` → component directory names, for knowing where to place schema files

Then, check whether a blueprint with shared types (deliverable 4d) and database definitions exists earlier in the conversation.
Detect mode — CRITICAL:
Check for evidence of a scaffolded codebase:
- Does `solution.sdl.yaml` exist with a populated `domain.entities[]` section?
- Do `package.json` or equivalent build files exist?

If NO scaffold evidence (ideation/pre-blueprint mode):
→ Switch to conceptual mode — generate architecture-output/data-model.md only (entity table, relationships, ER diagram). Skip all ORM file generation entirely.
→ Do NOT write prisma/schema.prisma, db/schema.ts, migration files, or seed scripts.
→ Proceed directly to the conceptual output step below — skip Steps 2, 3, 3.5, 3.6, 3.7.
Conceptual mode output (write to architecture-output/data-model.md):
- End with a note that `/architect:scaffold` sets up your project structure.
- Write `entities` to `_state.json` with field names extracted from the conceptual model.
- Emit `[DATA_MODEL_DONE]`, log activity, and stop.

If scaffold evidence exists (post-scaffold mode):
→ Continue with Steps 2 onward (full ORM generation).
If no shared types exist and no _state.json with project context, respond:
"I need shared types and database definitions to generate from. Run `/architect:blueprint` first to define your data model, then come back here to generate ORM schemas."
Ask the user:
"Which ORM should I generate for?"
- Prisma (default for TypeScript) — Type-safe PostgreSQL/MySQL ORM
- Drizzle — Lightweight TypeScript ORM
- SQLAlchemy (default for Python) — Python SQL toolkit
- Mongoose — MongoDB ODM for Node.js
"Which project should I add the schema to?" (path to scaffolded project)
"Should I run the initial migration?" (yes/no — requires database connection)
Before delegating, analyze the data architecture:
Read the data: section from SDL to extract all database definitions
data:
  databases:
    - name: primary
      type: PostgreSQL
      entities: [User, Order, Product, Review]
    - name: sessions
      type: Redis
      entities: [Session, Cache]
    - name: documents
      type: MongoDB
      entities: [Document, Comment, Attachment]
Map entities to databases — ensure every entity has a clear home
Plan schema files per database:

- `schema-relational.prisma` (or `schema.prisma`)
- `schema-document.prisma` (if using Prisma) or `schema-document.json` (schema validation)
- `schema-cache.ts` (type definitions only; no schema file)

Pass the following to the data-model-generator agent:
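The entity-to-database mapping step can be sketched as a validation pass over the SDL data. This is a sketch only; the interface shape mirrors the YAML example above, and `mapEntitiesToDatabases` is a hypothetical helper name.

```typescript
// Sketch: ensure every entity declared in the SDL has exactly one home database.
interface DatabaseDef {
  name: string;
  type: string;
  entities: string[];
}

function mapEntitiesToDatabases(databases: DatabaseDef[]): Map<string, string> {
  const home = new Map<string, string>(); // entity name → database name
  for (const db of databases) {
    for (const entity of db.entities) {
      if (home.has(entity)) {
        // An entity claimed by two databases has no clear home: fail loudly.
        throw new Error(`Entity ${entity} claimed by both ${home.get(entity)} and ${db.name}`);
      }
      home.set(entity, db.name);
    }
  }
  return home;
}

const homes = mapEntitiesToDatabases([
  { name: "primary", type: "PostgreSQL", entities: ["User", "Order"] },
  { name: "sessions", type: "Redis", entities: ["Session"] },
]);
console.log(homes.get("Order")); // → "primary"
```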
After ORM schemas are generated, create native creation scripts for each database type:
For each database in the solution:
PostgreSQL (Relational):

`db/schema-relational.sql` with CREATE TABLE, CREATE INDEX, constraints:

CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email VARCHAR(255) UNIQUE NOT NULL,
role VARCHAR(50) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_users_email ON users(email);
1a. SQL Server (Relational):
`db/schema-sqlserver.sql` with T-SQL CREATE TABLE, CREATE INDEX, constraints:

CREATE TABLE users (
id UNIQUEIDENTIFIER PRIMARY KEY DEFAULT NEWID(),
email VARCHAR(255) UNIQUE NOT NULL,
role VARCHAR(50) NOT NULL,
created_at DATETIME DEFAULT GETUTCDATE(),
updated_at DATETIME DEFAULT GETUTCDATE()
);
CREATE INDEX idx_users_email ON users(email);
MongoDB (Document/NoSQL):
`db/schema-document.js` with collection creation and validators:

db.createCollection("documents", {
validator: {
$jsonSchema: {
bsonType: "object",
required: ["title", "owner_id"],
properties: {
_id: { bsonType: "objectId" },
title: { bsonType: "string" },
owner_id: { bsonType: "objectId" },
content: { bsonType: "string" },
created_at: { bsonType: "date" }
}
}
}
});
db.documents.createIndex({ owner_id: 1, created_at: -1 });
Redis (Cache/Sessions):
`db/schema-cache.ts` with TypeScript type definitions and key patterns:

// Cache schema — no DDL needed, but document key patterns:
// sessions:{sessionId} → Session object (TTL: 24h)
// user-cache:{userId} → User object (TTL: 1h)
// rate-limit:{userId} → counter (TTL: 1m)
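The documented key patterns translate naturally into typed key builders that the cache schema file could export. A minimal sketch (helper names are assumed, not prescribed by this command):

```typescript
// Sketch: typed Redis key builders mirroring the documented patterns and TTLs.
// TTLs are expressed in seconds to match typical Redis EXPIRE usage.
const TTL_SECONDS = {
  session: 24 * 60 * 60, // 24h
  userCache: 60 * 60,    // 1h
  rateLimit: 60,         // 1m
} as const;

const keys = {
  session: (sessionId: string) => `sessions:${sessionId}`,
  userCache: (userId: string) => `user-cache:${userId}`,
  rateLimit: (userId: string) => `rate-limit:${userId}`,
};

console.log(keys.session("abc123"));  // → "sessions:abc123"
console.log(TTL_SECONDS.rateLimit);   // → 60
```

Centralizing the patterns this way keeps producers and consumers of cache keys from drifting apart, which is the point of documenting a key schema at all.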
Elasticsearch / Search Indexes:
`db/schema-search.json` with index mappings

Prisma Migrations (if using Prisma as the ORM):

`prisma/migrations/001_initial/migration.sql` with DDL statements

Scripts are grouped in the db/ directory:
db/
├── schema-index.md ← Lists all databases and their entities
├── schema-relational.sql ← PostgreSQL/MySQL tables
├── schema-sqlserver.sql ← SQL Server (T-SQL) tables
├── schema-document.js ← MongoDB collections
├── schema-cache.ts ← Redis keys and patterns
├── schema-search.json ← Elasticsearch mappings
└── seed-data.sql ← (Optional) sample inserts for testing
📦 MIGRATION GENERATION: ORM-specific migration files for production schema evolution
After generating schema files, create ORM-specific migration scripts. These are used to evolve the database schema safely over time in development, staging, and production environments.
For each ORM in use:
Prisma (TypeScript/Node.js):
- `prisma/schema.prisma` with datasource, generator, and all model definitions
- `prisma/migrations/001_initial/migration.sql`
- `package.json` scripts:
{
"scripts": {
"db:migrate:dev": "prisma migrate dev",
"db:migrate:deploy": "prisma migrate deploy",
"db:generate": "prisma generate",
"db:push": "prisma db push --skip-generate"
}
}
prisma/schema.prisma:
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
}
model User {
id Int @id @default(autoincrement())
email String @unique
name String
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
Alembic (Python/SQLAlchemy):
- `alembic/` directory with config
- `alembic.ini` with database URL placeholder
- `alembic/versions/001_initial.py`
- `Makefile` or `pyproject.toml`:
db-migrate-dev:
	alembic upgrade head

db-migrate-new:
	alembic revision --autogenerate -m "migration description"

alembic/versions/001_initial.py:
from alembic import op
import sqlalchemy as sa
revision = '001'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
    op.create_table(
        'users',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('email', sa.String(255), unique=True),
        sa.Column('name', sa.String(255)),
        sa.Column('created_at', sa.DateTime(), default=sa.func.now()),
        sa.PrimaryKeyConstraint('id')
    )
    op.create_index('idx_users_email', 'users', ['email'])

def downgrade():
    op.drop_index('idx_users_email')
    op.drop_table('users')
Drizzle (TypeScript):
- `drizzle/` directory with schema files
- `drizzle.config.ts` with database connection
- `drizzle/0001_initial.sql`
- `package.json`:
{
"scripts": {
"db:generate": "drizzle-kit generate:pg",
"db:migrate": "drizzle-kit migrate"
}
}
EF Core (.NET):
- `Data/ApplicationDbContext.cs`
- `Data/Migrations/001_Initial.cs`
- `.csproj`:
<ItemGroup>
<DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
</ItemGroup>
dotnet ef database update
dotnet ef migrations add AddUsersTable
Important header in all migration files:
-- WARNING: Verify this migration before running in production.
-- Test in staging environment first.
-- Backup database before applying to production.
-- Generated by /architect:generate-data-model
Directory structure after migrations:
<project>/
├── prisma/ (Prisma)
│ ├── schema.prisma
│ └── migrations/
│ └── 001_initial/migration.sql
├── alembic/ (SQLAlchemy)
│ ├── alembic.ini
│ ├── env.py
│ └── versions/
│ └── 001_initial.py
├── drizzle/ (Drizzle)
│ ├── drizzle.config.ts
│ ├── 0001_initial.sql
│ └── migration.ts
├── Data/Migrations/ (.NET EF Core)
│ └── 001_Initial.cs
└── db/ (Reference scripts)
├── schema-index.md
└── schema-*.sql/js/ts
Data model generated for multi-database architecture!
Databases & Models:
1. PostgreSQL (Relational)
| Model | Fields | Indexes |
|-------|--------|---------|
| User | 6 | email (unique) |
| Order | 7 | userId, status |
| Product | 8 | sellerId, status |
Total: 3 entities, 3 tables
2. SQL Server (Relational)
| Table | Columns | Indexes |
|-------|---------|---------|
| User | 6 | email (unique) |
| Order | 7 | userId, status |
Total: 2 entities, 2 tables
3. MongoDB (Document)
| Collection | Validator | Indexes |
|-----------|-----------|---------|
| Document | 5 fields | owner_id, created_at |
| Comment | 4 fields | document_id |
Total: 2 entities, 2 collections
4. Redis (Cache)
| Key Pattern | TTL | Purpose |
|------------|-----|---------|
| sessions:{id} | 24h | Session storage |
| user-cache:{id} | 1h | User data cache |
Total: 2 cache schemas
Creation scripts generated:
✅ db/schema-relational.sql — PostgreSQL/MySQL DDL (15 statements)
✅ db/schema-sqlserver.sql — SQL Server T-SQL DDL (12 statements)
✅ db/schema-document.js — MongoDB validators + indexes
✅ db/schema-cache.ts — Redis key patterns and TTLs
✅ db/schema-index.md — Entity → database mapping guide
✅ prisma/migrations/001_initial/migration.sql — Prisma migration
Enums: Role (3), OrderStatus (5)
Seed file: db/seed-data.sql (sample inserts)
Next: Run the appropriate script when you set up each database:
• PostgreSQL: psql -U postgres -d mydb < db/schema-relational.sql
• MongoDB: mongosh < db/schema-document.js
• Redis: Manual key patterns (see db/schema-cache.ts for documentation)
After generating ORM schemas, update architecture-output/_state.json with a compact entity summary and database info:
Read `architecture-output/_state.json` (or start with `{}`), merge the `entities` array, and write back:

{
"entities": [
{ "name": "User", "fields": ["id", "email", "name", "role", "createdAt", "updatedAt"], "owner": "<component>", "database": "postgresql" },
{ "name": "Order", "fields": ["id", "userId", "total", "status", "createdAt", "updatedAt"], "owner": "<component>", "database": "sqlserver" },
{ "name": "Document", "fields": ["_id", "title", "ownerId", "content", "createdAt"], "owner": "<component>", "database": "mongodb" }
],
"databases": [
{ "type": "postgresql", "entity_count": 1, "script": "db/schema-relational.sql" },
{ "type": "sqlserver", "entity_count": 1, "script": "db/schema-sqlserver.sql" },
{ "type": "mongodb", "entity_count": 1, "script": "db/schema-document.js" },
{ "type": "redis", "entity_count": 2, "script": "db/schema-cache.ts" }
]
}
This allows scaffold-component to know entity shapes and which database they live in.
After updating _state.json, silently probe both Confluence and Notion to check which (if any) is connected.
Check Confluence — attempt search_content with a lightweight query (e.g. query: "test", limit: 1):
- `artifact: "data-model"`, `projectName`, `spaceKey`, `parentPageId`, `projectDir`

Check Notion — attempt `notion_search` with `query: ""`, `page_size: 1`:

- `artifact: "data-model"`, `projectName`, `parentPageId` or `databaseId`, `projectDir`

If neither MCP server is connected, skip silently.
Append one line to architecture-output/_activity.jsonl:
{"ts":"<ISO-8601>","phase":"generate-data-model","outcome":"completed","files":["db/schema-relational.sql","db/schema-sqlserver.sql","db/schema-document.js","db/schema-cache.ts","db/schema-index.md"],"databases":["postgresql","sqlserver","mongodb","redis"],"entityCount":9,"summary":"Data model generated: 9 entities across 4 databases (PostgreSQL, SQL Server, MongoDB, Redis). Creation scripts ready in db/ directory."}
Include:
- `files`: list all generated schema and script files
- `databases`: array of database types (e.g., ["postgresql", "mongodb", "redis"])
- `entityCount`: total entities across all databases
- `summary`: one sentence covering all databases, entity count, and status

Emit the completion marker:
[DATA_MODEL_DONE]
This ensures the generate-data-model phase is marked as complete in the project state.
If SDL is missing or does not have a data: section:
"I need an SDL with a data model to generate schemas from. Run `/architect:blueprint` first, then come back here."
If an entity in SDL has no fields or invalid field types:
Record a warning such as "entity_X_malformed" and generate a placeholder. Example stub:
// TODO: Verify entity definition in SDL
// Entity: {name} has no fields defined
export interface {name} {
id: string;
// Add fields based on business requirements
}
If <component>/db/ directory cannot be created due to permissions:
If schema files already exist for a database:
- `schema.prisma` or `db/schema-*.sql` exists

- Read the `data:` section from SDL to identify ALL databases (relational, NoSQL, cache, search)
- Map entities via `data.databases[].entities`
- Generate `schema.prisma` or raw SQL/T-SQL
- Group scripts in the `db/` directory:
  - `db/schema-relational.sql` — PostgreSQL/MySQL DDL
  - `db/schema-sqlserver.sql` — SQL Server T-SQL DDL
  - `db/schema-{database-type}.js` or `.ts` — native validator/definition syntax
  - `db/schema-index.md` — lists all databases and their entities
- PostgreSQL DDL goes in `schema-relational.sql`, SQL Server T-SQL in `schema-sqlserver.sql`, MongoDB validators in `schema-document.js`
- Seed file (`db/seed-data.sql` or equivalent) with realistic test data
- Soft deletes: `deletedAt DateTime?` field + index for relational; `deletedAt` field + query filter for document stores
- `domain.entities[]` in SDL is the authoritative entity inventory. The `data:` section defines storage infrastructure (which database types exist), not entity names. For field details, cross-reference the manifest and any existing migration files
- `db/schema-index.md` lists which entities are in which database and which service owns them
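The soft-delete convention noted above (a `deletedAt` field plus a default query filter for document stores) can be sketched as a shared filter helper. This is a sketch only: `notDeleted` is a hypothetical helper, and the `where`-object shape is assumed, in the style of Prisma/Mongoose query filters.

```typescript
// Sketch: exclude soft-deleted records by default by layering
// `deletedAt: null` onto any caller-supplied filter.
type Where = Record<string, unknown>;

const notDeleted = (where: Where = {}): Where => ({ ...where, deletedAt: null });

console.log(JSON.stringify(notDeleted({ role: "admin" })));
// → {"role":"admin","deletedAt":null}
```

Routing every read through one helper like this is what makes a soft-delete scheme safe: no query path can accidentally surface deleted records.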