npx claudepluginhub parhumm/jaan-to --plugin jaan-to
Generate production-quality data model documentation from entity descriptions.
- `$JAAN_CONTEXT_DIR/tech.md` - Tech stack context (CRITICAL: determines database engine and patterns); sections `#current-stack`, `#constraints`, `#patterns`
- `$JAAN_CONTEXT_DIR/config.md` - Project configuration
- `$JAAN_TEMPLATES_DIR/jaan-to-backend-data-model.template.md` - Output template
- `$JAAN_LEARN_DIR/jaan-to-backend-data-model.learn.md` - Past lessons (loaded in Pre-Execution)
- `${CLAUDE_PLUGIN_ROOT}/docs/extending/language-protocol.md` - Language resolution protocol

Entities: $ARGUMENTS
Accepts any of: an entity list, a PRD reference, an existing schema, or a feature description.
If no input provided, ask: "What entities or features should the data model cover?"
MANDATORY — Read and execute ALL steps in: ${CLAUDE_PLUGIN_ROOT}/docs/extending/pre-execution-protocol.md
Skill name: backend-data-model
Execute: Step 0 (Init Guard) → A (Load Lessons) → B (Resolve Template) → C (Offer Template Seeding)
Also read tech context (CRITICAL for this skill):
$JAAN_CONTEXT_DIR/tech.md - Determines database engine, constraints, common patterns

Read and apply language protocol: ${CLAUDE_PLUGIN_ROOT}/docs/extending/language-protocol.md
Override field for this skill: language_backend-data-model
Language exception: Generated code output (variable names, code blocks, schemas, SQL, API specs) is NOT affected by this setting and remains in the project's programming language.
ultrathink
Use extended reasoning for:
Analyze the provided input to extract entities:
If entity list:
If PRD reference:
If existing schema:
If feature description:
Build initial understanding:
INPUT SUMMARY
─────────────
Type: {entity-list/prd/schema/description}
Entities: {list of identified entities}
Relationships: {implied relationships}
Constraints: {mentioned constraints}
Unknown: {areas needing clarification}
Ask up to 6 smart questions based on what's unclear from Step 1. Skip questions already answered by the input or tech.md.
1. Engine question (ask if not in tech.md): Use AskUserQuestion
2. Migration question (always ask): Use AskUserQuestion
3. Tenancy question (ask if not in tech.md constraints): Use AskUserQuestion
4. Delete question (ask if entities involve user data): Use AskUserQuestion
5. Retention question (ask if user data or compliance mentioned): Use AskUserQuestion
6. Depth question (always ask): Use AskUserQuestion
For each entity, apply constraint extraction heuristics:
Reference: See `${CLAUDE_PLUGIN_ROOT}/docs/extending/backend-data-model-reference.md` section "Constraint Extraction Patterns" for uniqueness detection, relationship mapping, CHECK constraint patterns, and NOT NULL defaults.
Critical multi-tenant rule: When multi-tenancy is enabled, every uniqueness constraint must include tenant_id — UNIQUE(tenant_id, email), never UNIQUE(email). This is the most common AI failure in schema generation.
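A minimal Postgres sketch of this rule (table and constraint names are illustrative):

```sql
-- Wrong: global uniqueness lets tenant A's email block tenant B.
-- ALTER TABLE users ADD CONSTRAINT uniq_users_email UNIQUE (email);

-- Right: uniqueness is scoped per tenant.
ALTER TABLE users
  ADD CONSTRAINT uniq_users_tenant_id_email UNIQUE (tenant_id, email);
```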
For each entity, determine:
| Attribute | Detail |
|---|---|
| Table name | Plural snake_case (e.g., order_items) |
| PK | bigint (GENERATED ALWAYS AS IDENTITY / AUTO_INCREMENT) or uuid |
| Columns | Name, type (engine-specific), nullable, default, constraints |
| Relationships | Cardinality, FK column, ON DELETE behavior |
| Indexes | Apply ESR rule for composites (Equality → Sort → Range) |
| Constraints | UNIQUE, CHECK, NOT NULL, FK |
For composite indexes, always order columns Equality → Sort → Range (ESR).
Example: `WHERE tenant_id = ? AND status = 'active' AND created_at > ?` → Index: `(tenant_id, status, created_at)`
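The worked query above maps to index DDL along these lines (Postgres syntax; table and index names are illustrative):

```sql
-- ESR: equality columns first (tenant_id, status), then the range column (created_at).
CREATE INDEX idx_orders_on_tenant_status_created
  ON orders (tenant_id, status, created_at);
```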
Present entity map:
ENTITY MAP
──────────
Entity: User
Table: users
PK: id (bigint)
Columns: email (varchar, NOT NULL, UNIQUE), name (varchar, NOT NULL), role (varchar, CHECK), bio (text, nullable), created_at, updated_at
Relations: 1:N → posts, 1:N → comments
Indexes: (email) UNIQUE, (created_at)
Migration: Greenfield — CREATE TABLE
Entity: Post
Table: posts
PK: id (bigint)
Columns: title (varchar, NOT NULL), body (text, NOT NULL), status (varchar, CHECK: draft/published), user_id (bigint, FK, NOT NULL), created_at, updated_at
Relations: N:1 → users, 1:N → comments
Indexes: (user_id), (status, created_at) — ESR: equality then range
Migration: Greenfield — CREATE TABLE
Plan cross-cutting patterns based on Step 2 decisions:
Reference: See `${CLAUDE_PLUGIN_ROOT}/docs/extending/backend-data-model-reference.md` section "Cross-Cutting Concern Patterns" for timestamps, soft deletes, multi-tenancy (incl. RLS template), enum strategy, PK strategy, and naming conventions.
Present the complete analysis summary:
DATA MODEL PLAN
═══════════════
SUMMARY
───────
Engine: {from tech.md or Step 2}
Migration: {Greenfield/Brownfield/Mixed}
Tenancy: {None/Shared+tenant_id/Schema-per-tenant/DB-per-tenant}
Deletes: {Soft/Hard/Archival/Mixed}
Retention: {None/GDPR/TTL/Custom}
Depth: {Production/MVP/Schema}
ENTITIES ({count})
──────────────────
| Entity | Table | Columns | Relationships | Indexes | Constraints |
|--------|-------|---------|---------------|---------|-------------|
| User | users | 6 | 1:N posts, 1:N comments | 3 | 2 UNIQUE, 1 CHECK |
| Post | posts | 7 | N:1 users, 1:N comments | 3 | 1 CHECK |
| ... | ... | ... | ... | ... | ... |
CROSS-CUTTING
─────────────
Timestamps: created_at + updated_at on all tables
Soft Deletes: {enabled/disabled} {+ partial unique indexes if enabled}
Multi-Tenancy: {strategy + tenant_id placement}
Enum Strategy: CHECK on VARCHAR (no native ENUM)
PK Strategy: {bigint/uuid}
Naming: plural snake_case tables, GitLab-style constraint naming
OUTPUT
──────
Folder: $JAAN_OUTPUTS_DIR/backend/data-model/{id}-{slug}/
File: {id}-{slug}.md
Use AskUserQuestion:
Do NOT proceed to Phase 2 without explicit approval.
Read template: $JAAN_TEMPLATES_DIR/jaan-to-backend-data-model.template.md
If tech stack needed, extract sections from tech.md:
`#current-stack`, `#constraints`, `#patterns`

Generate the document in this order:
1-2 sentences describing: entity count, database engine, key design decisions (tenancy, delete strategy), and migration approach.
Generate Mermaid erDiagram with all entities, relationships, and cardinality.
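A minimal sketch of such a diagram, using the User/Post/Comment entities from the entity-map example (cardinality in Mermaid crow's-foot notation):

```mermaid
erDiagram
    USERS ||--o{ POSTS : "has many"
    USERS ||--o{ COMMENTS : "has many"
    POSTS ||--o{ COMMENTS : "has many"
```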
For each entity, generate:
Column table:
| Column | Type | Nullable | Default | Constraints |
|---|---|---|---|---|
| id | BIGINT GENERATED ALWAYS AS IDENTITY | NO | — | PRIMARY KEY |
| email | VARCHAR(255) | NO | — | UNIQUE, NOT NULL |
| status | VARCHAR(20) | NO | 'pending' | CHECK (status IN (...)) |
| user_id | BIGINT | NO | — | FK → users.id ON DELETE CASCADE |
| created_at | TIMESTAMPTZ | NO | now() | — |
| updated_at | TIMESTAMPTZ | NO | now() | — |
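The sample rows above translate into DDL roughly like this (Postgres sketch; the example mixes columns from several entities into one illustrative table, and the CHECK values are assumptions):

```sql
CREATE TABLE posts (
    id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email       VARCHAR(255) NOT NULL UNIQUE,
    status      VARCHAR(20)  NOT NULL DEFAULT 'pending'
                CHECK (status IN ('pending', 'active', 'archived')),
    user_id     BIGINT       NOT NULL REFERENCES users (id) ON DELETE CASCADE,
    created_at  TIMESTAMPTZ  NOT NULL DEFAULT now(),
    updated_at  TIMESTAMPTZ  NOT NULL DEFAULT now()
);
```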
Reference: See `${CLAUDE_PLUGIN_ROOT}/docs/extending/backend-data-model-reference.md` section "Engine-Specific Type Rules" for PK, timestamp, JSON, and boolean type mappings per engine.
Indexes table:
| Name | Columns | Type | Rationale |
|---|---|---|---|
| idx_posts_on_user_id | (user_id) | B-tree | FK lookup performance |
| idx_posts_on_status_created | (status, created_at) | B-tree | ESR: equality then range |
Foreign Keys table:
| Column | References | ON DELETE | ON UPDATE |
|---|---|---|---|
| user_id | users.id | CASCADE | CASCADE |
Migration Notes (per table):
- Greenfield: `CREATE TABLE` statement
- Brownfield: `CREATE INDEX CONCURRENTLY`, `NOT VALID` + `VALIDATE CONSTRAINT`, `SET lock_timeout`

Document the patterns chosen in Step 4 with concrete implementation details.
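For Postgres brownfield changes, the safe operations named above look roughly like this (table, index, and constraint names are illustrative; note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block):

```sql
-- Fail fast instead of queueing behind long-running transactions.
SET lock_timeout = '5s';

-- Build the index without an exclusive table lock.
CREATE INDEX CONCURRENTLY idx_posts_on_user_id ON posts (user_id);

-- Add the FK without scanning existing rows, then validate separately;
-- validation takes a much lighter lock than a plain ADD CONSTRAINT.
ALTER TABLE posts
  ADD CONSTRAINT fk_posts_user_id FOREIGN KEY (user_id)
  REFERENCES users (id) NOT VALID;
ALTER TABLE posts VALIDATE CONSTRAINT fk_posts_user_id;
```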
For each composite index, show ESR rationale.
Reference: See `${CLAUDE_PLUGIN_ROOT}/docs/extending/backend-data-model-reference.md` section "Index Strategy Patterns" for engine-specific index types, multi-tenant indexing, partial indexes, and covering index patterns.
Per-table migration classification.
Reference: See `${CLAUDE_PLUGIN_ROOT}/docs/extending/backend-data-model-reference.md` section "Migration Safety Classification" for operation safety levels, methods, and brownfield zero-downtime patterns.
If GDPR: document deletion strategy (hard delete + anonymized audit, crypto-shredding, or separate PII tables). If TTL: document cleanup approach (pg_cron + batched DELETE, partition-based retention, MongoDB TTL indexes, DynamoDB TTL). If legal holds: note legal_holds table pattern.
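One way the pg_cron + batched DELETE option could be documented (job name, schedule, batch size, and 90-day retention window are all assumptions):

```sql
-- Nightly at 03:00: delete expired rows in bounded batches to limit
-- lock time and WAL churn; subsequent runs pick up the remainder.
SELECT cron.schedule(
  'purge-expired-events',
  '0 3 * * *',
  $$DELETE FROM events
    WHERE id IN (
      SELECT id FROM events
      WHERE created_at < now() - interval '90 days'
      LIMIT 10000
    )$$
);
```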
Apply 5-dimension scoring rubric. Score each dimension and compute weighted overall score.
Reference: See `${CLAUDE_PLUGIN_ROOT}/docs/extending/backend-data-model-reference.md` section "Quality Scorecard Rubric" for dimensions, weights, and check criteria.
Before preview, verify every item in the quality checklist. If any check fails, fix before preview.
Reference: See `${CLAUDE_PLUGIN_ROOT}/docs/extending/backend-data-model-reference.md` section "Quality Check Checklist" for structure, constraints, indexes, anti-patterns, and completeness checks.
Show the complete data model document.
Use AskUserQuestion:
If approved, set up the output structure:
source "${CLAUDE_PLUGIN_ROOT}/scripts/lib/id-generator.sh"
# Define subdomain directory
SUBDOMAIN_DIR="$JAAN_OUTPUTS_DIR/backend/data-model"
mkdir -p "$SUBDOMAIN_DIR"
# Generate next ID
NEXT_ID=$(generate_next_id "$SUBDOMAIN_DIR")
# Create folder and file paths (slug from entity/feature name)
slug="{lowercase-hyphenated-name-max-50-chars}"
OUTPUT_FOLDER="${SUBDOMAIN_DIR}/${NEXT_ID}-${slug}"
MAIN_FILE="${OUTPUT_FOLDER}/${NEXT_ID}-${slug}.md"
Output Configuration
- ID: {NEXT_ID}
- Folder: `$JAAN_OUTPUTS_DIR/backend/data-model/{NEXT_ID}-{slug}/`
- Main: `{NEXT_ID}-{slug}.md`
mkdir -p "$OUTPUT_FOLDER"
Write data model document to main file.
Update subdomain index:
source "${CLAUDE_PLUGIN_ROOT}/scripts/lib/index-updater.sh"
add_to_index \
"$SUBDOMAIN_DIR/README.md" \
"$NEXT_ID" \
"${NEXT_ID}-${slug}" \
"{Data Model Title}" \
"{1-2 sentence executive summary}"
✓ Data model written to: `$JAAN_OUTPUTS_DIR/backend/data-model/{NEXT_ID}-{slug}/{NEXT_ID}-{slug}.md`
✓ Index updated: `$JAAN_OUTPUTS_DIR/backend/data-model/README.md`
"Data model generated. Suggested next steps:"
- API contract: Generate OpenAPI spec from this data model: `/jaan-to:backend-api-contract "{entity-list}"`
- Task breakdown: Generate backend tasks from this data model: `/jaan-to:backend-task-breakdown "{prd-or-feature}"`
- Review: Have the team review constraint completeness and index strategy
Use AskUserQuestion:
Question: "Any feedback on the generated data model?"
Header: "Feedback"
Options:
Fix now: Update the output file, re-preview, re-write
Learn: Run /jaan-to:learn-add backend-data-model "{feedback}"
Both: Do both
- tech.md detection
- $JAAN_OUTPUTS_DIR path
- $JAAN_OUTPUTS_DIR/backend/data-model/{id}-{slug}/