A specialized AI agent that performs comprehensive database schema reflection, analyzes existing database structures, and generates optimized SQLAlchemy model definitions with proper relationships, constraints, and performance optimizations.
Generates optimized SQLAlchemy models from database schemas with performance analysis and relationship mapping.
```bash
/plugin marketplace add kivo360/claude-toolbelt
/plugin install asyncpg-to-sqlalchemy-converter@claude-toolbelt
```
For generating models from existing databases:
```bash
# Reflect entire database
/agent:schema-reflector reflect --connection-string $DATABASE_URL --output ./models/

# Reflect specific schema
/agent:schema-reflector reflect --schema public --output ./models/base.py

# Reflect with Supabase optimizations
/agent:schema-reflector reflect --supabase --rls-aware --output ./models/supabase.py
```
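Under the hood, this kind of reflection maps onto SQLAlchemy's own `MetaData.reflect()`. A minimal sketch of the idea, using an in-memory SQLite database as a stand-in for the real `$DATABASE_URL` connection:

```python
from sqlalchemy import MetaData, create_engine, text

# In-memory SQLite stands in for the real $DATABASE_URL connection
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, email VARCHAR(255) NOT NULL)"
    ))

# Reflect the live schema into SQLAlchemy metadata objects
metadata = MetaData()
metadata.reflect(bind=engine)

columns = [c.name for c in metadata.tables["users"].columns]
```

From the reflected `Table` objects, column types, constraints, and foreign keys can then be turned into model source code.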
For updating existing models when schema changes:
```bash
# Update existing models
/agent:schema-reflector update --existing-models ./models/ --connection-string $DATABASE_URL

# Generate migration scripts
/agent:schema-reflector generate-migration --from-schema ./current_schema.json --to-schema ./new_schema.json
```
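Migration generation presumably works by diffing two schema snapshots. A toy sketch of that idea, assuming each JSON snapshot maps table names to `{column: type}` dicts (the real snapshot format is not specified here, and `diff_schemas` is an illustrative helper, not the agent's API):

```python
def diff_schemas(old, new):
    """Return (op, table, column, type) tuples for columns added in `new`."""
    changes = []
    for table, cols in new.items():
        old_cols = old.get(table, {})
        for col, col_type in cols.items():
            if col not in old_cols:
                changes.append(("add_column", table, col, col_type))
    return changes

current = {"users": {"id": "uuid", "email": "varchar(255)"}}
target = {"users": {"id": "uuid", "email": "varchar(255)", "last_login": "timestamptz"}}
changes = diff_schemas(current, target)
```

Each change tuple would then be rendered into an Alembic operation such as `op.add_column(...)`.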
For performance tuning and optimization:
```bash
# Analyze performance issues
/agent:schema-reflector analyze-performance --connection-string $DATABASE_URL --report

# Suggest optimizations
/agent:schema-reflector optimize --connection-string $DATABASE_URL --recommendations

# Generate indexing strategy
/agent:schema-reflector indexing-strategy --query-log ./slow_queries.log
```
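An indexing strategy can be derived by counting which table/column pairs appear most often in slow-query predicates. A simplified sketch, assuming a plain-text log of `SELECT ... FROM <table> WHERE <column> = ...` statements (the actual log format and thresholds are assumptions):

```python
import re
from collections import Counter

def suggest_indexes(log_lines, min_hits=2):
    """Count `FROM <table> WHERE <col> = ...` patterns and suggest indexes."""
    pattern = re.compile(r"FROM\s+(\w+)\s+WHERE\s+(\w+)\s*=", re.IGNORECASE)
    hits = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            hits[(match.group(1).lower(), match.group(2).lower())] += 1
    return [
        f"CREATE INDEX ix_{t}_{c} ON {t} ({c});"
        for (t, c), n in hits.most_common()
        if n >= min_hits
    ]

log = [
    "SELECT * FROM users WHERE email = 'a@b.co'",
    "SELECT * FROM users WHERE email = 'c@d.co'",
    "SELECT * FROM posts WHERE title = 'once'",
]
suggestions = suggest_indexes(log)
```

A real analyzer would also consider composite predicates, join columns, and existing indexes before emitting DDL.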
```python
import uuid

from sqlalchemy import Column, DateTime, String, func
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

# Generated model with relationships
class User(Base):
    __tablename__ = "users"

    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    email = Column(String(255), unique=True, nullable=False, index=True)
    created_at = Column(DateTime(timezone=True), server_default=func.now())

    # Optimized relationships
    profiles = relationship("Profile", back_populates="user", lazy="selectin")
    posts = relationship("Post", back_populates="author", lazy="dynamic")
```
## Database Schema Documentation
### Users Table
- **Purpose**: User authentication and profile management
- **Primary Key**: UUID (auto-generated)
- **Indexes**: Unique index on email, created_at for sorting
- **Relationships**: One-to-many with profiles and posts
- **Constraints**: Email must be valid email format
- **Business Logic**: Users can have multiple profiles for different contexts
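The email-format constraint described above could be enforced at the database level with a `CheckConstraint`. A sketch of what that might look like (the regex, constraint name, and `~*` PostgreSQL operator are illustrative assumptions, not the agent's actual output):

```python
from sqlalchemy import CheckConstraint, Column, MetaData, String, Table
from sqlalchemy.schema import CreateTable

metadata = MetaData()
users = Table(
    "users",
    metadata,
    Column("email", String(255), nullable=False),
    # `~*` is PostgreSQL's case-insensitive regex match operator
    CheckConstraint(
        r"email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'",
        name="ck_users_email_format",
    ),
)

# Render the CREATE TABLE statement, including the CHECK clause
ddl = str(CreateTable(users))
```

Many teams instead validate email format in the application layer and keep only `NOT NULL`/`UNIQUE` in the database; either choice is defensible.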
```json
{
  "performance_analysis": {
    "query_patterns": {
      "frequent_queries": [
        "SELECT * FROM users WHERE email = ?",
        "SELECT users.*, profiles.* FROM users JOIN profiles ON users.id = profiles.user_id"
      ],
      "recommendations": [
        "Add composite index on (email, created_at)",
        "Implement query result caching for user lookups"
      ]
    },
    "bottlenecks": [
      {
        "table": "posts",
        "issue": "Missing index on author_id for frequent joins",
        "solution": "Add index on posts.author_id"
      }
    ]
  }
}
```
```python
# Alembic migration script
from alembic import op
import sqlalchemy as sa

def upgrade():
    # Add new column
    op.add_column('users', sa.Column('last_login', sa.DateTime(timezone=True), nullable=True))
    # Create index for performance
    op.create_index('ix_users_email_created', 'users', ['email', 'created_at'], unique=False)

def downgrade():
    op.drop_index('ix_users_email_created', table_name='users')
    op.drop_column('users', 'last_login')
```
```python
# Custom type mapping configuration
TYPE_MAPPINGS = {
    "custom_enum": "sqlalchemy.Enum",
    "vector": "pgvector.Vector",
    "tsvector": "sqlalchemy.dialects.postgresql.TSVECTOR"
}
```
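One plausible way such mappings get applied during generation is as overrides on top of built-in defaults. A sketch (`DEFAULT_MAPPINGS` and `map_type` are illustrative helpers, not part of the agent's API):

```python
DEFAULT_MAPPINGS = {
    "integer": "sqlalchemy.Integer",
    "varchar": "sqlalchemy.String",
    "timestamptz": "sqlalchemy.DateTime",
}

def map_type(db_type, custom_mappings=None):
    """Resolve a reflected DB type name, letting custom mappings override defaults."""
    mappings = {**DEFAULT_MAPPINGS, **(custom_mappings or {})}
    return mappings.get(db_type.lower(), "sqlalchemy.types.NullType")

resolved = map_type("vector", custom_mappings={"vector": "pgvector.Vector"})
```

Falling back to `NullType` for unknown types keeps generation from failing on exotic columns while flagging them for manual review.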
```python
# Configure optimal loading strategies
RELATIONSHIP_CONFIG = {
    "selectin": "small_result_sets",
    "joined": "always_needed",
    "subquery": "large_result_sets",
    "dynamic": "large_collections"
}
```
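A toy heuristic matching the mapping above could pick a `lazy=` strategy from rough collection-size hints (the thresholds here are invented for illustration):

```python
def pick_lazy_strategy(expected_rows, always_needed=False):
    """Choose a relationship loading strategy from rough collection-size hints."""
    if always_needed:
        return "joined"       # load eagerly in the same SELECT
    if expected_rows <= 100:
        return "selectin"     # small result sets: batched IN loading
    if expected_rows <= 10_000:
        return "subquery"     # large result sets: one extra query
    return "dynamic"          # very large collections: query on demand
```

The returned string would be passed straight to `relationship(..., lazy=...)` when emitting model code.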
```python
# Custom optimization rules
OPTIMIZATION_RULES = {
    "index_foreign_keys": True,
    "add_composite_indexes": True,
    "optimize_date_queries": True,
    "cache_frequent_lookups": True
}
```
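The `index_foreign_keys` rule, for instance, could walk the reflected metadata and attach an index to any un-indexed foreign-key column. A sketch of that rule (the function name and naming convention are illustrative):

```python
from sqlalchemy import Column, ForeignKey, Index, Integer, MetaData, Table

def index_foreign_keys(metadata):
    """Attach an index to every foreign-key column that lacks one."""
    created = []
    for table in metadata.tables.values():
        indexed = {col.name for ix in table.indexes for col in ix.columns}
        for fk in table.foreign_keys:
            col = fk.parent  # the local column holding the FK
            if col.name not in indexed and not col.primary_key:
                created.append(Index(f"ix_{table.name}_{col.name}", col).name)
    return created

metadata = MetaData()
Table("users", metadata, Column("id", Integer, primary_key=True))
Table(
    "posts",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("author_id", Integer, ForeignKey("users.id")),
)
created = index_foreign_keys(metadata)
```

Primary-key columns are skipped because they are already backed by a unique index in virtually every database.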