Review code for performance issues, N+1 queries, memory leaks, and optimization opportunities.
Identify performance bottlenecks, inefficient patterns, and optimization opportunities.
/review performance [target]

### N+1 Queries

Check for:
Detection patterns:

```python
# N+1 PROBLEM
for user in users:
    orders = db.query(Order).filter(Order.user_id == user.id).all()  # Query per user!

# SOLUTION
users = db.query(User).options(joinedload(User.orders)).all()  # Single query with join
```

Bash detection:

```bash
grep -rn "for.*in.*:\s*$" --include="*.py" -A 10 | grep "query\|execute\|filter"
```
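Grep is only a heuristic; counting actual database round trips confirms the pattern. A stdlib-only `sqlite3` sketch (hypothetical `users`/`orders` schema, not the project's models) that reproduces the 1 + N query count and the single-JOIN fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders(id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob'), (3, 'cyd');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 20.0), (3, 2, 30.0);
""")

query_count = 0

def run(sql, args=()):
    global query_count
    query_count += 1  # count every round trip to the database
    return conn.execute(sql, args).fetchall()

# N+1 pattern: one query for the users, then one more per user
users = run("SELECT id, name FROM users")
for user_id, _name in users:
    run("SELECT id, total FROM orders WHERE user_id = ?", (user_id,))
n_plus_one_queries = query_count  # 1 + 3 users = 4

# JOIN pattern: everything in a single round trip
query_count = 0
rows = run("""
    SELECT u.id, u.name, o.id, o.total
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
""")
join_queries = query_count  # 1
```

The same counting trick works against a real SQLAlchemy engine by attaching a query-logging event listener in development.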
### Missing Indexes

Check for:

Detection:

```bash
# Find filter/where usage
grep -rn "\.filter(\|\.where(" --include="*.py" | head -20

# Check model indexes
grep -rn "index=True\|Index(" --include="*.py"
```
### Over-fetching

Check for:

Anti-patterns:

```python
# BAD: Loading all columns
all_users = db.query(User).all()

# GOOD: Load only needed columns
user_names = db.query(User.id, User.name).all()

# BAD: No pagination
all_orders = db.query(Order).all()  # Could be millions!

# GOOD: Paginated
orders = db.query(Order).limit(100).offset(page * 100).all()
```
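One caveat: offset pagination still makes the database scan and discard `page * 100` rows, so deep pages get slower. Keyset (seek) pagination filters on the last id seen instead. A stdlib `sqlite3` sketch (hypothetical `orders` table) of the pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders(id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(1, 251)])

def keyset_page(last_id, size=100):
    # Seek past the last id seen instead of re-scanning skipped rows
    return [r[0] for r in conn.execute(
        "SELECT id FROM orders WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, size))]

pages, last_id = [], 0
while True:
    page = keyset_page(last_id)
    if not page:
        break
    pages.append(page)
    last_id = page[-1]  # cursor for the next page
```

The trade-off: keyset pagination cannot jump to an arbitrary page number, so it suits infinite-scroll and export endpoints rather than numbered page links.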
### Memory Usage

Check for:

Anti-patterns:

```python
# BAD: Load all into memory
data = list(huge_query.all())  # Could OOM

# GOOD: Stream/iterate
for item in huge_query.yield_per(1000):
    process(item)
```
### Blocking Operations

Check for:

Detection:

```bash
# Find sync calls in async functions
grep -rn "async def" --include="*.py" -A 30 | grep "requests\.\|time\.sleep\|open("
```
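The usual fix is either an async client (e.g. `httpx` instead of `requests`) or pushing the blocking call onto a worker thread. A minimal sketch using `asyncio.to_thread`, where the 0.2 s sleep stands in for a blocking HTTP call or file read:

```python
import asyncio
import time

def blocking_io():
    time.sleep(0.2)  # stands in for requests.get(...) or open(...).read()
    return "done"

async def main():
    start = time.perf_counter()
    # Each blocking call runs in a worker thread, so the event loop stays
    # free and the five 0.2 s calls overlap instead of running serially.
    results = await asyncio.gather(
        *(asyncio.to_thread(blocking_io) for _ in range(5))
    )
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
```

Calling `blocking_io()` directly inside `main` would instead freeze the whole event loop for 1 second, stalling every other request on the same worker.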
### Frontend (React)

Check for:

Detection:

```bash
# Check for missing React.memo
grep -rn "export function\|export const" --include="*.tsx" | grep -v "memo\|useMemo\|useCallback"

# Check for inline objects/functions in JSX
grep -rn "onClick={(" --include="*.tsx"
```
### Caching

Check for:
## Performance Review Report
**Target:** {scope}
**Date:** {timestamp}
### Summary
| Category | Issues | Impact |
|----------|--------|--------|
| N+1 Queries | 3 | High |
| Missing Indexes | 5 | Medium |
| Memory Issues | 2 | High |
| Blocking Ops | 1 | Medium |
| Caching | 4 | Medium |
### Critical Performance Issues
#### 1. N+1 Query in Order List API
**Location:** `api/v1/orders.py:45`
**Impact:** ~100ms per item, 10 seconds for 100 orders
**Current Code:**
```python
orders = db.query(Order).all()
for order in orders:
    order.customer = db.query(Customer).filter(Customer.id == order.customer_id).first()
    order.items = db.query(OrderItem).filter(OrderItem.order_id == order.id).all()
```

**Queries Generated:** 1 + N + N = 201 queries for 100 orders
**Optimized Code:**

```python
from sqlalchemy.orm import joinedload

orders = (
    db.query(Order)
    .options(
        joinedload(Order.customer),
        joinedload(Order.items)
    )
    .all()
)
```

**Queries Generated:** 1 query
**Expected Improvement:** 95%+ reduction in database time
#### 2. Missing Index on `Order.created_at`

**Location:** `db/models.py` - Order model
**Column:** `created_at`
**Evidence:**

```python
# Found 15 queries filtering by created_at
Order.query.filter(Order.created_at >= start_date)
```

**Current:** Full table scan
**Fix:**

```python
created_at: Mapped[datetime] = mapped_column(
    DateTime(timezone=True),
    index=True,  # Add this
)
```
#### 3. Unbounded Query in Reports API

**Location:** `api/v1/reports.py:78`

```python
all_transactions = Transaction.query.all()  # Could be millions
df = pd.DataFrame([t.to_dict() for t in all_transactions])
```

**Fix:** Use streaming or pagination

```python
def stream_transactions():
    # yield_per(1000) fetches rows in batches of 1000 but yields them one
    # at a time, so memory stays bounded
    for tx in Transaction.query.yield_per(1000):
        yield tx.to_dict()
```
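Streaming only keeps memory flat if the consumer is also incremental; collecting the whole generator into one DataFrame would reintroduce the problem. A stdlib-only sketch of batched, constant-memory aggregation (the `batched` helper and fake `transactions` generator are hypothetical stand-ins for the ORM stream):

```python
from itertools import islice

def batched(rows, size):
    # Yield fixed-size lists from any iterable so memory stays bounded
    it = iter(rows)
    while chunk := list(islice(it, size)):
        yield chunk

def transactions():
    # Stands in for Transaction.query.yield_per(1000)
    for i in range(1, 2501):
        yield {"id": i, "amount": 2}

total = 0
batch_sizes = []
for batch in batched(transactions(), 1000):
    batch_sizes.append(len(batch))          # at most 1000 dicts in memory
    total += sum(t["amount"] for t in batch)
```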
### Recommendations

- Add query logging in development:

  ```python
  logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
  ```

- Consider read replicas for heavy read operations
- Implement response caching for:
  - `/api/v1/products` (cache 5 min)
  - `/api/v1/categories` (cache 1 hour)
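In production this caching layer would typically be Redis or a CDN; as an illustration of the per-endpoint TTL idea, here is a minimal in-process sketch (hypothetical `TTLCache` class, injectable clock so expiry is testable without sleeping):

```python
import time

class TTLCache:
    """Hypothetical minimal per-key TTL cache for response caching."""

    def __init__(self, clock=time.monotonic):
        self._store = {}
        self._clock = clock  # injectable for tests

    def set(self, key, value, ttl):
        self._store[key] = (value, self._clock() + ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if self._clock() >= expires:  # expired: drop the entry and miss
            del self._store[key]
            return None
        return value

# Fake clock lets us fast-forward past the TTL deterministically
now = [0.0]
cache = TTLCache(clock=lambda: now[0])

cache.set("/api/v1/products", ["p1", "p2"], ttl=300)   # 5 min
hit = cache.get("/api/v1/products")                    # fresh: served from cache
now[0] = 301.0
missed = cache.get("/api/v1/products")                 # past TTL: miss
```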