Analyze code for performance characteristics, identify bottlenecks, memory issues, and optimization opportunities through static and dynamic analysis.
`/plugin marketplace add marcel-Ngan/ai-dev-team`
`/plugin install marcel-ngan-ai-dev-team@marcel-Ngan/ai-dev-team`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
## Analysis Dimensions
### CPU & Computation
| Aspect | Focus | Indicators |
|---|---|---|
| Algorithm Complexity | Big O analysis | Loop depth, recursion |
| Hot Paths | Frequently executed code | Call frequency |
| Blocking Operations | Synchronous waits | I/O calls, sleep |
| Computation Density | CPU-intensive work | Math, parsing, serialization |
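As a rough illustration of chasing these indicators, the sketch below times a suspected hot path with Node's built-in `perf_hooks`. The `parseRecord` function, sample input, and iteration count are placeholders, not part of this skill.

```typescript
import { performance } from 'node:perf_hooks';

// Hypothetical CPU-intensive function standing in for the real hot path.
function parseRecord(raw: string): Record<string, string> {
  return Object.fromEntries(raw.split(';').map(pair => pair.split('=') as [string, string]));
}

const sample = 'id=42;name=test;total=19.99';
const iterations = 100_000;

const start = performance.now();
for (let i = 0; i < iterations; i++) {
  parseRecord(sample);
}
const elapsedMs = performance.now() - start;
console.log(`${iterations} calls in ${elapsedMs.toFixed(1)}ms (~${((elapsedMs / iterations) * 1000).toFixed(2)}µs per call)`);
```

Timing a candidate before and after a change gives a quick sanity check on whether it is actually hot.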
### Memory
| Aspect | Focus | Indicators |
|---|---|---|
| Allocation Patterns | Object creation | new/malloc frequency |
| Memory Leaks | Unreleased memory | Growing heap, retained refs |
| GC Pressure | Garbage collection | Short-lived objects |
| Memory Footprint | Total usage | Peak memory, baseline |
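For a quick read on allocation patterns and leaks, periodic heap sampling along these lines can flag unbounded growth. The interval, window length, and sample count are arbitrary assumptions.

```typescript
// Sample heap usage on an interval and warn on monotonic growth.
const samples: number[] = [];

const timer = setInterval(() => {
  const { heapUsed } = process.memoryUsage();
  samples.push(heapUsed);
  console.log(`heapUsed: ${(heapUsed / 1024 / 1024).toFixed(1)} MB`);

  const last12 = samples.slice(-12);
  if (last12.length === 12 && last12.every((v, i) => i === 0 || v >= last12[i - 1])) {
    console.warn('Heap has grown for 12 consecutive samples - possible leak or retained references');
  }
}, 5_000);

// Stop sampling after a 5-minute observation window.
setTimeout(() => clearInterval(timer), 5 * 60_000);
```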
### I/O
| Aspect | Focus | Indicators |
|---|---|---|
| Database | Query patterns | N+1, missing indexes |
| Network | API calls | Latency, payload size |
| File System | Disk operations | Read/write patterns |
| Caching | Cache efficiency | Hit ratio, invalidation |
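Cache efficiency is easiest to reason about when hits and misses are counted explicitly. The sketch below is a minimal illustration; `fetchUser` and the `User` shape are hypothetical stand-ins for the real data source.

```typescript
interface User { id: string; name: string }

// Hypothetical slow data source (a DB or API call in a real codebase).
async function fetchUser(id: string): Promise<User> {
  return { id, name: `user-${id}` };
}

const cache = new Map<string, User>();
let hits = 0;
let misses = 0;

async function getUser(id: string): Promise<User> {
  const cached = cache.get(id);
  if (cached) {
    hits++;
    return cached;
  }
  misses++;
  const user = await fetchUser(id);
  cache.set(id, user);
  return user;
}

// Hit ratio to report alongside latency numbers.
function cacheHitRatio(): number {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}
```

Calling `getUser` twice with the same id should report a 50% hit ratio; in a real analysis the ratio is tracked over representative traffic.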
### Concurrency & Async
| Aspect | Focus | Indicators |
|---|---|---|
| Event Loop | Blocking detection | Long tasks |
| Promise Chains | Async patterns | Unhandled, chaining |
| Thread Pools | Resource utilization | Pool exhaustion |
| Race Conditions | Concurrency bugs | Data races |
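Event-loop blocking can be detected with a simple timer-drift monitor like the sketch below; the interval and threshold are arbitrary assumptions. Node's `perf_hooks` also offers `monitorEventLoopDelay` for a histogram-based alternative.

```typescript
// Detect event-loop blocking by measuring how late a repeating timer fires.
const intervalMs = 100;
const blockThresholdMs = 50;
let last = Date.now();

setInterval(() => {
  const now = Date.now();
  const lag = now - last - intervalMs;
  if (lag > blockThresholdMs) {
    console.warn(`Event loop blocked for ~${lag}ms`);
  }
  last = now;
}, intervalMs);
```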
## Static Performance Analysis
### Complexity Analysis
#### Function: processOrders()
**Location:** src/services/order.ts:145
**Complexity Assessment:**
- Time Complexity: O(n²)
- Space Complexity: O(n)
**Analysis:**
```typescript
// Current implementation - O(n²)
orders.forEach(order => {
  const customer = customers.find(c => c.id === order.customerId);
  // ...
});
```

**Issue:** Nested iteration creates quadratic complexity

**Optimized Approach:**
```typescript
// Optimized - O(n)
const customerMap = new Map(customers.map(c => [c.id, c]));
orders.forEach(order => {
  const customer = customerMap.get(order.customerId);
  // ...
});
```

**Impact:** 10x improvement at 1,000 orders, 100x at 10,000 orders
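To verify that kind of scaling locally, a quick benchmark along these lines compares the two lookup strategies at a chosen size. The `Customer`/`Order` shapes and the generated data are illustrative only.

```typescript
interface Customer { id: number; name: string }
interface Order { customerId: number }

const n = 10_000;
const customers: Customer[] = Array.from({ length: n }, (_, i) => ({ id: i, name: `c${i}` }));
const orders: Order[] = Array.from({ length: n }, (_, i) => ({ customerId: i }));

console.time('Array.find lookup (O(n^2) overall)');
orders.forEach(order => customers.find(c => c.id === order.customerId));
console.timeEnd('Array.find lookup (O(n^2) overall)');

console.time('Map lookup (O(n) overall)');
const customerMap = new Map(customers.map((c): [number, Customer] => [c.id, c]));
orders.forEach(order => customerMap.get(order.customerId));
console.timeEnd('Map lookup (O(n) overall)');
```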
## Dynamic Analysis
```markdown
## Dynamic Performance Profile
### Execution Trace
| Function | Calls | Total Time | Self Time | % |
|----------|-------|------------|-----------|---|
| processOrders | 1 | 2340ms | 45ms | 100% |
| → findCustomer | 1000 | 2100ms | 2100ms | 89.7% |
| → calculateTax | 1000 | 150ms | 150ms | 6.4% |
| → formatOutput | 1000 | 45ms | 45ms | 1.9% |
### Bottleneck Identified
**findCustomer** consumes 89.7% of execution time
### Memory Allocation
| Phase | Allocated | Peak | GC Events |
|-------|-----------|------|-----------|
| Init | 12 MB | 12 MB | 0 |
| Process | 245 MB | 312 MB | 47 |
| Cleanup | 15 MB | 15 MB | 3 |
### GC Analysis
- 47 GC events during processing indicate a high allocation rate
- Mostly short-lived objects suggest an optimization opportunity
```
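To collect GC counts like the ones in the profile above at runtime, recent Node.js versions expose GC entries through `PerformanceObserver`. The sketch below is a minimal example with an assumed 60-second observation window.

```typescript
import { PerformanceObserver } from 'node:perf_hooks';

let gcCount = 0;
let gcTotalMs = 0;

// Each observed entry corresponds to one garbage-collection pass.
const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    gcCount++;
    gcTotalMs += entry.duration;
  }
});
obs.observe({ entryTypes: ['gc'] });

// Report after the observation window ends.
setTimeout(() => {
  obs.disconnect();
  console.log(`GC events: ${gcCount}, total pause time: ${gcTotalMs.toFixed(1)}ms`);
}, 60_000);
```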
## Performance Profile Report
**Target:** {{target}}
**Date:** {{date}}
**Duration:** {{duration}}
**Environment:** {{environment}}
### Executive Summary
| Metric | Value | Target | Status |
|--------|-------|--------|--------|
| Avg Response | 234ms | <200ms | ⚠️ WARN |
| P95 Latency | 890ms | <500ms | ❌ FAIL |
| Memory Peak | 512MB | <256MB | ❌ FAIL |
| CPU Peak | 78% | <60% | ⚠️ WARN |
### Performance Grade: C (Needs Improvement)
---
### Hotspots Identified
#### Hotspot #1: Database Query Loop
**Location:** src/api/reports.ts:67
**Impact:** HIGH
**Time Consumed:** 67% of request time
**Current Code:**
```typescript
for (const userId of userIds) {
  const data = await db.query(
    'SELECT * FROM metrics WHERE user_id = ?',
    [userId]
  );
  results.push(data);
}
```

**Issue:** N+1 query pattern - 100 users = 100 queries

**Recommended Fix:**
```typescript
const data = await db.query(
  'SELECT * FROM metrics WHERE user_id IN (?)',
  [userIds]
);
const resultMap = groupBy(data, 'user_id');
```

**Expected Improvement:** 100 queries → 1 query (95%+ reduction)
#### Hotspot #2: WebSocket Memory Leak
**Location:** src/services/websocket.ts:123
**Impact:** CRITICAL
**Memory Growth:** 2MB per hour

**Issue:** Event listener not cleaned up on disconnect

**Current Code:**
```typescript
socket.on('connect', () => {
  eventBus.on('update', handler);
});
```

**Fixed Code:**
```typescript
socket.on('connect', () => {
  eventBus.on('update', handler);
});

socket.on('disconnect', () => {
  eventBus.off('update', handler);
});
```
### Optimization Priorities
| Priority | Issue | Effort | Impact |
|---|---|---|---|
| 1 | N+1 query pattern | 2h | High |
| 2 | Memory leak | 1h | Critical |
| 3 | Missing cache | 4h | Medium |
| 4 | Sync file read | 1h | Medium |
### Projected Improvements
| Metric | Current | After Fixes | Improvement |
|---|---|---|---|
| Avg Response | 234ms | ~80ms | 66% faster |
| P95 Latency | 890ms | ~200ms | 77% faster |
| Memory Peak | 512MB | ~128MB | 75% less |
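For reference, the P95 latency figures in this report are the 95th percentile of sampled response times. A minimal nearest-rank helper such as this one computes it from raw timings; the sample values are illustrative, not measured data.

```typescript
// Nearest-rank percentile over collected latency samples.
function percentile(values: number[], p: number): number {
  if (values.length === 0) throw new Error('no samples');
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const latenciesMs = [95, 120, 150, 180, 210, 234, 340, 890];
console.log(`P95: ${percentile(latenciesMs, 95)}ms`);
```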
---
## Language-Specific Profiling
### JavaScript/TypeScript
#### Event Loop Blocking
```typescript
// BAD: Blocks the event loop
const data = fs.readFileSync('large-file.json');

// GOOD: Non-blocking
const data = await fs.promises.readFile('large-file.json');
```

#### Closure Memory Retention
```typescript
// BAD: Closure retains the entire large object
function createHandler(largeData) {
  return () => console.log(largeData.length);
}

// GOOD: Extract only the needed value
function createHandler(largeData) {
  const length = largeData.length;
  return () => console.log(length);
}
```
### Python
#### Generator vs List
```python
# BAD: Creates the full list in memory
def get_items():
    return [process(x) for x in large_dataset]

# GOOD: Generator yields one item at a time
def get_items():
    for x in large_dataset:
        yield process(x)
```

#### String Concatenation
```python
# BAD: Creates a new string on each iteration
result = ""
for item in items:
    result += str(item)

# GOOD: join is optimized for this
result = "".join(str(item) for item in items)
```
### Java
#### Object Pool for Expensive Objects
```java
// BAD: Creates a new connection each time
public Data fetch() {
    Connection conn = new Connection();
    return conn.query();
}

// GOOD: Reuse connections from a pool
public Data fetch() {
    Connection conn = connectionPool.acquire();
    try {
        return conn.query();
    } finally {
        connectionPool.release(conn);
    }
}
```
---
## Agents Using This Skill
- **Software Architect** - Performance architecture
- **Senior Developer** - Code optimization
- **DevOps Engineer** - Runtime profiling
## Related Skills
- `performance-benchmarking` - Performance measurement
- `performance-optimization` - Optimization implementation
- `analysis-code` - General code analysis
## MCP Tools Used
- None required directly (analysis skill)
- Results can be documented via `confluence-technical-docs`
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.