Apply optimization techniques to improve code performance based on profiling and benchmarking insights. Provides actionable optimization strategies and implementation guidance.
## Optimization Priorities
| Priority | Category | Typical Impact | Effort |
|---|---|---|---|
| 1 | Algorithm | 10x - 1000x | Medium |
| 2 | Data Structure | 2x - 100x | Medium |
| 3 | I/O Optimization | 2x - 50x | Low-Medium |
| 4 | Caching | 2x - 100x | Low |
| 5 | Concurrency | 2x - Nx | High |
| 6 | Micro-optimization | 1.1x - 2x | Low |
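Whichever row in the table is targeted, the impact should be measured before and after the change. A minimal timing sketch using `performance.now()` — the label and iteration count are illustrative, not part of this skill:

```typescript
// Micro-benchmark helper: runs fn repeatedly and returns mean ms per call.
function benchmark(label: string, fn: () => void, iterations: number = 1000): number {
  for (let i = 0; i < 10; i++) fn(); // warm-up so JIT compilation doesn't skew results
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const meanMs = (performance.now() - start) / iterations;
  console.log(`${label}: ${meanMs.toFixed(4)}ms per call`);
  return meanMs;
}
```

Compare the reported mean for the old and new implementation under the same input sizes.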
## Algorithm Optimization Patterns

### O(n²) → O(n): Set-based Lookup

**Before:**
```typescript
// Find duplicates - O(n²)
function findDuplicates(arr: number[]): number[] {
  const dupes: number[] = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) dupes.push(arr[i]);
    }
  }
  return dupes;
}
```

**After:**
```typescript
// Find duplicates - O(n)
function findDuplicates(arr: number[]): number[] {
  const seen = new Set<number>();
  const dupes = new Set<number>();
  for (const num of arr) {
    if (seen.has(num)) dupes.add(num);
    seen.add(num);
  }
  return [...dupes];
}
```

**Impact:** 1,000 items: 500ms → 1ms
### Search Optimization: Linear → Binary

**Requirement:** Sorted data

**Before:** O(n)
```typescript
function find(arr: number[], target: number): number {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}
```

**After:** O(log n)
```typescript
function find(arr: number[], target: number): number {
  let left = 0, right = arr.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}
```

**Impact:** 1M items: 500ms → 0.02ms
---
## Data Structure Optimization
### Choosing Right Structures
| Operation | Array | Object/Map | Set |
|-----------|-------|------------|-----|
| Lookup by index | O(1) | - | - |
| Lookup by key | O(n) | O(1) | O(1) |
| Insert | O(1)/O(n) | O(1) | O(1) |
| Delete | O(n) | O(1) | O(1) |
| Has/Contains | O(n) | O(1) | O(1) |
### Common Transformation: Array to Map for Lookups

**Before:**
```typescript
// O(n) per lookup
users.find(u => u.id === targetId);
```

**After:**
```typescript
// O(1) per lookup after O(n) build
const userMap = new Map(users.map(u => [u.id, u]));
userMap.get(targetId);
```

**Use when:** Multiple lookups against the same dataset
---
## I/O Optimization

### Database: N+1 Query Problem

**Before:**
```typescript
// N+1 queries
const orders = await Order.findAll();
for (const order of orders) {
  order.customer = await Customer.findById(order.customerId);
}
```

**After:**
```typescript
// 2 queries: eager-load with a JOIN...
const orders = await Order.findAll({
  include: [Customer]
});

// ...or batch-fetch the related rows in a single query
const customerIds = orders.map(o => o.customerId);
const customers = await Customer.findAll({
  where: { id: { in: customerIds } }
});
```

### Database: Indexing

```sql
-- Add index for frequently filtered column
CREATE INDEX idx_orders_customer_id ON orders(customer_id);

-- Composite index for multi-column queries
CREATE INDEX idx_orders_status_date ON orders(status, created_at);

-- Before: full table scan (function on the column defeats the index)
SELECT * FROM orders WHERE YEAR(created_at) = 2024;

-- After: index-friendly range predicate
SELECT * FROM orders
WHERE created_at >= '2024-01-01'
  AND created_at < '2025-01-01';
```
### Network: Batch Requests

**Before:**
```typescript
// 100 network calls
for (const id of ids) {
  await api.getItem(id);
}
```

**After:**
```typescript
// 1 network call
const items = await api.getItems(ids);
```

### Network: Compression and Connection Reuse

```typescript
// Enable compression
app.use(compression({
  level: 6,
  threshold: 1024
}));

// Reuse connections
const pool = new Pool({
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000
});
```
---
## Caching Strategies

### In-Memory Cache with TTL

```typescript
const cache = new Map<string, { data: any; expires: number }>();

async function getCached<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlMs: number = 60000
): Promise<T> {
  const cached = cache.get(key);
  if (cached && cached.expires > Date.now()) {
    return cached.data;
  }
  const data = await fetcher();
  cache.set(key, { data, expires: Date.now() + ttlMs });
  return data;
}
```

### Memoization

```typescript
function memoize<T extends (...args: any[]) => any>(fn: T): T {
  const cache = new Map();
  return ((...args: any[]) => {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key);
    const result = fn(...args);
    cache.set(key, result);
    return result;
  }) as T;
}

const expensiveCalc = memoize((n: number) => {
  // Complex calculation
});
```

### Invalidation Strategies
| Strategy | Use Case | Complexity |
|---|---|---|
| TTL | Time-based expiry | Low |
| LRU | Size-limited cache | Medium |
| Event-based | Data changes | High |
| Versioned | Immutable data | Low |
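The LRU row above can be sketched with a single `Map`, relying on JavaScript's guaranteed insertion-order iteration. A minimal sketch — `LruCache` and its default capacity are illustrative, not part of this skill:

```typescript
// Minimal LRU cache: Map preserves insertion order, so the first
// key returned by keys() is the least recently used entry.
class LruCache<K, V> {
  private map = new Map<K, V>();

  constructor(private maxSize: number = 100) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    // Re-insert to mark the entry as most recently used
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.maxSize) {
      // Evict the least recently used entry
      this.map.delete(this.map.keys().next().value!);
    }
    this.map.set(key, value);
  }
}
```

A linked-list implementation gives the same asymptotics; the `Map` version trades a little overhead for far less code.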
---
## Concurrency Optimization

### Promise.all for Independent Operations

**Before:**
```typescript
// Sequential - 3 seconds
const a = await fetchA(); // 1s
const b = await fetchB(); // 1s
const c = await fetchC(); // 1s
```

**After:**
```typescript
// Parallel - 1 second
const [a, b, c] = await Promise.all([
  fetchA(),
  fetchB(),
  fetchC()
]);
```

### Bounded Concurrency

```typescript
async function processWithLimit<T>(
  items: T[],
  fn: (item: T) => Promise<void>,
  limit: number
): Promise<void> {
  const executing: Promise<void>[] = [];
  for (const item of items) {
    const p = fn(item).then(() => {
      executing.splice(executing.indexOf(p), 1);
    });
    executing.push(p);
    if (executing.length >= limit) {
      await Promise.race(executing);
    }
  }
  await Promise.all(executing);
}

// Process 100 items, max 10 concurrent
await processWithLimit(items, processItem, 10);
```
---
## Memory Optimization

### Streaming Large Data

**Before:**
```typescript
// Loads entire file into memory
const data = fs.readFileSync('large.json');
const parsed = JSON.parse(data);
```

**After:**
```typescript
// Streams and processes chunks
const stream = fs.createReadStream('large.json');
const parser = JSONStream.parse('*');
stream.pipe(parser).on('data', (item) => {
  processItem(item);
});
```

### Object Pooling

```typescript
class ObjectPool<T> {
  private pool: T[] = [];

  constructor(
    private factory: () => T,
    private reset: (obj: T) => void,
    private maxSize: number = 100
  ) {}

  acquire(): T {
    return this.pool.pop() || this.factory();
  }

  release(obj: T): void {
    if (this.pool.length < this.maxSize) {
      this.reset(obj);
      this.pool.push(obj);
    }
  }
}
```

### Event Listener Cleanup

```typescript
// Remove listeners on teardown to avoid leaks
class Component {
  private handler = () => { /* ... */ };

  mount() {
    eventBus.on('event', this.handler);
  }

  unmount() {
    eventBus.off('event', this.handler);
  }
}
```
---
## Optimization Report Template
```markdown
## Optimization Report
**Target:** {{component}}
**Before Baseline:** {{baseline_metrics}}
**After Optimization:** {{new_metrics}}
### Optimizations Applied
#### 1. Algorithm Improvement
- **Change:** Replaced O(n²) search with O(n) using Map
- **Impact:** 95% reduction in processing time
- **Risk:** Low
#### 2. Database Query Optimization
- **Change:** Resolved N+1 query with JOIN
- **Impact:** 10x fewer database calls
- **Risk:** Low
#### 3. Added Caching Layer
- **Change:** Implemented Redis cache for API responses
- **Impact:** 80% reduction in database load
- **Risk:** Medium (cache invalidation)
### Results
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Response Time | 450ms | 45ms | 90% |
| Database Queries | 100 | 2 | 98% |
| Memory Usage | 512MB | 128MB | 75% |
| Throughput | 50 rps | 500 rps | 10x |
### Verification
- [x] All tests passing
- [x] Benchmarks within target
- [x] No regression in other areas
- [x] Monitoring in place
```
## Related Skills

- performance-profiling - Find what to optimize
- performance-benchmarking - Measure improvements
- analysis-code - Code analysis
- github-pr-creation - Optimization PRs
- jira-task-management - Track optimization work