Use when optimizing code for performance, reducing bundle size, improving load times, or fixing performance bottlenecks. Systematically identifies and fixes bottlenecks through measurement-driven profiling, always measuring before and after changes to prove impact.
Systematic approach to identifying and fixing performance issues.
Measure, don't guess. Optimization without data is guesswork.
❌ NEVER optimize without measuring first
Why: Premature optimization wastes time on non-issues while missing real problems.
Exception: Obvious O(n²) algorithms when O(n) alternatives exist.
Before touching any code, establish metrics:
Frontend Performance:
# Chrome DevTools Performance tab
# Lighthouse audit
npm run build && du -sh dist/ # Bundle size
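Raw size on disk is only part of the story: what actually ships over the network is usually gzip- or brotli-compressed. A quick sketch of comparing the two (the sample file stands in for your real build artifact):

```shell
# Substitute your real build artifact (e.g. dist/main.js) for this sample file
head -c 100000 /dev/zero | base64 > sample-bundle.js

wc -c < sample-bundle.js           # raw bytes on disk
gzip -c sample-bundle.js | wc -c   # bytes after gzip (closer to transfer size)
```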
Backend Performance:
# Add timing logs
start = Time.now
result = expensive_operation()
elapsed_ms = ((Time.now - start) * 1000).round
Logger.info("Operation took #{elapsed_ms}ms")
Database:
# PostgreSQL
EXPLAIN ANALYZE SELECT ...;
# Check query time in logs
grep "SELECT" logs/production.log | grep "Duration:"
Metrics to capture: load time, time to interactive, bundle size (raw and gzipped), API response times, and database query durations.
Don't guess where the problem is - profile:
Browser Profiling: record a session in the Chrome DevTools Performance tab and look for long tasks, forced reflows, and heavy scripting time.
Server Profiling:
# Add detailed timing
defmodule Profiler do
def measure(label, func) do
start = System.monotonic_time(:millisecond)
result = func.()
elapsed = System.monotonic_time(:millisecond) - start
Logger.info("#{label}: #{elapsed}ms")
result
end
end
# Use it
Profiler.measure("Database query", fn ->
Repo.all(User)
end)
React Profiling:
# React DevTools Profiler
# Look for:
# - Unnecessary re-renders
# - Slow components (> 16ms for 60fps)
# - Large component trees
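Outside the DevTools Profiler, the 16ms frame budget can also be checked directly in code. A minimal sketch (the helper name `withFrameBudget` is illustrative, not a React API):

```typescript
// Wraps a function and warns when a single call exceeds the 60fps frame budget.
const FRAME_BUDGET_MS = 16;

function withFrameBudget<T>(label: string, fn: () => T): T {
  const start = performance.now();
  const result = fn();
  const elapsed = performance.now() - start;
  if (elapsed > FRAME_BUDGET_MS) {
    console.warn(`${label} took ${elapsed.toFixed(1)}ms (budget: ${FRAME_BUDGET_MS}ms)`);
  }
  return result;
}

// Usage: wrap a suspect computation to see if it fits within one frame
const sorted = withFrameBudget("sort", () => [3, 1, 2].sort((a, b) => a - b));
```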
Common performance issues:
Frontend: oversized bundles, unnecessary re-renders, unoptimized images, render-blocking scripts
Backend: N+1 queries, missing caching, per-item work that could be batched
Database: missing indexes, SELECT * on wide tables, unbounded result sets
One change at a time: measure the impact of each change before making the next.
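One way to keep yourself honest is a tiny harness that times a baseline and a candidate on the same input (the `compare` helper here is illustrative; real benchmarking needs warm-up and repeated runs):

```typescript
// Times two implementations on identical input so each change is judged
// against a measured baseline rather than intuition.
function compare<T, R>(
  input: T,
  baseline: (x: T) => R,
  candidate: (x: T) => R,
): { baselineMs: number; candidateMs: number } {
  const t0 = performance.now();
  baseline(input);
  const t1 = performance.now();
  candidate(input);
  const t2 = performance.now();
  return { baselineMs: t1 - t0, candidateMs: t2 - t1 };
}

const data = Array.from({ length: 10_000 }, (_, i) => i % 100);
const result = compare(
  data,
  (xs) => xs.filter((x) => x > 50).map((x) => x * 2), // readable: two passes
  (xs) => {
    const out: number[] = []; // one pass
    for (const x of xs) if (x > 50) out.push(x * 2);
    return out;
  },
);
console.log(result);
```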
Bundle Size Reduction:
// Before: Import entire library
import _ from 'lodash'
// After: Import only what's needed
import debounce from 'lodash/debounce'
// Or: Use native alternatives
const unique = [...new Set(array)] // Instead of _.uniq(array)
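A few more native stand-ins that often make lodash unnecessary (sketches only; lodash's versions handle more edge cases):

```typescript
// Native replacements for common lodash helpers
const uniq = <T>(arr: T[]): T[] => [...new Set(arr)]; // _.uniq

const chunk = <T>(arr: T[], size: number): T[][] =>   // _.chunk
  Array.from({ length: Math.ceil(arr.length / size) }, (_, i) =>
    arr.slice(i * size, i * size + size),
  );

const groupBy = <T>(arr: T[], key: (x: T) => string) => // _.groupBy
  arr.reduce<Record<string, T[]>>((acc, x) => {
    (acc[key(x)] ??= []).push(x);
    return acc;
  }, {});

console.log(uniq([1, 1, 2]));           // [1, 2]
console.log(chunk([1, 2, 3, 4, 5], 2)); // [[1, 2], [3, 4], [5]]
console.log(groupBy(["ant", "bee", "cat"], (w) => w[0]));
```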
React Performance:
// Before: Re-renders on every parent render
function ChildComponent({ items }) {
return <div>{items.map(...)}</div>
}
// After: Only re-render when items change
const ChildComponent = React.memo(function ChildComponent({ items }) {
return <div>{items.map(...)}</div>
}, (prev, next) => prev.items === next.items)
// Before: Recreates function every render
function Parent() {
const handleClick = () => { ... }
return <Child onClick={handleClick} />
}
// After: Stable function reference
function Parent() {
const handleClick = useCallback(() => { ... }, [])
return <Child onClick={handleClick} />
}
Code Splitting:
// Before: All in main bundle
import HeavyComponent from './HeavyComponent'
// After: Lazy load when needed
const HeavyComponent = React.lazy(() => import('./HeavyComponent'))
function App() {
return (
<Suspense fallback={<Loading />}>
<HeavyComponent />
</Suspense>
)
}
Image Optimization:
// Before: Full-size image
<img src="/hero.jpg" />
// After: Responsive, lazy-loaded
<img
src="/hero-800w.webp"
srcSet="/hero-400w.webp 400w, /hero-800w.webp 800w"
loading="lazy"
alt="Hero image"
/>
N+1 Query Fix:
# Before: N+1 queries (1 for users + N for posts)
users = Repo.all(User)
Enum.map(users, fn user ->
posts = Repo.all(from p in Post, where: p.user_id == ^user.id)
{user, posts}
end)
# After: 2 queries total
users = Repo.all(User) |> Repo.preload(:posts)
Enum.map(users, fn user -> {user, user.posts} end)
Database Indexing:
-- Before: Slow query
SELECT * FROM users WHERE email = 'user@example.com';
-- Seq Scan (5000ms)
-- After: Add index
CREATE INDEX idx_users_email ON users(email);
-- Index Scan (2ms)
Caching:
# Before: Expensive calculation every request
def get_popular_posts do
# Complex aggregation query (500ms)
Repo.all(from p in Post, ...)
end
# After: Cache for 5 minutes
def get_popular_posts do
Cachex.fetch(:app_cache, "popular_posts", fn ->
result = Repo.all(from p in Post, ...)
{:commit, result, ttl: :timer.minutes(5)}
end)
end
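The same idea applies client-side. A minimal in-memory TTL cache sketch (the `TtlCache` name and API are illustrative, not a library):

```typescript
// Minimal in-memory cache with per-entry expiry
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  fetch(key: string, ttlMs: number, compute: () => V): V {
    const hit = this.store.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value;
    const value = compute();
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  }
}

const cache = new TtlCache<number>();
let calls = 0;
const expensive = () => { calls++; return 42; };
cache.fetch("answer", 5 * 60 * 1000, expensive);
cache.fetch("answer", 5 * 60 * 1000, expensive);
console.log(calls); // 1 — the second call was served from cache
```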
Batch Processing:
# Before: Process one at a time
Enum.each(user_ids, fn id ->
user = Repo.get(User, id)
send_email(user)
end)
# After: Batch fetch
users = Repo.all(from u in User, where: u.id in ^user_ids)
Enum.each(users, &send_email/1)
Reduce Complexity:
// Before: O(n²) - nested loops
function findDuplicates(arr: number[]): number[] {
const duplicates = []
for (let i = 0; i < arr.length; i++) {
for (let j = i + 1; j < arr.length; j++) {
if (arr[i] === arr[j] && !duplicates.includes(arr[i])) {
duplicates.push(arr[i])
}
}
}
return duplicates
}
// After: O(n) - single pass with Set
function findDuplicates(arr: number[]): number[] {
const seen = new Set<number>()
const duplicates = new Set<number>()
for (const num of arr) {
if (seen.has(num)) {
duplicates.add(num)
}
seen.add(num)
}
return Array.from(duplicates)
}
ALWAYS measure after optimization:
## Optimization: [What was changed]
### Before
- Load time: 3.2s
- Bundle size: 850KB
- Time to interactive: 4.1s
### Changes
- Lazy loaded HeavyComponent
- Switched to lodash-es for tree shaking
- Added React.memo to ProductList
### After
- Load time: 1.8s (-44%)
- Bundle size: 520KB (-39%)
- Time to interactive: 2.3s (-44%)
### Evidence
```bash
# Before
$ npm run build
dist/main.js 850.2 KB
# After
$ npm run build
dist/main.js 520.8 KB
```

**Use proof-of-work skill to document evidence**
### 6. Verify Correctness
**Tests must still pass:**
```bash
# Run full test suite
npm test # Frontend
mix test # Backend
# Manual verification
# - Feature still works
# - Edge cases handled
# - No new bugs introduced
```
// Route-based code splitting
const routes = [
{
path: '/admin',
component: lazy(() => import('./pages/Admin'))
},
{
path: '/dashboard',
component: lazy(() => import('./pages/Dashboard'))
}
]
// Expensive calculation
const ExpensiveComponent = ({ data }) => {
// Only recalculate when data changes
const processedData = useMemo(() => {
return data.map(item => expensiveTransform(item))
}, [data])
return <div>{processedData.map(...)}</div>
}
# Instead of multiple queries
users = Repo.all(User)
posts = Repo.all(Post)
comments = Repo.all(Comment)
# Use join and preload
users =
User
|> join(:left, [u], p in assoc(u, :posts))
|> join(:left, [u, p], c in assoc(p, :comments))
|> preload([u, p, c], [posts: {p, comments: c}])
|> Repo.all()
BAD: Spending hours optimizing function that runs once
GOOD: Optimize the function that runs 10,000 times per page load
Always profile first to find real bottlenecks
BAD: "This might be slow, let me optimize it"
GOOD: "This IS slow (measured 500ms), let me optimize it"
BAD: Replacing `.map()` with `for` loop to save 1ms
GOOD: Reducing bundle size by 200KB to save 1000ms
Focus on high-impact optimizations
BAD: Remove feature to make it faster
GOOD: Keep feature, make implementation faster
Performance should not come at cost of correctness
BAD: "I think this will be faster" [changes code]
GOOD: "Profiler shows this takes 80% of time" [measures, optimizes, measures again]
Performance vs Readability:
// More readable
const result = items
.filter(item => item.active)
.map(item => item.name)
// Faster (one loop instead of two)
const result = []
for (const item of items) {
if (item.active) {
result.push(item.name)
}
}
Question: Is the perf gain worth the readability loss? Profile first.
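A quick micro-benchmark over realistic input sizes helps answer it (a rough sketch; proper benchmarking needs warm-up and many iterations):

```typescript
type Item = { active: boolean; name: string };
const items: Item[] = Array.from({ length: 100_000 }, (_, i) => ({
  active: i % 2 === 0,
  name: `item-${i}`,
}));

// Runs a function repeatedly and reports total elapsed time
function bench(label: string, fn: () => unknown): void {
  const start = performance.now();
  for (let i = 0; i < 50; i++) fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)}ms`);
}

bench("filter+map", () => items.filter((x) => x.active).map((x) => x.name));
bench("single loop", () => {
  const out: string[] = [];
  for (const x of items) if (x.active) out.push(x.name);
  return out;
});
```

If the readable version is within budget at realistic sizes, keep it.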
Performance vs Maintainability:
Always document the trade-off made
Frontend:
# Bundle analysis
npm run build -- --analyze
# Lighthouse audit
npx lighthouse https://example.com --view
# Size analysis
npx webpack-bundle-analyzer dist/stats.json
Backend:
# Database query analysis
EXPLAIN ANALYZE SELECT ...;
# Profile Elixir code with eprof
:eprof.start()
:eprof.profile(fn -> YourModule.function() end)
:eprof.analyze()
:eprof.stop()
Fast code that's wrong is useless. Correct code that's fast enough is perfect.