Implements Web Workers for off-main-thread computation: dedicated workers, Comlink RPC, SharedArrayBuffer sharing, pooling, React/bundler patterns. Use for CPU tasks like sorting, filtering, image processing to prevent UI jank.

> Master Web Workers for off-main-thread computation — dedicated workers for CPU-intensive tasks, Comlink for ergonomic worker communication, SharedArrayBuffer for zero-copy data sharing, worker pooling for throughput, and integration patterns with React and bundlers.
Create a dedicated worker for CPU-intensive tasks. Move computation off the main thread:
```ts
// worker.ts — runs in a separate thread
self.addEventListener('message', (event) => {
  const { type, data, query } = event.data;
  switch (type) {
    case 'sort': {
      const sorted = data.sort((a, b) => a.score - b.score);
      self.postMessage({ type: 'sorted', data: sorted });
      break;
    }
    case 'filter': {
      const filtered = data.filter((item) => item.name.toLowerCase().includes(query));
      self.postMessage({ type: 'filtered', data: filtered });
      break;
    }
  }
});
```
```ts
// main.ts — UI thread
const worker = new Worker(new URL('./worker.ts', import.meta.url), {
  type: 'module',
});

worker.addEventListener('message', (event) => {
  const { type, data } = event.data;
  if (type === 'sorted') {
    renderSortedList(data);
  }
});

// Send work to the worker (non-blocking)
worker.postMessage({ type: 'sort', data: largeDataset });
```
Use Comlink for ergonomic worker communication. Comlink wraps postMessage with an RPC-style API:
```ts
// worker.ts — expose functions via Comlink
import * as Comlink from 'comlink';

// Exported so main.ts can borrow its type via `typeof import('./worker').api`
export const api = {
  async processData(items: Item[]): Promise<ProcessedItem[]> {
    // Heavy computation runs off the main thread
    return items.map((item) => ({
      ...item,
      score: calculateComplexScore(item),
      rank: determineRank(item),
    }));
  },
  async search(items: Item[], query: string): Promise<Item[]> {
    // Full-text search with ranking
    return items
      .filter((item) => fuzzyMatch(item.name, query))
      .sort((a, b) => relevanceScore(b, query) - relevanceScore(a, query));
  },
};

Comlink.expose(api);
```
```ts
// main.ts — call worker functions like normal async functions
import * as Comlink from 'comlink';

const worker = new Worker(new URL('./worker.ts', import.meta.url), {
  type: 'module',
});
const api = Comlink.wrap<typeof import('./worker').api>(worker);

// Looks like a regular async function call
const processed = await api.processData(largeDataset);
const results = await api.search(items, 'query');
```
Use Transferable objects for zero-copy data transfer. Large ArrayBuffers can be transferred to workers without copying:
```ts
// Transfer (zero-copy) — ownership moves to worker, original is detached
const buffer = new ArrayBuffer(1024 * 1024); // 1MB
worker.postMessage({ buffer }, [buffer]);
// buffer.byteLength is now 0 — ownership transferred

// Structured clone (copy) — default behavior, copies data
worker.postMessage({ data: largeArray });
// Both threads have their own copy — 2x memory usage

// Transfer ImageBitmap for image processing
const bitmap = await createImageBitmap(imageBlob);
worker.postMessage({ bitmap }, [bitmap]);
```
Implement a worker pool for parallel throughput. Create navigator.hardwareConcurrency workers at startup. Maintain a busy set and task queue. When a task arrives, dispatch to an idle worker or enqueue. On completion, resolve the promise and process the next queued task:
```ts
class WorkerPool {
  private workers: Worker[] = [];
  private queue: Array<{ task: unknown; resolve: (v: any) => void; reject: (e: any) => void }> = [];
  private busy = new Set<Worker>();
  // Track the in-flight promise per worker instead of stashing it on the Worker object
  private pending = new Map<Worker, { resolve: (v: any) => void; reject: (e: any) => void }>();

  constructor(workerUrl: URL, poolSize = navigator.hardwareConcurrency || 4) {
    for (let i = 0; i < poolSize; i++) {
      const w = new Worker(workerUrl, { type: 'module' });
      w.addEventListener('message', (e) => this.onComplete(w, e.data));
      w.addEventListener('error', (e) => this.onError(w, e));
      this.workers.push(w);
    }
  }

  exec(task: unknown): Promise<any> {
    return new Promise((resolve, reject) => {
      const idle = this.workers.find((w) => !this.busy.has(w));
      if (idle) {
        this.dispatch(idle, task, resolve, reject);
      } else {
        this.queue.push({ task, resolve, reject });
      }
    });
  }

  private dispatch(w: Worker, task: unknown, resolve: (v: any) => void, reject: (e: any) => void) {
    this.busy.add(w);
    this.pending.set(w, { resolve, reject });
    w.postMessage(task);
  }

  private onComplete(w: Worker, result: unknown) {
    this.pending.get(w)?.resolve(result);
    this.release(w);
  }

  private onError(w: Worker, error: ErrorEvent) {
    this.pending.get(w)?.reject(error);
    this.release(w);
  }

  private release(w: Worker) {
    this.pending.delete(w);
    this.busy.delete(w);
    const next = this.queue.shift();
    if (next) this.dispatch(w, next.task, next.resolve, next.reject);
  }

  terminate() {
    this.workers.forEach((w) => w.terminate());
  }
}
```
Use SharedArrayBuffer for real-time shared state. Requires COOP (same-origin) and COEP (require-corp) headers. Create a SharedArrayBuffer, wrap in Int32Array, and send to multiple workers. Use Atomics.add/load/store for thread-safe reads and writes, and Atomics.wait/notify for synchronization. This avoids all serialization overhead for numeric data.
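A minimal sketch of that pattern, assuming COOP/COEP are already configured (the `cells` layout and the worker hand-off are illustrative; the `Atomics` calls themselves are standard and run in both browsers and Node):

```ts
// Shared memory visible to every thread that receives `sab`
const sab = new SharedArrayBuffer(16 * Int32Array.BYTES_PER_ELEMENT);
const cells = new Int32Array(sab);

// main thread: hand the same buffer to each pool worker (no copy, no clone cost)
// workers.forEach((w) => w.postMessage({ sab }));

// Thread-safe writes and reads — the same calls work inside a worker
Atomics.store(cells, 0, 42); // write cell 0
Atomics.add(cells, 0, 8);    // atomic increment: cell 0 is now 50
const current = Atomics.load(cells, 0);

// Synchronization: block a worker until another thread changes cell 1
// Atomics.wait(cells, 1, 0);                              // in a worker: sleep while cells[1] === 0
// Atomics.store(cells, 1, 1); Atomics.notify(cells, 1, 1); // from another thread: wake it
```

Because every thread reads and writes the same memory, there is no serialization at all; the trade-off is that you must reason about ordering yourself, which is why the example sticks to `Atomics` rather than plain indexed writes.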
Integrate workers with React. Create a useWorker hook: instantiate the worker in useEffect, return { result, loading, execute }, and terminate on cleanup. This manages lifecycle and prevents leaks:
```ts
import { useCallback, useEffect, useRef, useState } from 'react';

function useWorker<T>(workerFactory: () => Worker) {
  const workerRef = useRef<Worker | null>(null);
  const [result, setResult] = useState<T | null>(null);
  const [loading, setLoading] = useState(false);

  useEffect(() => {
    workerRef.current = workerFactory();
    workerRef.current.addEventListener('message', (e) => {
      setResult(e.data as T);
      setLoading(false);
    });
    return () => workerRef.current?.terminate();
    // eslint-disable-next-line react-hooks/exhaustive-deps -- workerFactory must be stable
  }, []);

  const execute = useCallback((data: any) => {
    setLoading(true);
    workerRef.current?.postMessage(data);
  }, []);

  return { result, loading, execute };
}
```
Configure bundlers for worker support. Vite supports import MyWorker from './worker?worker' or the standard new URL('./worker.ts', import.meta.url) pattern. Webpack 5 and esbuild also support the new URL() pattern natively (worker-loader is no longer needed).
Worker creation takes ~40-100ms; each consumes ~1-5MB for its V8 isolate. Structured clone serialization runs at ~400MB/s for typed arrays, ~50MB/s for complex objects. A 10MB JSON dataset takes ~200ms to serialize, potentially negating the benefit. Use Transferable objects or SharedArrayBuffer to avoid copy cost.
Case study (design editor): a dedicated worker parses the binary file format, computes layout constraints, and generates render commands sent to the main thread via Transferable ArrayBuffers for WebGL submission. Result: opening files with 10,000+ layers does not block the UI; INP stays under 50ms.
Case study (spreadsheet engine): cell recalculation (dependency-graph traversal plus formula evaluation) runs in a worker pool. A SharedArrayBuffer stores the cell-value grid so all workers read current state without serialization. Result: typing and scrolling stay responsive even during heavy recalculation.
Moving trivial computation to workers. If the computation takes <5ms, the overhead of postMessage serialization (~1ms) and worker context switching exceeds the benefit. Only offload computation that takes >50ms on the main thread.
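One way to enforce that threshold is a small guard that times a representative synchronous run before routing work to the pool; `shouldOffload` is a hypothetical helper, and the 50ms default matches the rule of thumb above:

```ts
// Measure one synchronous sample; only offload if it is slow enough that
// postMessage serialization and scheduling overhead will pay for themselves.
function shouldOffload(sample: () => void, thresholdMs = 50): boolean {
  const start = performance.now();
  sample();
  const elapsed = performance.now() - start;
  return elapsed > thresholdMs;
}

// Usage sketch: time the work on a small slice, then decide once.
// if (shouldOffload(() => sortChunk(firstThousandRows))) pool.exec({ type: 'sort', data });
```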
Creating a new worker per task. Worker creation takes ~50ms. Reuse workers by sending new tasks via postMessage. Create workers at application startup, not on demand.
Sending large objects via postMessage without Transferable. Sending a 50MB ArrayBuffer via structured clone takes ~125ms and doubles memory usage. Use Transferable objects (postMessage(data, [buffer])) for zero-copy transfer.
Ignoring worker errors. Uncaught errors in workers are silently swallowed by default. Always add onerror and onmessageerror handlers to workers for debugging and resilience.
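A minimal sketch of that defensive wiring (`wireWorkerErrorHandlers` and its `onFailure` callback are hypothetical names, not a standard API; the `error` and `messageerror` events themselves are standard Worker events):

```ts
// Works with any EventTarget, including a real Worker instance.
function wireWorkerErrorHandlers(
  worker: EventTarget,
  onFailure: (kind: 'error' | 'messageerror', detail: string) => void,
): void {
  // Uncaught exceptions thrown inside the worker surface as 'error' events.
  worker.addEventListener('error', (e) => {
    onFailure('error', (e as any).message ?? 'uncaught worker error');
  });
  // 'messageerror' fires when a received message cannot be deserialized.
  worker.addEventListener('messageerror', () => {
    onFailure('messageerror', 'message could not be deserialized');
  });
}

// Usage with a real worker:
// wireWorkerErrorHandlers(worker, (kind, detail) => console.error(`[worker ${kind}]`, detail));
```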