Build or iterate on Pulp's native Dawn-backed Three.js workflow using the real three.webgpu.js renderer, focused bridge tests, and native demo capture.
Install via:

```
npx claudepluginhub danielraffel/pulp --plugin pulp
```

This skill uses the workspace's default tool permissions.
Use this skill when a task involves Pulp's native Three.js bridge or the
examples/threejs-native-demo workflow.
Pulp now supports a native Three.js lane built on:
- the three.webgpu.js renderer
- a GPUCanvasContext bridge
- the WindowHost path, with no browser or WebView

Supported now:

- examples/threejs-native-demo/main.cpp
- demo modes: cube, spectrum, particles, ribbon, reverb
- THREE.WebGPURenderer initialization
- OrbitControls addon import/init on the native path
- test/web-compat/test_threejs_bridge.cpp
- --capture screenshot output

Not supported by this skill:
V8 engine required: Three.js needs typed arrays, promises, and full ES module support. Configure with:

```
cmake -S . -B build -DPULP_JS_ENGINE=v8 \
  -DV8_INCLUDE_DIR=/opt/homebrew/opt/node/include/node \
  -DV8_LIB_DIR=/opt/homebrew/opt/node/lib \
  -DV8_LIBRARY_PATH=/opt/homebrew/opt/node/lib/libnode.141.dylib \
  -DPULP_ENABLE_GPU=ON -DPULP_BUILD_TESTS=ON
```
gpu_surface MUST be passed to WidgetBridge — The native GPU bridge only initializes when WidgetBridge receives a non-null GpuSurface pointer. Without it, Three.js gets no WebGPU device and the 3D canvas renders black. This is the attach_gpu_surface() call in the demo.
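As an illustration of that failure mode, here is a minimal JS-side probe, assuming the bridge mirrors the browser's `navigator.gpu` surface; the helper function and its argument are hypothetical, not Pulp APIs:

```javascript
// Hypothetical helper: true only when a WebGPU entry point is visible to JS.
// Without attach_gpu_surface(), the bridge exposes no device, Three.js gets
// no adapter, and the canvas stays black even though scripts run fine.
function hasWebGpuEntryPoint(nav) {
  return Boolean(nav && nav.gpu && typeof nav.gpu.requestAdapter === 'function');
}

// A bare object stands in for the no-surface case.
console.log(hasWebGpuEntryPoint({}));                                      // false
console.log(hasWebGpuEntryPoint({ gpu: { requestAdapter: () => null } })); // true
```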
Three.js fetched via FetchContent — PULP_HAS_THREEJS is set automatically when GPU + tests are enabled.
Main workflow files:
- examples/threejs-native-demo/main.cpp
- examples/threejs-native-demo/README.md
- test/web-compat/test_threejs_bridge.cpp

Bridge/runtime files often involved:

- core/view/src/widget_bridge.cpp
- core/view/js/web-compat.js
- core/view/js/web-compat-canvas.js
- core/view/js/web-compat-document.js
- core/view/js/web-compat-element.js
- core/render/src/gpu_surface_dawn.cpp
- core/canvas/src/skia_canvas.cpp

Truth/status docs to keep aligned:

- planning/v3-phase14-gap-closure-status.md
- planning/v3-verification-report.md

Default to the real native stack:

- three/webgpu
- GPUCanvasContext

If the task is about the original Phase 13 acceptance set, prefer extending the existing native demo modes rather than inventing detached throwaway samples.
```
cmake --build build --target pulp-threejs-native-demo pulp-test-threejs-bridge -j8
```
If the task is deeper in the bridge layer, also use the lower-level focused proofs as needed:
```
./build/test/pulp-test-web-compat-prelude "[webcompat][canvas][gpu]"
./build/test/pulp-test-canvas-widget "[canvas_widget][gpu]"
./build/test/pulp-test-skia-surface "[render][skia][readback]"
```
Examples:

```
./build/test/pulp-test-threejs-bridge "[threejs][gpu][phase13]"
./build/test/pulp-test-threejs-bridge "[threejs][gpu][phase13][spectrum]"
./build/test/pulp-test-threejs-bridge "[threejs][gpu][phase13][particles]"
./build/test/pulp-test-threejs-bridge "[threejs][gpu][phase13][ribbon]"
./build/test/pulp-test-threejs-bridge "[threejs][gpu][phase13][reverb]"
```
Do not rerun broad unrelated suites when a focused bridge/demo tag is enough.
```
./build/examples/threejs-native-demo/pulp-threejs-native-demo --demo spectrum --capture /tmp/pulp-threejs-spectrum.png
./build/examples/threejs-native-demo/pulp-threejs-native-demo --demo particles --capture /tmp/pulp-threejs-particles.png
./build/examples/threejs-native-demo/pulp-threejs-native-demo --demo ribbon --capture /tmp/pulp-threejs-ribbon.png
./build/examples/threejs-native-demo/pulp-threejs-native-demo --demo reverb --capture /tmp/pulp-threejs-reverb.png
```
Use screenshots to confirm the result is visibly truthful, not just test-green.
When something breaks, do not guess at generic browser APIs. Instead, trace what real Three.js code actually calls through the bridge and fix that surface. That keeps the bridge aligned to real Three.js usage instead of drifting toward an unfocused browser shim.
For native Three.js work, verify:
- THREE.WebGPURenderer still initializes

Keep these in sync when the workflow meaningfully changes:

- examples/threejs-native-demo/README.md
- planning/v3-phase14-gap-closure-status.md
- planning/v3-verification-report.md

If the workflow grows stable enough for broader reuse, keep this skill aligned with the actual shipped demo modes and focused validation commands.
When PULP_BENCHMARK=ON, pulp-threejs-native-demo exposes a
headless benchmark that drives the JS→GPU upload path without a
visible window:
```
pulp-threejs-native-demo --benchmark-seconds=10 --widget=particles \
  --particle-count=10000 --target-fps=60 \
  --output=planning/bench/particles-N10000.json
```
The benchmark bypasses the full three.webgpu.js module loader and calls `__gpuQueueDrawBufferedImpl` directly with a vertex-buffer payload shaped like `THREE.BufferGeometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array(count * 3), 3))`. It exercises the exact widget_bridge.cpp WriteBuffer path a real Three.js particles scene would hit.
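To make that payload shape concrete, here is a plain-JS sketch of the position buffer such a scene produces. Only the `Float32Array(count * 3)` layout comes from the text above; the function name and wrapper fields are hypothetical, and no Pulp or Three.js APIs are called.

```javascript
// Builds a position-attribute payload shaped like
// new THREE.BufferAttribute(new Float32Array(count * 3), 3):
// three floats (x, y, z) per particle, 4 bytes each.
function makePositionPayload(count) {
  const itemSize = 3;
  const positions = new Float32Array(count * itemSize);
  for (let i = 0; i < positions.length; i++) {
    positions[i] = Math.random() * 2 - 1;   // scatter inside a unit cube
  }
  return { data: positions, itemSize, count, byteLength: positions.byteLength };
}

const payload = makePositionPayload(10000);
console.log(payload.data.length);   // 30000 floats
console.log(payload.byteLength);    // 120000 bytes
```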
Gotchas:

- PULP_HAS_SKIA: if external/skia-build/ only contains include/ and modules/ without build/mac-gpu/lib/, cmake silently sets PULP_HAS_SKIA=FALSE and every native WebGPU call in widget_bridge.cpp short-circuits on its #ifndef PULP_HAS_SKIA return-false guard. The benchmark harness detects this and fails fast rather than emitting zero counters.
- The three.webgpu.js module loader has been seen to hang at status: 'starting' when run headless on some hosts (the module runs to completion with an empty error, but the top-level state never transitions to 'ready'). The benchmark works around this with a minimal JS harness; the in-window demo may still hit the hang when invoked headlessly via --capture. If that happens, fix the loader; don't paper over it in the harness.
- base64_decode_us will be zero for the particle benchmark: that counter only fires on the __gpuComputeDispatchImpl bufferDataBase64 lane (#535), not the vertex-buffer lane. Don't read a zero there as a bug.
- The zero-copy JS↔GPU initiative (#516) ran Decision 1 twice. One run measured ui-preview's oscilloscope + spectrogram, the wrong workload: C++-driven, it never exercises the upload path. Both runs landed NO-GO, and Slice 0.5 supersedes Slice 0's verdict; see planning/zero-copy-decision-1-re-evaluation-2026-04-20.md. The new PerfCounters fields (base64_decode_total_us, gpu_buffer_upload_count, gpu_buffer_bytes_resident_peak) stay merged for future workload-specific re-evaluations.
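To make the counter gotcha concrete, here is a hedged sketch of reading a benchmark --output file. Only the PerfCounters field names come from the text above; the overall JSON shape, the `lane` field, and the function are assumptions.

```javascript
// Interprets hypothetical benchmark output. base64_decode_total_us === 0 is
// expected on the vertex-buffer lane and only suspicious when the
// bufferDataBase64 compute-dispatch lane (#535) was actually exercised.
function summarizeCounters(jsonText) {
  const c = JSON.parse(jsonText);
  return {
    uploads: c.gpu_buffer_upload_count,
    peakResidentBytes: c.gpu_buffer_bytes_resident_peak,
    base64Suspicious:
      c.lane === 'bufferDataBase64' && c.base64_decode_total_us === 0,
  };
}

const sample = JSON.stringify({
  lane: 'vertex-buffer',              // hypothetical field
  gpu_buffer_upload_count: 600,
  gpu_buffer_bytes_resident_peak: 120000,
  base64_decode_total_us: 0,          // fine on this lane, not a bug
});
console.log(summarizeCounters(sample).base64Suspicious); // false
```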