From partme-ai-full-stack-skills
Handles three.js audio spatialization: attaches AudioListener to camera, uses Audio/PositionalAudio sources, AudioAnalyser for FFT data, Web Audio API integration. For 3D sound placement, panner config, audio viz.
`npx claudepluginhub partme-ai/full-stack-skills --plugin t2ui-skills`

This skill uses the workspace's default tool permissions.
**ALWAYS use this skill when the user mentions:**
- AudioListener on camera
- Audio vs PositionalAudio
- distance models: refDistance / maxDistance / rolloff
- AudioAnalyser for visualization bars/spectrum

**IMPORTANT: audio vs loaders**
| Step | Skill |
|---|---|
| Decode mp3/ogg buffer | threejs-loaders (AudioLoader) |
| Spatial playback API | threejs-audio |
Implementation guidance:
- Check `listener.context.state` before playback; resume if suspended.
- Configure `refDistance`, `maxDistance`, `rolloffFactor`, and `distanceModel` per the docs.
- Decode with `AudioLoader` (threejs-loaders), then call `positionalAudio.setBuffer`.
- Use `listener.context.createAnalyser()` pathways per the examples; watch performance.

```js
import * as THREE from 'three';

const listener = new THREE.AudioListener();
camera.add(listener);

// Validate AudioContext state before attempting playback
function ensureAudioContext() {
  if (listener.context.state === 'suspended') {
    listener.context.resume();
  }
}

// Resume on user gesture (required by browser autoplay policy)
document.addEventListener('click', ensureAudioContext, { once: true });

const sound = new THREE.PositionalAudio(listener);
const loader = new THREE.AudioLoader();
loader.load('sound.mp3', (buffer) => {
  sound.setBuffer(buffer);
  sound.setRefDistance(20);
  sound.setRolloffFactor(1);
});

mesh.add(sound); // Attach to a scene object for spatial positioning
```
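The `setRefDistance`/`setRolloffFactor` calls above feed the Web Audio API distance models that PositionalAudio wraps. As a sketch of how attenuation behaves, here are the gain formulas from the Web Audio spec; the helper name `distanceGain` is ours for illustration, not a three.js API:

```js
// Gain applied by the Web Audio PannerNode distance models.
// d: listener-to-source distance; ref: refDistance; max: maxDistance; roll: rolloffFactor.
function distanceGain(model, d, ref, max, roll) {
  switch (model) {
    case 'linear': {
      const dc = Math.min(Math.max(d, ref), max); // clamp to [ref, max]
      return 1 - roll * (dc - ref) / (max - ref);
    }
    case 'inverse':
      return ref / (ref + roll * (Math.max(d, ref) - ref));
    case 'exponential':
      return Math.pow(Math.max(d, ref) / ref, -roll);
  }
}

// With setRefDistance(20) and setRolloffFactor(1) under the default 'inverse'
// model: full volume at d = 20, half volume at d = 40.
```

Within `refDistance` the source always plays at full gain, which is why raising it makes a sound carry farther before fading.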
See examples/workflow-positional-audio.md.
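For visualization bars, `AudioAnalyser.getFrequencyData()` returns a `Uint8Array` of FFT bins (values 0–255). A minimal sketch of reducing those bins to normalized bar heights; the `binsToBars` helper is illustrative, not part of three.js:

```js
// Reduce raw FFT bins to `barCount` bar heights normalized to [0, 1].
function binsToBars(data, barCount) {
  const step = Math.floor(data.length / barCount);
  const bars = [];
  for (let i = 0; i < barCount; i++) {
    let sum = 0;
    for (let j = 0; j < step; j++) sum += data[i * step + j];
    bars.push(sum / step / 255);
  }
  return bars;
}

// Assumed usage with a playing THREE.Audio/PositionalAudio named `sound`:
//   const analyser = new THREE.AudioAnalyser(sound, 64); // fftSize 64 -> 32 bins
//   per frame: const bars = binsToBars(analyser.getFrequencyData(), 8);
```

Keep `fftSize` small if you only draw a handful of bars; larger FFTs cost more per frame for no visible benefit.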
| Docs section | Representative links |
|---|---|
| Audio | https://threejs.org/docs/AudioListener.html |
| Audio | https://threejs.org/docs/Audio.html |
| Audio | https://threejs.org/docs/PositionalAudio.html |
| Audio | https://threejs.org/docs/AudioAnalyser.html |
Extended list: references/official-sections.md.
Audio classes live under the Audio section of the three.js docs. Decoding buffers uses AudioLoader (see threejs-loaders). Browser Web Audio autoplay policies are external to three.js, but must be mentioned whenever the AudioContext is suspended.
When answering under this skill, prefer responses that:
- use AudioListener, PositionalAudio, or AudioAnalyser as relevant
- defer buffer decoding to threejs-loaders (AudioLoader)

English: audio, positional audio, listener, analyser, spatial sound, web audio, three.js
Chinese: 音频 (audio), 空间音频 (spatial audio), AudioListener, PositionalAudio, Web Audio, three.js