# runway-api
Integrates RunwayML SDK for audio APIs (TTS, sound effects, voice isolation, dubbing) into Node.js and Python server-side code.
Install the skill with:

```shell
npx claudepluginhub runwayml/skills
```
> **PREREQUISITE:** Run `+rw-check-compatibility` first. Run `+rw-fetch-api-reference` to load the latest API reference before integrating. Requires `+rw-setup-api-key` for API credentials. Requires `+rw-integrate-uploads` for local audio/video files.
Help users add Runway audio generation to their server-side code.
| Model | Endpoint | Use Case | Cost |
|---|---|---|---|
| eleven_multilingual_v2 | POST /v1/text_to_speech | Text to speech | 1 credit/50 chars |
| eleven_text_to_sound_v2 | POST /v1/sound_effect | Sound effect generation | 1-2 credits |
| eleven_voice_isolation | POST /v1/voice_isolation | Isolate voice from audio | 1 credit/6 sec |
| eleven_voice_dubbing | POST /v1/voice_dubbing | Dub audio to other languages | 1 credit/2 sec |
| eleven_multilingual_sts_v2 | POST /v1/speech_to_speech | Voice conversion | 1 credit/3 sec |
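As a rough illustration of the pricing column, the per-unit rates above can be turned into a cost estimator. This is a sketch derived only from the table; the ceiling-per-unit rounding is an assumption, not documented billing behavior:

```python
import math

# Illustrative credit estimator based on the pricing table above.
# Rounding up per billing unit is an assumption; check actual billing docs.
def estimate_credits(model: str, *, chars: int = 0, seconds: float = 0) -> int:
    if model == 'eleven_multilingual_v2':      # 1 credit / 50 chars
        return math.ceil(chars / 50)
    if model == 'eleven_voice_isolation':      # 1 credit / 6 sec
        return math.ceil(seconds / 6)
    if model == 'eleven_voice_dubbing':        # 1 credit / 2 sec
        return math.ceil(seconds / 2)
    if model == 'eleven_multilingual_sts_v2':  # 1 credit / 3 sec
        return math.ceil(seconds / 3)
    raise ValueError(f'unknown model: {model}')

# e.g. a 120-character TTS prompt costs about 3 credits
```

Sound effects are listed as a flat 1-2 credits per generation, so they are omitted from the per-unit formulas.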
Generate speech from text using the ElevenLabs multilingual model.
```javascript
import RunwayML from '@runwayml/sdk';

const client = new RunwayML();

const task = await client.textToSpeech.create({
  model: 'eleven_multilingual_v2',
  promptText: 'Hello, welcome to our application!',
  voice: { type: 'runway-preset', presetId: 'Maya' }
}).waitForTaskOutput();

const audioUrl = task.output[0];
```
```python
from runwayml import RunwayML

client = RunwayML()

task = client.text_to_speech.create(
    model='eleven_multilingual_v2',
    prompt_text='Hello, welcome to our application!',
    voice={'type': 'runway-preset', 'presetId': 'Maya'}
).wait_for_task_output()

audio_url = task.output[0]
```
Generate sound effects from text descriptions.
```javascript
const task = await client.soundEffect.create({
  model: 'eleven_text_to_sound_v2',
  promptText: 'Thunder rolling across a stormy sky'
}).waitForTaskOutput();
```
```python
task = client.sound_effect.create(
    model='eleven_text_to_sound_v2',
    prompt_text='Thunder rolling across a stormy sky'
).wait_for_task_output()
```
Extract voice from audio with background noise.
```javascript
import fs from 'node:fs';

// If using a local file, upload first
const upload = await client.uploads.createEphemeral(
  fs.createReadStream('/path/to/noisy-audio.mp3')
);

const task = await client.voiceIsolation.create({
  model: 'eleven_voice_isolation',
  audioUri: upload.runwayUri
}).waitForTaskOutput();
```
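A Python counterpart can be sketched as below. The snake_case names (`uploads.create_ephemeral`, `voice_isolation.create`, `audio_uri`, `runway_uri`) are assumed to mirror the Node SDK shown above; verify them against the fetched API reference. The client is passed in as a parameter to keep the helper easy to test:

```python
# Sketch only: Python method/field names here are assumptions mirroring
# the Node SDK example above.
def isolate_voice(client, path: str) -> str:
    """Upload a local audio file, run voice isolation, return the output URL."""
    with open(path, 'rb') as f:
        upload = client.uploads.create_ephemeral(f)  # assumed snake_case name
    task = client.voice_isolation.create(
        model='eleven_voice_isolation',
        audio_uri=upload.runway_uri,
    ).wait_for_task_output()
    return task.output[0]
```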
Dub audio/video into other languages.
```javascript
const task = await client.voiceDubbing.create({
  model: 'eleven_voice_dubbing',
  audioUri: 'https://example.com/speech.mp3',
  targetLang: 'es' // Spanish
}).waitForTaskOutput();
```
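The same call sketched in Python, with names (`voice_dubbing.create`, `audio_uri`, `target_lang`) assumed to mirror the Node example above; check them against the API reference:

```python
# Sketch only: snake_case names are assumptions mirroring the Node example.
def dub_audio(client, audio_uri: str, target_lang: str) -> str:
    """Dub the audio at audio_uri into target_lang and return the output URL."""
    task = client.voice_dubbing.create(
        model='eleven_voice_dubbing',
        audio_uri=audio_uri,
        target_lang=target_lang,  # e.g. 'es' for Spanish
    ).wait_for_task_output()
    return task.output[0]
```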
Convert one voice to another.
```javascript
const task = await client.speechToSpeech.create({
  model: 'eleven_multilingual_sts_v2',
  media: { type: 'audio', uri: 'https://example.com/original-speech.mp3' },
  voice: { type: 'runway-preset', presetId: 'Noah' }
}).waitForTaskOutput();
```
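Voice conversion sketched in Python; the `speech_to_speech.create` name and field spellings are assumptions mirroring the Node example above, so confirm them against the fetched API reference:

```python
# Sketch only: names are assumptions mirroring the Node example above.
def convert_voice(client, audio_uri: str, preset_id: str = 'Noah') -> str:
    """Convert the voice in audio_uri to the given preset and return the URL."""
    task = client.speech_to_speech.create(
        model='eleven_multilingual_sts_v2',
        media={'type': 'audio', 'uri': audio_uri},
        voice={'type': 'runway-preset', 'presetId': preset_id},
    ).wait_for_task_output()
    return task.output[0]
```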
A complete Express endpoint:

```javascript
import RunwayML from '@runwayml/sdk';
import express from 'express';

const client = new RunwayML();
const app = express();
app.use(express.json());

app.post('/api/text-to-speech', async (req, res) => {
  try {
    const { text, voiceId } = req.body;
    const task = await client.textToSpeech.create({
      model: 'eleven_multilingual_v2',
      promptText: text,
      voice: { type: 'runway-preset', presetId: voiceId || 'Maya' }
    }).waitForTaskOutput();
    res.json({ audioUrl: task.output[0] });
  } catch (error) {
    console.error('TTS failed:', error);
    res.status(500).json({ error: error.message });
  }
});
```
And a FastAPI equivalent for sound effects:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from runwayml import RunwayML

app = FastAPI()
client = RunwayML()

class SoundRequest(BaseModel):
    prompt: str

@app.post("/api/sound-effect")
def generate_sound(req: SoundRequest):
    # Sync handler so the blocking SDK call runs in FastAPI's threadpool
    # instead of stalling the event loop.
    try:
        task = client.sound_effect.create(
            model='eleven_text_to_sound_v2',
            prompt_text=req.prompt
        ).wait_for_task_output()
        return {"audio_url": task.output[0]}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```
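Generation tasks can fail transiently (rate limits, timeouts), so endpoints like the ones above often benefit from a retry wrapper. The sketch below uses generic exception handling because the SDK's specific error classes aren't shown here; in real code, narrow the `except` to the SDK's retryable errors:

```python
import time

# Generic retry-with-backoff wrapper for flaky generation calls.
# Treating every exception as retryable is an assumption; restrict this
# to the SDK's rate-limit/timeout error types in production.
def with_retries(fn, *, attempts: int = 3, base_delay: float = 1.0,
                 sleep=time.sleep):
    """Call fn(), retrying with exponential backoff on failure."""
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as e:
            last_err = e
            if i < attempts - 1:
                sleep(base_delay * (2 ** i))  # 1s, 2s, 4s, ...
    raise last_err
```

Usage would wrap the task creation, e.g. `with_retries(lambda: client.sound_effect.create(...).wait_for_task_output())`.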
Run `+rw-integrate-uploads` first. See `+rw-api-reference` for details.