Convert text to speech using ElevenLabs voice AI. Use when generating audio from text, creating voiceovers, building voice apps, or synthesizing speech in 70+ languages.
Install:

```shell
npx claudepluginhub cameri/claude-skills --plugin elevenlabs
```

This skill uses the workspace's default tool permissions.
Generate natural speech from text - supports 70+ languages, multiple models for quality vs latency tradeoffs.
Setup: see the Installation Guide. For JavaScript, use `@elevenlabs/*` packages only.
```python
from elevenlabs import ElevenLabs

client = ElevenLabs()  # reads ELEVENLABS_API_KEY from the environment

audio = client.text_to_speech.convert(
    text="Hello, welcome to ElevenLabs!",
    voice_id="JBFqnCBsd6RMkjVDRZzb",  # George
    model_id="eleven_multilingual_v2",
)

with open("output.mp3", "wb") as f:
    for chunk in audio:
        f.write(chunk)
```
```javascript
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
import { createWriteStream } from "fs";

const client = new ElevenLabsClient();

const audio = await client.textToSpeech.convert("JBFqnCBsd6RMkjVDRZzb", {
  text: "Hello, welcome to ElevenLabs!",
  modelId: "eleven_multilingual_v2",
});

audio.pipe(createWriteStream("output.mp3"));
```
```shell
curl -X POST "https://api.elevenlabs.io/v1/text-to-speech/JBFqnCBsd6RMkjVDRZzb" \
  -H "xi-api-key: $ELEVENLABS_API_KEY" -H "Content-Type: application/json" \
  -d '{"text": "Hello!", "model_id": "eleven_multilingual_v2"}' --output output.mp3
```
| Model ID | Languages | Latency | Best For |
|---|---|---|---|
| eleven_v3 | 70+ | Standard | Highest quality, emotional range |
| eleven_multilingual_v2 | 29 | Standard | High quality, long-form content |
| eleven_flash_v2_5 | 32 | ~75ms | Ultra-low latency, real-time |
| eleven_flash_v2 | English | ~75ms | English-only, fastest |
| eleven_turbo_v2_5 | 32 | ~250-300ms | Balanced quality/speed |
| eleven_turbo_v2 | English | ~250-300ms | English-only, balanced |
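As a rough decision rule, the table can be encoded in a small helper (a hypothetical function for illustration, not part of the SDK):

```python
def pick_model(realtime: bool = False, english_only: bool = False) -> str:
    """Map latency/language needs to a model ID, following the table above."""
    if realtime:
        # Flash models target ~75ms latency
        return "eleven_flash_v2" if english_only else "eleven_flash_v2_5"
    # Quality-first default for long-form, multilingual content
    return "eleven_multilingual_v2"

model_id = pick_model(realtime=True)  # "eleven_flash_v2_5"
```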
Use premade voices (free, no extra charge) or create custom voices in the dashboard. See references/premade-voices.md for the full list of 45 premade voices with IDs, gender, accent, and use case.
Commonly used premade voices:
- JBFqnCBsd6RMkjVDRZzb - George (male, British, raspy, narration)
- EXAVITQu4vr4xnSDxMaL - Sarah (female, American, soft, news)
- onwK4e9ZLuTAKqWW03F9 - Daniel (male, British, deep, news presenter)
- XB0fDUnXU5powFXDhCwa - Charlotte (female, English-Swedish, conversational)
- 21m00Tcm4TlvDq8ikWAM - Rachel (female, American, calm, narration)
- nPczCjzI2devNBz1zQrb - Brian (male, American, deep, narration)

List all voices on your account:

```python
voices = client.voices.get_all()
for voice in voices.voices:
    print(f"{voice.voice_id}: {voice.name}")
```
Fine-tune how the voice sounds:
```python
from elevenlabs import VoiceSettings

audio = client.text_to_speech.convert(
    text="Customize my voice settings.",
    voice_id="JBFqnCBsd6RMkjVDRZzb",
    voice_settings=VoiceSettings(
        stability=0.5,          # higher = more consistent delivery
        similarity_boost=0.75,  # adherence to the original voice
        style=0.5,              # style exaggeration
        speed=1.0,              # 0.25 to 4.0 (default 1.0)
        use_speaker_boost=True,
    ),
)
```
Force specific language for pronunciation:
```python
audio = client.text_to_speech.convert(
    text="Bonjour, comment allez-vous?",
    voice_id="JBFqnCBsd6RMkjVDRZzb",
    model_id="eleven_multilingual_v2",
    language_code="fr",  # ISO 639-1 code
)
```
Controls how numbers, dates, and abbreviations are converted to spoken words. For example, "01/15/2026" becomes "January fifteenth, twenty twenty-six":
- "auto" (default): Model decides based on context
- "on": Always normalize (use when you want natural speech)
- "off": Speak literally (use when you want "zero one slash one five...")

```python
audio = client.text_to_speech.convert(
    text="Call 1-800-555-0123 on 01/15/2026",
    voice_id="JBFqnCBsd6RMkjVDRZzb",
    apply_text_normalization="on",
)
```
When generating long audio in multiple requests, the audio can have pops, unnatural pauses, or tone shifts at the boundaries. Request stitching solves this by letting each request know what comes before/after it:
```python
# First request
audio1 = client.text_to_speech.convert(
    text="This is the first part.",
    voice_id="JBFqnCBsd6RMkjVDRZzb",
    next_text="And this continues the story.",
)

# Second request using previous context
audio2 = client.text_to_speech.convert(
    text="And this continues the story.",
    voice_id="JBFqnCBsd6RMkjVDRZzb",
    previous_text="This is the first part.",
)
```
| Format | Description |
|---|---|
| mp3_44100_128 | MP3 44.1kHz 128kbps (default) - compressed, good for web/apps |
| mp3_44100_192 | MP3 44.1kHz 192kbps (Creator+) - higher quality compressed |
| mp3_44100_64 | MP3 44.1kHz 64kbps - lower quality, smaller files |
| mp3_22050_32 | MP3 22.05kHz 32kbps - smallest MP3 files |
| pcm_16000 | Raw PCM 16kHz - use for real-time processing |
| pcm_22050 | Raw PCM 22.05kHz |
| pcm_24000 | Raw PCM 24kHz - good balance for streaming |
| pcm_44100 | Raw PCM 44.1kHz (Pro+) - CD quality |
| pcm_48000 | Raw PCM 48kHz (Pro+) - highest quality |
| ulaw_8000 | μ-law 8kHz - standard for phone systems (Twilio, telephony) |
| alaw_8000 | A-law 8kHz - telephony (alternative to μ-law) |
| opus_48000_64 | Opus 48kHz 64kbps - efficient streaming codec |
| wav_44100 | WAV 44.1kHz - uncompressed with headers |
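Each format ID encodes codec, sample rate, and (for compressed codecs) bitrate, and is passed via the convert call's `output_format` parameter. A small parser makes the naming convention explicit (`parse_output_format` is a hypothetical helper, not part of the SDK):

```python
def parse_output_format(fmt: str):
    """Split a format ID like 'mp3_44100_128' into (codec, sample_rate_hz, bitrate_kbps)."""
    parts = fmt.split("_")
    bitrate = int(parts[2]) if len(parts) > 2 else None  # PCM/WAV/telephony IDs omit bitrate
    return parts[0], int(parts[1]), bitrate

codec, rate, bitrate = parse_output_format("ulaw_8000")  # ("ulaw", 8000, None)

# e.g. requesting telephony audio directly from the API:
# audio = client.text_to_speech.convert(
#     text="Hello!", voice_id="JBFqnCBsd6RMkjVDRZzb", output_format="ulaw_8000"
# )
```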
For real-time applications, use the stream method (returns audio chunks as they're generated):
```python
audio_stream = client.text_to_speech.stream(
    text="This text will be streamed as audio.",
    voice_id="JBFqnCBsd6RMkjVDRZzb",
    model_id="eleven_flash_v2_5",  # Ultra-low latency
)

for chunk in audio_stream:
    play_audio(chunk)  # placeholder: feed each chunk to your audio player
```
See references/streaming.md for WebSocket streaming.
```python
try:
    audio = client.text_to_speech.convert(
        text="Generate speech",
        voice_id="invalid-voice-id",
    )
except Exception as e:
    print(f"API error: {e}")
```
Common errors:
Monitor character usage via response headers (x-character-count, request-id):
```python
response = client.text_to_speech.convert.with_raw_response(
    text="Hello!", voice_id="JBFqnCBsd6RMkjVDRZzb", model_id="eleven_multilingual_v2"
)
audio = response.parse()
print(f"Characters used: {response.headers.get('x-character-count')}")
```
ElevenLabs charges per character synthesized, so keep generated text concise.
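Since billing is per character, a quick pre-flight count keeps costs predictable (a minimal sketch; per-character rates vary by plan, so this only totals characters):

```python
def estimate_characters(texts):
    """Total billable characters for a batch of synthesis requests."""
    return sum(len(t) for t in texts)

batch = ["Hello, welcome to ElevenLabs!", "This is the first part."]
total = estimate_characters(batch)  # 52
```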