Agent skills for ElevenLabs developer products. These skills follow the Agent Skills specification and can be used with any compatible AI coding assistant.
```bash
npx skills add elevenlabs/skills
```
| Skill | Description |
|---|---|
| text-to-speech | Convert text to lifelike speech using ElevenLabs' AI voices |
| speech-to-text | Transcribe audio files to text with timestamps |
| agents | Build conversational voice AI agents |
| sound-effects | Generate sound effects from text descriptions |
| music | Generate music tracks using AI composition |
| voice-isolator | Remove background noise and isolate vocals/speech from audio |
| setup-api-key | Guide through obtaining and configuring an ElevenLabs API key |
All skills require an ElevenLabs API key. Set it as an environment variable:
```bash
export ELEVENLABS_API_KEY="your-api-key"
```
Get your API key via the `setup-api-key` skill or from the ElevenLabs dashboard.
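The environment-variable requirement above can be sketched as a small pre-flight check. This is a minimal sketch; `get_api_key` is a hypothetical helper for illustration, not part of any skill:

```python
import os

def get_api_key() -> str:
    """Read the ElevenLabs API key that every skill expects."""
    key = os.environ.get("ELEVENLABS_API_KEY")
    if not key:
        raise RuntimeError(
            "ELEVENLABS_API_KEY is not set; run the setup-api-key skill "
            "or create a key in the ElevenLabs dashboard."
        )
    return key
```

Failing fast with an actionable message here is cheaper than letting an SDK call fail later with an opaque authentication error.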
Most skills include examples for:

- **Python**: `pip install elevenlabs`
- **JavaScript/TypeScript**: `npm install @elevenlabs/elevenlabs-js`

> **JavaScript SDK Warning:** Always use `@elevenlabs/elevenlabs-js`. Do not use `npm install elevenlabs` (that is an outdated v1.x package).
See the installation guide in any skill's references/ folder for complete setup instructions including migration from deprecated packages.
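To illustrate the warning above, here is a hypothetical helper (not part of the skills or the migration guide) that flags the deprecated npm package in a project's `package.json`:

```python
import json

def find_deprecated_sdk(package_json_text: str) -> list[str]:
    """Scan package.json text for the outdated 'elevenlabs' npm package."""
    data = json.loads(package_json_text)
    deps: dict[str, str] = {}
    for section in ("dependencies", "devDependencies"):
        deps.update(data.get(section, {}))
    warnings = []
    if "elevenlabs" in deps:
        warnings.append(
            'Found deprecated "elevenlabs" (v1.x); '
            'replace it with "@elevenlabs/elevenlabs-js".'
        )
    return warnings
```

A check like this could run in CI to catch the wrong package before it ships.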
The `evals/` directory contains trigger and functional evaluations for all skills.

```bash
# Run all evaluations (trigger + functional)
python3 evals/run_all.py -v

# Trigger evals only — tests if skills fire for the right queries (~3 min)
python3 evals/run_all.py --trigger-only -v

# Functional evals only — tests if skills produce correct output (~15 min)
python3 evals/run_all.py --functional-only -v

# Specific skills
python3 evals/run_all.py --skills text-to-speech agents -v

# Custom model (see `cursor-agent --list-models`)
python3 evals/run_all.py --model gpt-5.4-high -v
```
Results are saved to `evals/results/<timestamp>/` with a `report.md` summary and a `results.json` for programmatic access.
Functional evals use an isolated cursor-agent workspace per test case (under that results tree); they do not modify skill sources under each skill's directory.
Requires the Cursor Agent CLI (`cursor-agent` on your PATH; override the binary with `CURSOR_AGENT`) and Cursor authentication (`cursor-agent login` or `CURSOR_API_KEY`).
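Since `results.json` is meant for programmatic access, a run can be summarized with a few lines of Python. This is a sketch under an assumed schema (a top-level list of records, each with a `passed` boolean); the actual file layout is not documented here:

```python
import json
from pathlib import Path

def summarize(results_path: Path) -> str:
    """Count passing cases in a run's results.json (assumed schema)."""
    records = json.loads(results_path.read_text())
    passed = sum(1 for r in records if r.get("passed"))
    return f"{passed}/{len(records)} passed"
```

Check the generated `report.md` alongside `results.json` to confirm the real field names before relying on a script like this.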
MIT