Scaffolds new Pipecat projects interactively: collects project name, bot type (web/telephony), transport/STT/LLM/TTS/realtime/video services, optional React/Vite/Next.js client, then runs `pc init`.
npx claudepluginhub pipecat-ai/skills --plugin pipecat-cloud
This skill uses the workspace's default tool permissions.
Scaffold a new Pipecat project by collecting configuration from the user and running `pc init` in non-interactive mode.
/init [--output <PATH>]
--output (optional): Directory where the project will be created. Defaults to the current directory.

Check if pc is installed by running pc --version. If not installed, tell the user to install it with uv tool install pipecat-ai-cli and stop.
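The install check can be sketched as follows (the helper name and messages are illustrative, not part of the pc CLI):

```python
import shutil

def check_pc(which=shutil.which):
    """Return None if pc is on PATH, else an install instruction to show the user."""
    if which("pc") is None:
        return "pc not found. Install it with: uv tool install pipecat-ai-cli"
    return None
```

The `which` parameter is injectable only so the check is easy to test; in practice `shutil.which("pc")` consults the real PATH.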
Before asking the user any questions, run pc init --list-options to get the current valid values for all fields. The output is JSON:
{
"bot_type": ["web", "telephony"],
"transports": {
"web": ["daily", "smallwebrtc"],
"telephony": ["twilio", "telnyx", ...]
},
"stt": ["deepgram_stt", "openai_stt", ...],
"llm": ["openai_llm", "anthropic_llm", ...],
"tts": ["cartesia_tts", "elevenlabs_tts", ...],
"realtime": ["openai_realtime", "gemini_live_realtime", ...],
"video": ["heygen_video", "tavus_video", "simli_video"]
}
Use this data to populate the choices in every question below. Do NOT hardcode service lists — always use the values from --list-options.
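Loading the options and filtering transports by bot type can be sketched as follows (the dict shape mirrors the --list-options JSON above; function names are illustrative):

```python
import json
import subprocess

def load_options():
    """Run pc init --list-options and parse the JSON it prints."""
    out = subprocess.run(
        ["pc", "init", "--list-options"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def transports_for(options, bot_type):
    """Transports valid for the chosen bot type ("web" or "telephony")."""
    return options["transports"][bot_type]
```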
Walk through the following questions to build the project configuration. After collecting all answers, show a summary and run the command.
Choosing the right interaction method: present the options from --list-options formatted as a readable list, then let the user reply with their choice in chat.

Ask the user for a project name. This will be used as the directory name and project identifier.
Ask the user to choose a bot type:
web) - Browser or mobile app
telephony) - Phone calls

If the bot type is web, ask the user to choose a client framework:
react)
vanilla)
none) - Server only, no client generated

If the user chose React, ask which dev server:
vite)
nextjs)

Skip this step entirely for telephony bots.
Show the user the full list of available transports from --list-options, filtered by the selected bot type. Let the user reply with their choice.
If the user chose a daily_pstn transport, ask for mode:
--daily-pstn-mode dial-in
--daily-pstn-mode dial-out

If the user chose a twilio_daily_sip transport, ask for mode:
--twilio-daily-sip-mode dial-in
--twilio-daily-sip-mode dial-out

Then ask if they want to add an additional transport for local testing. This is common; e.g. a telephony bot that also supports WebRTC for development.
Ask the user to choose a pipeline architecture:
cascade) - STT → LLM → TTS pipeline
realtime) - Speech-to-speech model

If cascade mode, show the full list of available options from --list-options for each service (STT, LLM, TTS) and let the user reply with their choice.
If realtime mode, show all available realtime services and let the user reply with their choice.
For each service question, display the options as a numbered vertical list (one per line) so the user can easily scan and pick one.
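A minimal sketch of that numbered display (the function name is illustrative):

```python
def format_numbered(options):
    """Render options as a numbered vertical list, one per line."""
    return "\n".join(f"{i}. {name}" for i, name in enumerate(options, start=1))
```

For example, `format_numbered(["deepgram_stt", "openai_stt"])` yields two lines, "1. deepgram_stt" and "2. openai_stt".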
Show the user the default feature settings and ask if they want to customize:
Defaults:
If they want to customize, ask about each feature. For video avatar service (web bots only), use the video options from --list-options.
If a video avatar service is selected, video output is automatically enabled.
Ask if they want to generate Pipecat Cloud deployment files (Dockerfile, pcc-deploy.toml). Default is yes.
If deploying to cloud, ask if they want to enable Krisp noise cancellation. Default is no.
After collecting all answers, build the pc init command using non-interactive flags:
pc init \
--name <project_name> \
--bot-type <web|telephony> \
--transport <transport> \
--mode <cascade|realtime> \
[--stt <service>] \
[--llm <service>] \
[--tts <service>] \
[--realtime <service>] \
[--video <service>] \
[--client-framework <react|vanilla|none>] \
[--client-server <vite|nextjs>] \
[--daily-pstn-mode <dial-in|dial-out>] \
[--twilio-daily-sip-mode <dial-in|dial-out>] \
[--recording | --no-recording] \
[--transcription | --no-transcription] \
[--video-input | --no-video-input] \
[--video-output | --no-video-output] \
[--deploy-to-cloud | --no-deploy-to-cloud] \
[--enable-krisp | --no-enable-krisp] \
[--observability | --no-observability] \
--output <output_dir>
For multiple transports, repeat the --transport flag (e.g. --transport twilio --transport smallwebrtc).
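Assembling the argv list from the collected answers can be sketched as follows (the config dict keys are illustrative; the flags match the template above, with optional service flags included only when set):

```python
def build_init_command(cfg):
    """Build the pc init argv list from collected answers."""
    cmd = ["pc", "init",
           "--name", cfg["name"],
           "--bot-type", cfg["bot_type"],
           "--mode", cfg["mode"]]
    for transport in cfg["transports"]:  # repeat --transport for each transport
        cmd += ["--transport", transport]
    for flag in ("stt", "llm", "tts", "realtime", "video"):
        if cfg.get(flag):  # only emit service flags the user actually chose
            cmd += [f"--{flag}", cfg[flag]]
    cmd += ["--output", cfg.get("output", f"./{cfg['name']}")]
    return cmd
```

Boolean pairs such as --recording/--no-recording would be appended the same way based on the feature answers.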
Before running the command, show the user a summary of their choices:
Ask the user to confirm before proceeding. If they want to change something, go back and re-ask that specific question.
Run the pc init command. Use --output to specify the output directory (from the --output argument, or default to ./<project_name>).
If the command succeeds, show the user what was generated and suggest next steps:
cd <project_name>/server
Copy .env.example to .env and fill in API keys

If deploying to cloud, also mention they can use /pipecat-cloud:deploy to deploy.
If pc init fails with validation errors, show the error and help the user fix their choices.