From meetstream
Build meeting intelligence with MeetStream's bot API. Use this skill whenever a developer wants to: join a meeting with a bot, record or transcribe meetings, stream live audio/video, build AI coaching tools, note-taking agents, CRM auto-update pipelines, calendar automation, or anything that processes meeting data programmatically. Supports Zoom, Google Meet, and Microsoft Teams. Activate on any mention of MeetStream, meeting bots, recording meetings via API, live transcription, "joining a meeting programmatically", or "build me a notetaker". When the user asks to BUILD or CREATE an integration, always enter plan mode first using the superpowers:writing-plans skill before writing any code.
npx claudepluginhub meetstream-ai/claude-plugin --plugin meetstream

This skill uses the workspace's default tool permissions.
You are a MeetStream integration expert. Your job is to build **complete, production-ready implementations** — not outlines or pseudocode.
If the user asks you to BUILD, CREATE, IMPLEMENT, or SET UP a MeetStream integration:
1. Enter plan mode using the superpowers:writing-plans skill first.
2. Read references/code-patterns-node.md or references/code-patterns-python.md for the relevant patterns.
3. Write the plan to docs/superpowers/plans/YYYY-MM-DD-<feature-name>.md.

Skip plan mode only if the user explicitly says "quick snippet", "just show me", or "skip planning".
If the user just has a question (how does X work, what's the API for Y) — answer directly, no plan needed.
All MeetStream API calls need this header:
Authorization: Token YOUR_API_KEY
API keys are created at https://app.meetstream.ai/api-keys.
Base URL: https://api.meetstream.ai/api/v1/
Default language: Python unless the user specifies otherwise.
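In Python, that header can be wrapped in a small helper (the function name is illustrative; the Token scheme and base URL are taken from this doc):

```python
BASE_URL = "https://api.meetstream.ai/api/v1"

def ms_headers(api_key: str) -> dict:
    """Build the Authorization header every MeetStream call requires."""
    return {"Authorization": f"Token {api_key}"}
```

Pass the result as `headers=` on every request, e.g. `requests.get(f"{BASE_URL}/bots/{bot_id}/detail", headers=ms_headers(key))`.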
Before planning or building, ask:
Map the user's request to one of these, then read the relevant code pattern file for the full implementation.
Bot joins, records, transcript fetched post-meeting.
POST /bots/create_bot
{
"meeting_link": "https://zoom.us/j/123456789",
"bot_name": "Recorder",
"audio_required": true,
"video_required": false,
"callback_url": "https://your-server.com/webhook",
"recording_config": {
"transcript": {
"provider": { "deepgram": { "language": "en", "model": "nova-3" } }
},
"retention": { "type": "timed", "hours": 48 }
}
}
Returns { "bot_id": "..." }. Wait for transcription.processed webhook, then use the two-step transcript fetch:
GET /bots/{bot_id}/detail → get bot_details.transcript_id
GET /bots/{bot_id}/get_bot_transcript/{transcript_id} → get the transcript
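A minimal Python sketch of this flow using requests — the payload mirrors the example above; the function names and error handling are illustrative, not an official SDK:

```python
import requests

BASE_URL = "https://api.meetstream.ai/api/v1"
HEADERS = {"Authorization": "Token YOUR_API_KEY"}

def recorder_payload(meeting_link: str, callback_url: str) -> dict:
    """Body for POST /bots/create_bot, mirroring the example above."""
    return {
        "meeting_link": meeting_link,
        "bot_name": "Recorder",
        "audio_required": True,   # audio is not recorded without this flag
        "video_required": False,
        "callback_url": callback_url,
        "recording_config": {
            "transcript": {"provider": {"deepgram": {"language": "en", "model": "nova-3"}}},
            "retention": {"type": "timed", "hours": 48},
        },
    }

def create_bot(meeting_link: str, callback_url: str) -> str:
    resp = requests.post(f"{BASE_URL}/bots/create_bot",
                         json=recorder_payload(meeting_link, callback_url),
                         headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["bot_id"]

def fetch_transcript(bot_id: str) -> dict:
    """Two-step fetch. Call only after the transcription.processed webhook fires."""
    detail = requests.get(f"{BASE_URL}/bots/{bot_id}/detail", headers=HEADERS).json()
    transcript_id = detail["bot_details"]["transcript_id"]
    resp = requests.get(f"{BASE_URL}/bots/{bot_id}/get_bot_transcript/{transcript_id}",
                        headers=HEADERS)
    resp.raise_for_status()
    return resp.json()
```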
Transcript chunks stream to your WebSocket server as words are spoken.
{
"live_transcription_required": { "websocket_url": "wss://your-server.com/transcripts" },
"live_audio_required": { "websocket_url": "wss://your-server.com/audio" }
}
Each chunk:
{
"speakerName": "Alice",
"timestamp": "2024-01-15T10:30:45Z",
"transcript": "Can you walk me through the pricing?",
"words": [{ "word": "Can", "start": 0.2, "end": 0.5, "confidence": 0.99, "speaker": "0" }]
}
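A receiver for these chunks might look like the sketch below; it assumes the third-party websockets package and the chunk fields shown above:

```python
import asyncio
import json

def parse_chunk(raw: str) -> tuple:
    """Pull (speaker, text) out of one live transcript chunk."""
    chunk = json.loads(raw)
    return chunk["speakerName"], chunk["transcript"]

async def main():
    import websockets  # pip install websockets

    async def handle(ws):
        async for raw in ws:
            speaker, text = parse_chunk(raw)
            print(f"{speaker}: {text}")

    # Expose this server publicly and pass its wss:// URL as
    # live_transcription_required.websocket_url in create_bot.
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()
```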
Bot joins and can respond in the meeting chat or speak aloud.
{ "socket_connection_url": { "url": "wss://your-server.com/bot-control" } }
On join, bot sends { "type": "ready", "bot_id": "..." } to your WSS. You send back:
{ "command": "sendmsg", "message": "Notes captured!", "bot_id": "..." }
{ "command": "sendaudio", "audiochunk": "<base64 PCM>" }
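Small builders for these two commands (the function names are mine; the field names follow the examples above — note the docs show bot_id on sendmsg only):

```python
import base64
import json

def chat_command(bot_id: str, message: str) -> str:
    """sendmsg: post a message into the meeting chat."""
    return json.dumps({"command": "sendmsg", "message": message, "bot_id": bot_id})

def audio_command(pcm_bytes: bytes) -> str:
    """sendaudio: speak in the meeting; the chunk must be base64-encoded PCM."""
    return json.dumps({
        "command": "sendaudio",
        "audiochunk": base64.b64encode(pcm_bytes).decode("ascii"),
    })
```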
Bot auto-joins every Google Calendar meeting — no per-meeting API calls.
POST /calendar/create-calendar
{ "refresh_token": "...", "client_id": "...", "client_secret": "..." }
List upcoming: GET /calendar/events | Scheduled bots: GET /calendar/scheduled
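A sketch of the request body for the calendar hookup (fields copied from the example above; the helper name is mine):

```python
def calendar_payload(refresh_token: str, client_id: str, client_secret: str) -> dict:
    """Body for POST /calendar/create-calendar (note the hyphen, not create_calendar)."""
    return {
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    }
```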
Webhook server + bot creation + transcript fetch + AI summary + delivery (email/Slack/Notion).
Read references/code-patterns-node.md Pattern 4 for the complete Next.js implementation.
Build plan should include: webhook handler, transcript fetch, LLM summary, delivery layer.
Real-time join/leave events during the meeting:
{
"recording_config": {
"realtime_endpoints": [{
"type": "webhook",
"url": "https://your-server.com/participants",
"events": ["participant_events.join", "participant_events.leave"]
}]
}
}
create_bot → bot.joining (102) → bot.inmeeting (200) → [streams run]
→ bot.stopped (500) → audio.processed → transcription.processed → fetch data
Never call get_bot_transcript until transcription.processed fires. The endpoint returns nothing or an error if called early. Always set callback_url.
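A stdlib-only sketch of a webhook receiver that gates data fetching on this event. The assumption that the callback body carries event and bot_id fields is mine — verify against the webhook payload docs:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def transcript_ready(event: dict) -> bool:
    """True once it is safe to call get_bot_transcript."""
    return event.get("event") == "transcription.processed"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        if transcript_ready(event):
            # Now run the two-step transcript fetch via /detail.
            print(f"fetch data for bot {event.get('bot_id')}")
        self.send_response(200)
        self.end_headers()

# HTTPServer(("0.0.0.0", 3000), WebhookHandler).serve_forever()
```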
{
"automatic_leave": {
"waiting_room_timeout": 300,
"everyone_left_timeout": 60,
"in_call_recording_timeout": 14400,
"recording_permission_denied_timeout": 60
}
}
recording_permission_denied_timeout minimum is 60 seconds. The API returns HTTP 400 for any value under 60. Without this field, a bot that gets its recording permission denied will sit in the meeting indefinitely.
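A client-side guard for that 60-second minimum can save a round-trip. The validation logic is mine; the field names and defaults come from the example above:

```python
def automatic_leave_config(waiting_room_timeout: int = 300,
                           everyone_left_timeout: int = 60,
                           in_call_recording_timeout: int = 14400,
                           recording_permission_denied_timeout: int = 60) -> dict:
    """Build the automatic_leave block; reject values the API would 400 on."""
    if recording_permission_denied_timeout < 60:
        raise ValueError("recording_permission_denied_timeout minimum is 60 seconds")
    return {"automatic_leave": {
        "waiting_room_timeout": waiting_room_timeout,
        "everyone_left_timeout": everyone_left_timeout,
        "in_call_recording_timeout": in_call_recording_timeout,
        "recording_permission_denied_timeout": recording_permission_denied_timeout,
    }}
```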
To show a custom profile picture in meetings, pass a publicly accessible URL via bot_image_url:
{ "bot_image_url": "https://your-server.com/avatar.png" }
Important: MeetStream fetches this URL externally. You cannot pass base64 data. The image must be reachable without authentication. If you store images in a database (e.g., Firestore), serve them from a public HTTP endpoint on your own server.
| Platform | Extra Setup |
|---|---|
| Google Meet | None |
| Microsoft Teams | None |
| Zoom | Register a Zoom app + add credentials to MeetStream dashboard → https://docs.meetstream.ai/guides/zoom/zoom-marketplace-app-setup |
Zoom dev mode restricts bots to meetings hosted by the app owner's account. For external meetings, submit the Zoom app to production.
| Provider | Mode | Best For |
|---|---|---|
| deepgram + nova-3 | Post-processing | Accuracy, cost efficiency — default |
| deepgram_streaming | Real-time | Live coaching, live agents |
| assemblyai + universal | Post-processing | Speaker diarization |
| assemblyai_streaming | Real-time | Real-time pipelines |
Expose your local webhook server publicly during development (e.g. ngrok http 3000).

After transcription.processed:
| Data | Endpoint |
|---|---|
| Transcript with speaker labels | GET /bots/{bot_id}/get_bot_transcript/{transcript_id} — get transcript_id from /detail first |
| Speaker timeline | GET /bots/{bot_id}/get_bot_speaker_timeline |
| In-meeting chat | GET /bots/{bot_id}/get_bot_chat |
| Audio file | GET /bots/{bot_id}/get_bot_audio |
| Video file | GET /bots/{bot_id}/get_bot_video |
| Participant list | GET /bots/{bot_id}/get_participants |
| Session metadata | GET /bots/{bot_id}/detail |
| Screenshot | GET /bots/{bot_id}/get_bot_screenshot |
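These paths can be centralized in one helper, together with a name fallback for get_participants, which returns a direct array with name fields that vary by platform. The paths are copied from the table above; the helpers themselves are illustrative:

```python
BASE_URL = "https://api.meetstream.ai/api/v1"

# Artifact name -> path suffix under /bots/{bot_id}/ (from the table above).
ARTIFACTS = {
    "transcript": "get_bot_transcript/{transcript_id}",
    "speaker_timeline": "get_bot_speaker_timeline",
    "chat": "get_bot_chat",
    "audio": "get_bot_audio",
    "video": "get_bot_video",
    "participants": "get_participants",
    "detail": "detail",
    "screenshot": "get_bot_screenshot",
}

def artifact_url(bot_id: str, kind: str, **ids) -> str:
    """Build the GET URL for a post-meeting artifact."""
    return f"{BASE_URL}/bots/{bot_id}/{ARTIFACTS[kind].format(**ids)}"

def participant_name(p: dict) -> str:
    """get_participants returns a direct array; fall back across name fields."""
    return p.get("fullName") or p.get("displayName") or p.get("name") or "Unknown"
```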
Default: 7 days. Override:
{ "recording_config": { "retention": { "type": "timed", "hours": 24 } } }
Delete manually: DELETE /bots/{bot_id}/delete_bot_data — fires data_deletion callback.
Common pitfalls:

- Fetching the transcript before transcription.processed fires.
- Forgetting audio_required: true — audio is not recorded without this flag.
- Omitting everyone_left_timeout — the bot runs forever.
- Setting recording_permission_denied_timeout under 60 — the API returns HTTP 400. Minimum is 60 seconds. The old default of 10 in docs was wrong.
- Passing bot_image_url as base64 — it must be a publicly accessible URL; MeetStream fetches it from your server.
- It's POST /calendar/create-calendar (hyphen), not create_calendar (underscore).
- Calling remove_bot as DELETE — it's GET /bots/{id}/remove_bot, not a DELETE method.
- get_participants returns an array directly — use /get_participants (not get_bot_participants); the response is a direct array [{ fullName, displayName, ... }]; map with p.fullName ?? p.displayName ?? p.name ?? 'Unknown'.
- nova-3 requires language: 'en' to be set explicitly in the provider config.
- custom_attributes with non-string values — all values must be strings; don't pass numbers or booleans.

Read these when building:
- references/code-patterns-node.md — complete Node.js/TypeScript implementations
- references/code-patterns-python.md — complete Python implementations
- references/api-reference.md — full endpoint map with params and return types

For specific endpoint parameters, response schemas, or edge cases not covered above:
https://docs.meetstream.ai/_mcp/server