From deepgram-pack
Creates minimal Deepgram speech-to-text examples in TypeScript/Node.js and Python for transcribing audio URLs or local files. Use for quick starts, API testing, or setup validation.
```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin deepgram-pack
```
Minimal working examples for Deepgram speech-to-text. Transcribe an audio URL in 5 lines with `createClient` + `listen.prerecorded.transcribeUrl`. Includes local file transcription, Python equivalent, and Nova-3 model selection.
Prerequisites:

- `npm install @deepgram/sdk` completed
- `DEEPGRAM_API_KEY` environment variable set

```typescript
import { createClient } from '@deepgram/sdk';

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);

async function main() {
  const { result, error } = await deepgram.listen.prerecorded.transcribeUrl(
    { url: 'https://static.deepgram.com/examples/Bueller-Life-moves-702702706.wav' },
    {
      model: 'nova-3',    // Latest model — best accuracy
      smart_format: true, // Auto-punctuation, paragraphs, numerals
      language: 'en',
    }
  );
  if (error) throw error;

  const transcript = result.results.channels[0].alternatives[0].transcript;
  console.log('Transcript:', transcript);
  console.log('Confidence:', result.results.channels[0].alternatives[0].confidence);
}

main();
```
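The response nests the transcript several levels deep, and an empty recording can leave those arrays empty. As a sketch (this helper and the trimmed-down response type are my own, not part of the SDK), extraction can be guarded with optional chaining:

```typescript
// Hypothetical helper, not part of the SDK: safely pull the top transcript
// out of a prerecorded-response shape like the one shown above.
interface PrerecordedResult {
  results: {
    channels: { alternatives: { transcript: string; confidence: number }[] }[];
  };
}

function bestTranscript(result: PrerecordedResult): string {
  // Optional chaining guards against empty channels/alternatives arrays.
  const alt = result.results.channels[0]?.alternatives[0];
  return alt?.transcript ?? '';
}

console.log(bestTranscript({
  results: {
    channels: [{ alternatives: [{ transcript: 'Life moves pretty fast.', confidence: 0.98 }] }],
  },
})); // Life moves pretty fast.
```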
```typescript
import { createClient } from '@deepgram/sdk';
import { readFileSync } from 'fs';

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);

async function transcribeFile(filePath: string) {
  const audio = readFileSync(filePath);
  const { result, error } = await deepgram.listen.prerecorded.transcribeFile(
    audio,
    {
      model: 'nova-3',
      smart_format: true,
      // Deepgram auto-detects format, but you can specify:
      mimetype: 'audio/wav',
    }
  );
  if (error) throw error;
  console.log(result.results.channels[0].alternatives[0].transcript);
}

transcribeFile('./meeting-recording.wav');
```
```python
import os

from deepgram import DeepgramClient, PrerecordedOptions

client = DeepgramClient(os.environ["DEEPGRAM_API_KEY"])

# URL transcription
url = {"url": "https://static.deepgram.com/examples/Bueller-Life-moves-702702706.wav"}
options = PrerecordedOptions(model="nova-3", smart_format=True, language="en")
response = client.listen.rest.v("1").transcribe_url(url, options)

transcript = response.results.channels[0].alternatives[0].transcript
print(f"Transcript: {transcript}")
print(f"Confidence: {response.results.channels[0].alternatives[0].confidence}")

# Local file transcription
with open("meeting.wav", "rb") as audio:
    source = {"buffer": audio.read(), "mimetype": "audio/wav"}

response = client.listen.rest.v("1").transcribe_file(source, options)
print(response.results.channels[0].alternatives[0].transcript)
```
```typescript
// Enable diarization (speaker identification)
const { result } = await deepgram.listen.prerecorded.transcribeUrl(
  { url: audioUrl },
  {
    model: 'nova-3',
    smart_format: true,
    diarize: true,    // Speaker labels
    utterances: true, // Turn-by-turn segments
    paragraphs: true, // Paragraph formatting
  }
);

// Print speaker-labeled output
if (result.results.utterances) {
  for (const utterance of result.results.utterances) {
    console.log(`Speaker ${utterance.speaker}: ${utterance.transcript}`);
  }
}
```
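The raw utterance list often splits one speaker turn across several short segments. As a post-processing sketch (the helper and the trimmed-down `Utterance` type are my own, not part of the SDK), consecutive same-speaker utterances can be merged into single turns:

```typescript
// Hypothetical post-processing: merge consecutive utterances from the same
// speaker into one turn for cleaner speaker-labeled output.
interface Utterance {
  speaker: number;
  transcript: string;
}

function mergeTurns(utterances: Utterance[]): Utterance[] {
  const turns: Utterance[] = [];
  for (const u of utterances) {
    const last = turns[turns.length - 1];
    if (last && last.speaker === u.speaker) {
      last.transcript += ' ' + u.transcript; // same speaker keeps talking
    } else {
      turns.push({ ...u }); // new speaker starts a new turn
    }
  }
  return turns;
}

const merged = mergeTurns([
  { speaker: 0, transcript: 'Hello.' },
  { speaker: 0, transcript: 'How are you?' },
  { speaker: 1, transcript: 'Fine, thanks.' },
]);
console.log(merged.length); // 2
```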
| Model | Use Case | Speed | Accuracy |
|---|---|---|---|
| `nova-3` | General — best accuracy | Fast | Highest |
| `nova-2` | General — proven stable | Fast | Very High |
| `nova-2-meeting` | Conference rooms, multiple speakers | Fast | High |
| `nova-2-phonecall` | Low-bandwidth phone audio | Fast | High |
| `base` | Cost-sensitive, high-volume | Fastest | Good |
| `whisper-large` | Multilingual (100+ languages) | Slow | High |
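If your application handles several kinds of audio, the table above can be centralized in one place. A minimal sketch, assuming a hypothetical `pickModel` helper of my own (the context categories mirror the table, not any SDK enum):

```typescript
// Hypothetical helper: choose a Deepgram model name from the audio context.
// The mapping follows the model table; adjust to your own workload.
type AudioContext = 'general' | 'meeting' | 'phonecall' | 'multilingual' | 'bulk';

function pickModel(context: AudioContext): string {
  switch (context) {
    case 'meeting':      return 'nova-2-meeting';   // conference rooms, multiple speakers
    case 'phonecall':    return 'nova-2-phonecall'; // low-bandwidth phone audio
    case 'multilingual': return 'whisper-large';    // 100+ languages, slower
    case 'bulk':         return 'base';             // cost-sensitive, high volume
    default:             return 'nova-3';           // best general accuracy
  }
}

console.log(pickModel('phonecall')); // nova-2-phonecall
```

The returned string can be passed directly as the `model` option in any of the transcription calls shown earlier.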
```shell
# TypeScript
npx tsx hello-deepgram.ts

# Python
python hello_deepgram.py
```
| Error | Cause | Solution |
|---|---|---|
| `401 Unauthorized` | Invalid API key | Check `DEEPGRAM_API_KEY` |
| `400 Bad Request` | Unsupported audio format | Use WAV, MP3, FLAC, OGG, or M4A |
| Empty transcript | No speech in audio | Verify audio has audible speech |
| `ENOTFOUND` | URL not reachable | Check the audio URL is publicly accessible |
| `Cannot find module '@deepgram/sdk'` | SDK not installed | Run `npm install @deepgram/sdk` |
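The troubleshooting table above can also be turned into code. A rough sketch, assuming a hypothetical `suggestFix` helper of my own (the loose `status`/`message` shape is an assumption, not the SDK's error type):

```typescript
// Hypothetical helper: map a failed transcription attempt to the fix
// suggested in the troubleshooting table.
function suggestFix(error: { status?: number; message?: string }): string {
  if (error.status === 401) return 'Check DEEPGRAM_API_KEY is set and valid';
  if (error.status === 400) return 'Use a supported format: WAV, MP3, FLAC, OGG, or M4A';
  if (error.message?.includes('ENOTFOUND')) return 'Check the audio URL is publicly accessible';
  if (error.message?.includes("Cannot find module '@deepgram/sdk'"))
    return 'Run npm install @deepgram/sdk';
  return 'Unrecognized error: ' + (error.message ?? 'no message');
}

console.log(suggestFix({ status: 401 }));
console.log(suggestFix({ message: 'getaddrinfo ENOTFOUND example.com' }));
```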
Proceed to deepgram-core-workflow-a for production transcription patterns or deepgram-core-workflow-b for live streaming.