# claude-transcription
Transcribe a podcast episode and produce two outputs in one pass — (1) the raw API transcript exactly as the provider returned it, and (2) a podcast-formatted reading version with filler words removed, section headers added, and speaker labels preserved. Use whenever the user provides a podcast audio file or says "transcribe this podcast".
```shell
npx claudepluginhub danielrosehill/claude-code-plugins --plugin claude-transcription
```

This skill uses the workspace's default tool permissions.
Single skill that yields **both** outputs the user wants every time for a podcast:
1. **Raw transcript**: the API output exactly as the provider returned it, speaker-labelled.
2. **Podcast reading version**: filler words removed, section headers (`## Topic`) added, speaker labels preserved (**Daniel:** ...).

Both files are written to disk. The reading version is what gets pasted into blog posts, show notes, or summaries; the raw version stays for verification.
Provider order, per the plugin's CLAUDE.md: AssemblyAI first, Gemini MCP as fallback.
- Primary: `transcribe-assemblyai` (gives diarization + timestamps for free).
- Fallback: `transcribe-gemini-raw`.

Do not ask the user which provider to use; just use AssemblyAI unless they specified one.
Submit the audio with diarization and formatting enabled (`speaker_labels: true`, `punctuate: true`, `format_text: true`).
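Those settings map to AssemblyAI's v2 transcript endpoint. A minimal sketch of the request body; the endpoint URL and field names follow AssemblyAI's REST API, while the helper function is illustrative and not part of the plugin:

```python
# Hypothetical helper: build the body for POST /v2/transcript.
ASSEMBLYAI_ENDPOINT = "https://api.assemblyai.com/v2/transcript"

def build_transcript_request(audio_url: str) -> dict:
    """Request body enabling diarization and formatted text."""
    return {
        "audio_url": audio_url,
        "speaker_labels": True,  # per-utterance speaker diarization
        "punctuate": True,       # add punctuation to the transcript
        "format_text": True,     # apply casing/number formatting
    }
```

The response ID is then polled until the transcript status is complete; polling is omitted here for brevity.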
The skill writes three files:

- `<source_stem>.raw.md`: speaker-labelled prose, no timestamps in this file (timestamps are kept in the sidecar JSON for downstream skills). One paragraph per speaker turn.
- `<source_stem>.assemblyai.json`: the full API payload, kept for downstream tools.
- `<source_stem>.podcast.md`: the reading version.
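The raw.md layout (one speaker-labelled paragraph per turn) can be sketched from the payload's `utterances` array, which is how AssemblyAI returns diarized turns (speaker code plus text); the function name here is hypothetical:

```python
# Sketch: render AssemblyAI utterances as raw.md paragraphs,
# one "**Speaker X:** text" paragraph per speaker turn.
def utterances_to_raw_md(payload: dict) -> str:
    paragraphs = []
    for turn in payload.get("utterances", []):
        # AssemblyAI labels speakers "A", "B", ... by default.
        label = f"Speaker {turn['speaker']}"
        paragraphs.append(f"**{label}:** {turn['text']}")
    return "\n\n".join(paragraphs)
```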
For `<source_stem>.podcast.md`:

- Remove filler words: um, uh, er, ah, you know (filler use), like (filler use), I mean (filler use), sort of, kind of (filler use), right? (tic).
- Label speakers as **Daniel:**, **Guest:**, etc. If `<source_stem>_speakers.json` exists, use those names; otherwise map AssemblyAI's A/B/C to Speaker A / Speaker B etc., and ask the user to confirm names if it's a 2+ speaker episode.
- Insert `##` section headers at natural topic transitions. Aim for one header every 3–8 minutes of content. Headers should be short, descriptive, sentence-case (`## Why diarization matters`, not `## Diarization`). Do not invent topics; derive headers from what's actually said.
- Add `# <Episode Title>` at the top if the user supplied one (or it's inferable from the source filename); otherwise skip the H1.

| File | Contents |
|---|---|
| `<stem>.raw.md` | Raw transcript, speaker labels, no editing |
| `<stem>.assemblyai.json` | Full AssemblyAI JSON (timestamps, confidences) |
| `<stem>.podcast.md` | Reading version: fillers removed, headers added |
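The filler-removal step is editorial in the skill itself, since words like "like" and "you know" are only fillers in context. As a rough approximation of the unambiguous cases only, a regex pass might look like this (names and pattern are illustrative):

```python
import re

# Hypothetical sketch: strip unambiguous fillers (um/uh/er/ah) plus any
# trailing comma or period and whitespace. Context-dependent fillers
# ("like", "you know") are deliberately left alone here.
FILLERS = re.compile(r"\b(um|uh|er|ah)\b[,.]?\s*", flags=re.IGNORECASE)

def strip_fillers(text: str) -> str:
    cleaned = FILLERS.sub("", text)
    # Collapse doubled spaces left behind by removals.
    return re.sub(r"\s{2,}", " ", cleaned).strip()
```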
If the user wants only a single transcript rather than both outputs, invoke `transcribe-assemblyai` or `transcribe-gemini-cleaned` directly; the default is `transcribe-assemblyai`.