From claude-transcription
Transcribe audio locally using Whisper (offline, no cloud). Use when the user asks for a local transcription, offline transcription, privacy-preserving transcription, or explicitly requests Whisper.
npx claudepluginhub danielrosehill/claude-code-plugins --plugin claude-transcription

This skill uses the workspace's default tool permissions.
Offline transcription with Whisper. Requires `setup-whisper` to have been run once.
faster-whisper (CTranslate2 backend) — 4x faster than openai-whisper at similar accuracy.
uv run --with faster-whisper python <<'PY'
from faster_whisper import WhisperModel
model = WhisperModel("medium", device="cpu", compute_type="int8")
segments, info = model.transcribe("INPUT.wav", beam_size=5, vad_filter=True)
for s in segments:
    print(f"[{s.start:.2f}s -> {s.end:.2f}s] {s.text}")
PY
- base / small → fast, decent quality, good default for English
- medium → good balance (default)
- large-v3 → best quality, slow on CPU

Configure `whisper_model` in config.json (default medium).
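A minimal sketch of reading that setting with a fallback to the documented default — the flat key layout and the config.json path are assumptions:

```python
import json
from pathlib import Path

def load_whisper_config(path="config.json"):
    """Read whisper settings, falling back to the documented defaults."""
    cfg = {"whisper_model": "medium", "whisper_device": "cpu"}
    p = Path(path)
    if p.exists():
        # user overrides win over defaults
        cfg.update(json.loads(p.read_text()))
    return cfg
```

With no config.json present, `load_whisper_config()` simply returns the defaults.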
Daniel's workstation has an AMD GPU. faster-whisper supports ROCm via CTranslate2 builds, but it is fragile; default to CPU unless the user has set `whisper_device: "cuda"` / `"rocm"`.
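A hedged sketch of mapping that setting to `WhisperModel` arguments — it assumes ROCm builds of CTranslate2, where available, address the GPU under the "cuda" device name, which varies by build and should be verified locally:

```python
def pick_device(cfg):
    """Map whisper_device from config to WhisperModel keyword arguments.

    CPU with int8 quantization is the safe default; GPU paths are
    taken only when the user has opted in via whisper_device.
    """
    requested = cfg.get("whisper_device", "cpu")
    if requested in ("cuda", "rocm"):
        # Assumption: the GPU backend is exposed as "cuda" even in
        # ROCm builds; float16 is the usual GPU compute type.
        return {"device": "cuda", "compute_type": "float16"}
    return {"device": "cpu", "compute_type": "int8"}
```

Usage would look like `WhisperModel(cfg["whisper_model"], **pick_device(cfg))`.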
<source_stem>.whisper.md — timestamped plain text.
<source_stem>.whisper.json — segments with start/end/text for downstream use.
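Both outputs can be produced from the same segment list; a minimal sketch, shown on plain dicts carrying faster-whisper's start/end/text fields:

```python
import json
from pathlib import Path

def write_outputs(segments, source):
    """Write <stem>.whisper.md (timestamped text) and <stem>.whisper.json (segments)."""
    stem = Path(source).with_suffix("")
    lines = [f"[{s['start']:.2f}s -> {s['end']:.2f}s] {s['text']}" for s in segments]
    Path(f"{stem}.whisper.md").write_text("\n".join(lines) + "\n")
    Path(f"{stem}.whisper.json").write_text(json.dumps(segments, indent=2))

write_outputs([{"start": 0.0, "end": 2.5, "text": "Hello."}], "demo.wav")
```

For `demo.wav` this yields `demo.whisper.md` and `demo.whisper.json` alongside the source file.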
If `which faster-whisper` fails, tell the user to run `/setup-whisper` first.