From claude-transcription
One-time setup for local Whisper transcription — installs faster-whisper and downloads a default model. Use when the user asks to set up whisper, install whisper, or prepare for local transcription.
npx claudepluginhub danielrosehill/claude-code-plugins --plugin claude-transcription

This skill uses the workspace's default tool permissions.
One-time install for offline transcription.
Ensure uv is available (which uv). If not, install: curl -LsSf https://astral.sh/uv/install.sh | sh.
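The availability check can be sketched as follows (the install command is the one from the line above; this sketch only reports status and deliberately does not install anything itself):

```shell
# Check whether uv is on PATH; report the install command if it is missing.
if command -v uv >/dev/null 2>&1; then
  UV_STATUS="available"
else
  UV_STATUS="missing"
  echo "uv not found; install with: curl -LsSf https://astral.sh/uv/install.sh | sh"
fi
```

Running the installer only on the missing branch keeps the step idempotent: re-running setup on a machine that already has uv changes nothing.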
Install faster-whisper as a tool:
uv tool install faster-whisper
(Or create a dedicated venv if the user prefers — check preference.)
Pre-download the default model (medium) so the first real transcription doesn't block:
uv run --with faster-whisper python -c "from faster_whisper import WhisperModel; WhisperModel('medium')"
Write/update ~/.config/claude-transcription/config.json with:
{ "whisper_installed": true, "whisper_model": "medium", "whisper_device": "cpu" }
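A minimal shell sketch of this step, using the exact path and keys shown above (creating the parent directory first so the write succeeds on a fresh machine):

```shell
# Persist setup state for later transcription runs.
CONFIG_DIR="${HOME}/.config/claude-transcription"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/config.json" <<'EOF'
{ "whisper_installed": true, "whisper_model": "medium", "whisper_device": "cpu" }
EOF
```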
Confirm with a 5-second synthetic test (optional) — or just tell the user setup is complete.
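Later transcription runs can read that config to pick the model and device. A hedged sketch — `load_transcription_config` is a hypothetical helper name, and the commented-out calls assume faster-whisper is installed as above:

```python
import json
import pathlib

def load_transcription_config(path="~/.config/claude-transcription/config.json"):
    """Read the skill's config, falling back to CPU/medium defaults if missing."""
    p = pathlib.Path(path).expanduser()
    if not p.exists():
        return {"whisper_installed": False,
                "whisper_model": "medium",
                "whisper_device": "cpu"}
    return json.loads(p.read_text())

cfg = load_transcription_config()
# With faster-whisper installed, a transcription would then look like:
# from faster_whisper import WhisperModel
# model = WhisperModel(cfg["whisper_model"], device=cfg["whisper_device"])
# segments, info = model.transcribe("audio.wav")
```

The defaults mirror the config written during setup, so a missing or deleted config degrades gracefully instead of crashing.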
Ask the user which model they want pre-downloaded:
base — 150 MB, fastest
small — 500 MB, good for most use
medium — 1.5 GB, default balance
large-v3 — 3 GB, highest accuracy

If the user wants ROCm acceleration, warn that CTranslate2 ROCm support is experimental and document the extra steps (install ctranslate2 built against ROCm, set whisper_device: "cuda"). Default to CPU.