Onboarding for Claude-Transcription — registers the user's transcription and denoise backends (cloud APIs, MCP servers, local binaries, custom endpoints) into a user-data config file. No provider hardcoded; user picks what they have. Use when the user asks to configure claude-transcription, set up the plugin, add a new transcription backend, or pick a default provider.
```shell
npx claudepluginhub danielrosehill/claude-code-plugins --plugin claude-transcription
```

This skill uses the workspace's default tool permissions.
Interactive setup that registers the user's actual transcription and denoise backends, then writes them to a config file in the canonical user-data dir. Skills in this plugin read that config — no provider names, API endpoints, or keys are hardcoded.
${CLAUDE_USER_DATA:-${XDG_DATA_HOME:-$HOME/.local/share}/claude-plugins}/claude-transcription/config.json
Resolution order:
1. `$CLAUDE_USER_DATA/claude-transcription/config.json` if `$CLAUDE_USER_DATA` is set
2. `$XDG_DATA_HOME/claude-plugins/claude-transcription/config.json` if `$XDG_DATA_HOME` is set
3. `$HOME/.local/share/claude-plugins/claude-transcription/config.json`

Create parent directories with `mkdir -p` if missing. Never write under `~/.claude/`.
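The three-step fallback above collapses into a single nested parameter expansion, as a sketch:

```shell
# Resolve the config path with the documented precedence:
# $CLAUDE_USER_DATA first, then $XDG_DATA_HOME/claude-plugins,
# then $HOME/.local/share/claude-plugins.
config_dir="${CLAUDE_USER_DATA:-${XDG_DATA_HOME:-$HOME/.local/share}/claude-plugins}/claude-transcription"
config_file="$config_dir/config.json"
mkdir -p "$config_dir"        # create parents if missing; nothing touches ~/.claude/
printf '%s\n' "$config_file"
```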
If ~/.config/claude-transcription/config.json exists (legacy, schema v1) and the new path does not, offer to migrate. The legacy schema is flat; convert it to schema v2 (below) by mapping the single transcription_provider / denoise_provider strings into the providers list with default: true.
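The v1-to-v2 mapping can be sketched with jq. The `type` values below are placeholders: v1 stored only a provider name string, so the real migration flow must ask the user which type each provider is. Temp-file paths here are for illustration only; the real target is the canonical user-data path above.

```shell
# Sketch: map a flat v1 config into the v2 providers lists with jq.
legacy="${TMPDIR:-/tmp}/ct-v1.json"
migrated="${TMPDIR:-/tmp}/ct-v2.json"
cat > "$legacy" <<'EOF'
{"transcription_provider": "whisper-local", "denoise_provider": "rnnoise"}
EOF
jq '{schema_version: 2,
    providers: {
      transcription: [{alias: .transcription_provider, type: "local-binary", default: true}],
      denoise:       [{alias: .denoise_provider,       type: "local-binary", default: true}]
    }}' "$legacy" > "$migrated"
```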
```json
{
  "schema_version": 2,
  "providers": {
    "transcription": [
      {
        "alias": "<short alias the user uses to refer to this backend>",
        "type": "api | mcp | local-binary | custom-endpoint",
        "default": true,
        "...type-specific fields...": "..."
      }
    ],
    "denoise": [
      { "alias": "...", "type": "...", "default": true }
    ]
  }
}
```
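A filled-in instance of this schema may help; every alias, env var, and model name below is illustrative, not a plugin default:

```shell
# Write an example v2 config to a temp path (hypothetical values throughout).
example="${TMPDIR:-/tmp}/ct-example.json"
cat > "$example" <<'EOF'
{
  "schema_version": 2,
  "providers": {
    "transcription": [
      { "alias": "aai", "type": "api", "api_key_env": "ASSEMBLYAI_API_KEY", "default": true },
      { "alias": "whisper-local", "type": "local-binary", "binary_name": "whisper" }
    ],
    "denoise": [
      { "alias": "ffmpeg-dn", "type": "local-binary", "binary_name": "ffmpeg", "default": true }
    ]
  }
}
EOF
```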
| `type` | Required fields | Optional fields | Notes |
|---|---|---|---|
| `api` | `api_key_env` (env var name OR `op://...` reference) | `endpoint` (override default), `model` | Cloud API with bearer key |
| `mcp` | `mcp_server` (full MCP server name as it appears in the MCP list) | `tool_name` (specific tool to call) | Backend invoked via an MCP tool |
| `local-binary` | `binary_name` (`whisper`, `whisper.cpp`, `ffmpeg`, etc.) | `model`, `device`, `extra_args` | Local CLI binary |
| `custom-endpoint` | `endpoint` (URL) | `api_key_env`, `auth_header`, `request_format` | User's own server (vLLM, llama.cpp server, etc.) |
Credentials rule: never store the actual key in this file. Always reference via env var name or op://vault/item/field (resolved at runtime by op read). The plugin supports both forms.
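Both credential forms can be resolved at runtime with one small helper; `resolve_key` is a hypothetical name, and `op read` is the 1Password CLI command the rule refers to:

```shell
# Resolve a credential reference: either an env var name or an op:// URI.
# The actual secret never lives in the config file.
resolve_key() {
  ref="$1"
  case "$ref" in
    op://*) op read "$ref" ;;   # 1Password CLI resolves at runtime
    *)      printenv "$ref" ;;  # otherwise, the value of the named env var
  esac
}
```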
Ask the user, in order:
Which transcription backends do you want to register? Pick any number; I'll ask for details on each.
Suggestions (not exhaustive — accept any): faster-whisper or whisper.cpp (local binary).

For each picked backend, ask for:

- an alias (e.g. `gemini`, `aai`, `whisper-local`, `my-vllm`)
- its `type` (`api`, `mcp`, `local-binary`, or `custom-endpoint`)
- the fields that type requires (see the table above)

Then run the same loop for denoise backends.
Which transcription backend should I use when you say "transcribe this" without naming a provider?
Same for denoise. Mark the chosen entries with "default": true. Only one default per category.
For every `api` or `custom-endpoint` provider with an `api_key_env`: check that `printenv "$VARNAME"` returns a non-empty value. If it's empty, list the missing env vars at the end and tell the user to set them in their shell profile (`~/.bashrc`, `~/.zshenv`, etc.). Don't fail setup over missing keys — the user might set them later.
For op:// references: do not test-resolve during setup (would prompt for biometric unlock). Just record the reference.
For mcp providers: confirm the named MCP server appears in the user's installed MCP list (run claude mcp list and grep). Warn but don't fail if not.
For local-binary: check the binary is on $PATH (command -v <binary>). Warn if not found.
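The three checks above can be sketched as warn-only helpers (function names are hypothetical; none of them abort setup):

```shell
# Warn-only validation helpers; setup continues regardless of the outcome.
warn() { printf 'warn: %s\n' "$1"; }

check_env()    { [ -n "$(printenv "$1")" ] || warn "env var $1 is empty or unset"; }
check_binary() { command -v "$1" >/dev/null 2>&1 || warn "$1 not found on PATH"; }
check_mcp()    { claude mcp list 2>/dev/null | grep -q "$1" || warn "MCP server $1 not in claude mcp list"; }
```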
Beyond the full interview, support:

- `show` — pretty-print the current config (mask any field containing `key`, `token`, `secret`).
- `add <category>` — add one new backend (transcription or denoise) without re-asking for the others.
- `remove <category> <alias>` — remove a registered backend.
- `set-default <category> <alias>` — change which backend is the default.
- `validate` — re-run credential and binary checks against the existing config.

For non-interactive setup (e.g., from a script):
```shell
configure add transcription --alias gemini --type mcp --mcp-server jungle-shared__gemini-transcription --default
configure add transcription --alias aai --type api --api-key-env ASSEMBLYAI_API_KEY
configure set-default transcription aai
```
Validation rules:

- Aliases must be unique within a category (a transcription `gemini` and a denoise `gemini` are fine; two transcription `gemini` entries are not).
- `type` must be one of the four allowed values. Reject unknowns.
- `mcp_server` names should be validated against `claude mcp list` (warn if missing).
- Credentials are env var names or `op://` references only; never raw keys.
- No `default_output_dir` field — output paths are runtime-resolved per references/output-path-resolver.md.
- Never write under `~/.claude/`. That's Claude's config surface.
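The alias-uniqueness rule can be checked mechanically; `validate` could run something like this jq filter (a sketch, with a sample config inlined for illustration):

```shell
# Exits 0 iff every category's aliases are unique.
cfg="${TMPDIR:-/tmp}/ct-validate.json"
cat > "$cfg" <<'EOF'
{"schema_version": 2,
 "providers": {
   "transcription": [{"alias": "gemini", "type": "mcp", "default": true}],
   "denoise":       [{"alias": "gemini", "type": "api", "default": true}]
 }}
EOF
jq -e '.providers | to_entries
       | all(.value | map(.alias) | length == (unique | length))' "$cfg"
```

Note the same alias in different categories passes, matching the rule above.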