From ai-video-producer
Generate a voice take (ElevenLabs or similar) and lip-sync it to an existing video clip of a character speaking. Use for talking-head shots where the visual is already generated.
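For the voice-take step, a minimal sketch of building an ElevenLabs text-to-speech request is shown below. The endpoint shape (`POST /v1/text-to-speech/{voice_id}` with an `xi-api-key` header) follows ElevenLabs' public REST API, but the function name, default model ID, and placeholder key are assumptions for illustration — verify against the current API docs before use:

```python
import json

def build_tts_request(text: str, voice_id: str,
                      model_id: str = "eleven_multilingual_v2"):
    """Sketch: assemble (url, headers, body) for an ElevenLabs TTS call.

    This only builds the request; sending it (e.g. with requests.post)
    and writing the returned audio bytes to a .wav/.mp3 file is left to
    the caller. Names and defaults here are illustrative assumptions.
    """
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
    headers = {
        "xi-api-key": "<YOUR_API_KEY>",  # placeholder, not a real key
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": text, "model_id": model_id})
    return url, headers, body

url, headers, body = build_tts_request("Hello there.", "abc123")
print(url)  # https://api.elevenlabs.io/v1/text-to-speech/abc123
```

Separating request construction from the network call keeps the path and payload conventions testable without an API key.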
npx claudepluginhub danielrosehill/claude-code-plugins --plugin ai-video-producer

This skill uses the workspace's default tool permissions.
Generates a voice line and conforms an existing silent character clip to it.
Workflow:

1. Read the line to deliver from scripts/final/script.md (or the shot brief).
2. Locate the source clip in clips/raw/ or generation/image-to-video/.
3. Check brief/tools-and-models.md for the voice tool and model. If a voice clone is in use, source audio lives at assets/voice-clones/<name>.wav; clones are stored under assets/voice-clones/.
4. Generate the voice take and save it to generation/voice/NN-shortname-vN.wav. Save script + model + voice ID to generation/prompts/NN-shortname-vN-voice.md. Log it.
5. Lip-sync the clip to the voice take and save the result to generation/lip-sync/NN-shortname-vN.mp4. Log it.
6. Place the finished take in clips/raw/ and suggest /promote-take.
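The NN-shortname-vN naming convention used across the voice, prompt-log, and lip-sync paths above can be computed with one small helper. This is an illustrative sketch, not part of the skill itself (`take_paths` is a hypothetical name):

```python
from pathlib import Path

def take_paths(shot_number: int, shortname: str, version: int,
               root: Path = Path(".")) -> dict[str, Path]:
    """Build the three output paths implied by the skill's convention:
    generation/voice/NN-shortname-vN.wav, its prompt log, and the
    lip-synced MP4. (Hypothetical helper for illustration.)"""
    stem = f"{shot_number:02d}-{shortname}-v{version}"
    return {
        "voice": root / "generation" / "voice" / f"{stem}.wav",
        "prompt_log": root / "generation" / "prompts" / f"{stem}-voice.md",
        "lip_sync": root / "generation" / "lip-sync" / f"{stem}.mp4",
    }

paths = take_paths(3, "intro", 2)
print(paths["voice"])  # generation/voice/03-intro-v2.wav
```

Centralizing the naming in one function keeps take numbers, shortnames, and versions consistent across all three directories when iterating on takes.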