Install the plugin: `npx claudepluginhub dojocodinglabs/remotion-superpowers --plugin remotion-superpowers`

Want just this skill? Then install: `npx claudepluginhub u/[userId]/[slug]`
Full video production workflow for Remotion projects. Teaches how to orchestrate MCP tools (TTS, music, SFX, stock footage, video analysis) into complete Remotion compositions. Use this skill whenever producing a video that needs audio, voiceovers, music, stock footage, or analyzing existing video files.
This skill uses the workspace's default tool permissions.
# Remotion Production Workflow
This skill teaches how to produce complete videos with Remotion by orchestrating multiple MCP tools together. It covers the full pipeline from concept to rendered MP4.
## Available MCP Tools
You have access to these MCP servers for media production:
### remotion-media (via KIE)
- `generate_tts` — Text-to-speech voiceovers (ElevenLabs TTS)
- `generate_music` — Background music (Suno V3.5–V5)
- `generate_sfx` — Sound effects (ElevenLabs SFX V2)
- `generate_image` — AI images (Nano Banana Pro)
- `generate_video` — AI video clips (Veo 3.1)
- `generate_subtitles` — Transcribe audio/video to SRT (Whisper)
- `list_assets` — List all generated media in the project
### TwelveLabs (video understanding)
- Index and analyze video files
- Semantic search within videos ("find the part where...")
- Scene detection, object detection, speaker identification
- Video summarization
### Pexels (stock footage)
- `searchPhotos` — Search free stock photos
- `searchVideos` — Search free stock videos
- `getVideo` / `getPhoto` — Get details by ID
- `downloadVideo` — Download video to project
### ElevenLabs (optional — advanced voice)
- Voice cloning from audio samples
- Advanced TTS with custom voices
- Audio isolation and processing
- Transcription
### Replicate (optional — 100+ AI models)
- `replicate_run` — Run a model synchronously (images)
- `replicate_create_prediction` — Start async prediction (video)
- `replicate_get_prediction` — Poll prediction status
- Image models: FLUX 1.1 Pro, Imagen 4, Ideogram v3, FLUX Kontext
- Video models: Wan 2.5 (T2V, I2V), Kling 2.6 Pro
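The create/poll pair above follows the standard async-prediction pattern: start a job, then poll its status until it settles. A minimal sketch of that loop, assuming a hypothetical `getPrediction(id)` wrapper around the MCP tool (the status names match Replicate's documented prediction states):

```typescript
// Poll an async prediction until it succeeds or fails.
// `getPrediction` is a hypothetical wrapper around replicate_get_prediction.
interface Prediction {
  status: string; // 'starting' | 'processing' | 'succeeded' | 'failed' | 'canceled'
  output?: string;
}

async function waitForPrediction(
  getPrediction: (id: string) => Promise<Prediction>,
  id: string,
  intervalMs = 5000,
): Promise<string> {
  for (;;) {
    const p = await getPrediction(id);
    if (p.status === 'succeeded') return p.output!;
    if (p.status === 'failed' || p.status === 'canceled') {
      throw new Error(`Prediction ${id} ${p.status}`);
    }
    // Still starting/processing — wait before polling again.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Video generation can take minutes, so a generous polling interval keeps the loop cheap; the same shape works for any of the async video models listed above.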
## Production Pipeline
Read individual rule files for detailed workflows:
- `rules/production-pipeline.md` — End-to-end workflow from concept to final render
- `rules/audio-integration.md` — How to integrate generated audio into Remotion compositions
- `rules/voiceover-sync.md` — Syncing TTS voiceovers with animations and captions
- `rules/music-scoring.md` — Generating and timing background music
- `rules/stock-footage-workflow.md` — Searching, downloading, and using stock footage in Remotion
- `rules/video-analysis.md` — Using TwelveLabs to analyze and select clips from existing footage
- `rules/captions-workflow.md` — TikTok-style animated captions using @remotion/captions and Whisper
- `rules/animation-presets.md` — Reusable animation patterns (fade, slide, scale, typewriter, stagger)
- `rules/3d-content.md` — Three.js and React Three Fiber via @remotion/three
- `rules/data-visualization.md` — Animated charts, dashboards, and number counters
- `rules/visual-effects.md` — Light leaks, Lottie, film grain, vignettes, Ken Burns
- `rules/ci-rendering.md` — GitHub Actions workflows for automated video rendering
- `rules/replicate-models.md` — Replicate MCP model catalog, usage, and decision guide
- `rules/image-generation.md` — AI image prompt engineering, provider selection, Remotion integration
- `rules/video-generation.md` — AI video clip generation, I2V pipeline, sequencing in Remotion
- `rules/sound-effects.md` — SFX generation, prompt engineering, timing to visual events
- `rules/elevenlabs-advanced.md` — Voice cloning, custom TTS parameters, multi-voice scripts
- `rules/asset-management.md` — File organization, naming conventions, staticFile() reference
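A project layout along these lines keeps generated assets resolvable via `staticFile()`; the exact directory and file names below are illustrative (see `rules/asset-management.md` for the actual conventions):

```
public/
  audio/
    voiceover.mp3
    music.mp3
    whoosh.mp3
  video/
    stock-ocean.mp4
  images/
    hero.png
```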
## Key Principles
- **Audio drives timing** — Generate voiceover first, get its duration, then set composition length to match.
- **Assets go in `public/`** — All generated media files (audio, video, images) must be saved to the project's `public/` directory so Remotion can access them via `staticFile()`.
- **Use Remotion's audio components** — Always use the `<Audio>` component with `staticFile()` for audio. Never use HTML `<audio>` tags.
- **Frame-based timing** — Remotion uses frames, not seconds. Convert with `fps * seconds`. At 30fps, 1 second = 30 frames.
- **Progressive composition** — Build the video in layers: visuals first, then voiceover, then music, then SFX.
- **Preview frequently** — Use `npm run dev` to preview after each major change. The Remotion player updates live.
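The principles above can be sketched as a small timeline planner: the voiceover duration fixes the composition length, every offset is converted with `fps * seconds`, and music and SFX are layered on top. File names here are illustrative; in the composition itself each layer maps to an `<Audio src={staticFile(layer.file)} volume={layer.volume} />` inside a `<Sequence from={layer.fromFrame}>`.

```typescript
// Audio-first timeline plan: the voiceover duration drives everything else.
interface AudioLayer {
  file: string;      // path under public/, resolved via staticFile() in the composition
  fromFrame: number; // <Sequence from={...}> offset, in frames
  volume: number;    // <Audio volume={...}>
}

function planTimeline(
  voiceoverSeconds: number,
  fps: number,
): {durationInFrames: number; layers: AudioLayer[]} {
  // Composition length matches the voiceover; round up so no audio is cut off.
  const durationInFrames = Math.ceil(voiceoverSeconds * fps);
  return {
    durationInFrames,
    layers: [
      // Voiceover starts at frame 0 and defines the timeline.
      {file: 'audio/voiceover.mp3', fromFrame: 0, volume: 1},
      // Music sits under the voiceover at reduced volume.
      {file: 'audio/music.mp3', fromFrame: 0, volume: 0.2},
      // An SFX hit at the 2-second mark: fps * seconds frames in.
      {file: 'audio/whoosh.mp3', fromFrame: 2 * fps, volume: 0.8},
    ],
  };
}
```

For the voiceover duration itself, `getAudioDurationInSeconds()` from `@remotion/media-utils` can read it from the generated file inside a composition's `calculateMetadata` callback.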