```shell
npx claudepluginhub galbaz1/video-research-mcp
```

This skill uses the workspace's default tool permissions.
You have access to the `video-explainer-mcp` MCP server, which wraps the [video_explainer](https://github.com/prajwal-y/video_explainer) CLI to synthesize explainer videos from text content.
This server is a synthesis companion to video-research-mcp. Research extracts knowledge; this server turns it into video. The pipeline is: content → script → narration → scenes → voiceover → storyboard → render.
| I want to... | Use this tool |
|---|---|
| Create a new video project | `explainer_create` |
| Feed content into a project | `explainer_inject` |
| Check project progress | `explainer_status` |
| List all projects | `explainer_list` |
| Run the full pipeline | `explainer_generate` |
| Run one pipeline step | `explainer_step` |
| Preview render (blocking) | `explainer_render` |
| Start background render | `explainer_render_start` |
| Check render progress | `explainer_render_poll` |
| Generate short-form video | `explainer_short` |
| Improve a step's output | `explainer_refine` |
| Add iterative feedback | `explainer_feedback` |
| Verify script accuracy | `explainer_factcheck` |
| Add sound effects | `explainer_sound` |
| Add background music | `explainer_music` |
Steps must run in order. Each step depends on the previous step's output:
1. script — Generate video script from input content
2. narration — Convert script to narration text
3. scenes — Generate scene descriptions
4. voiceover — Synthesize speech audio (TTS)
5. storyboard — Create visual storyboard
6. render — Combine into final video
Use `explainer_generate` to run all steps, or `explainer_step` for one at a time.
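Driving the pipeline step by step looks like the following call sketch — note that the `step` parameter name is an assumption here, not confirmed by the tool's schema:

```python
# Run the pipeline one step at a time (step parameter name assumed)
for step in ["script", "narration", "scenes", "voiceover", "storyboard", "render"]:
    explainer_step(project_id="my-video", step=step)
```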
Before running the pipeline, inject content:
```python
explainer_create(project_id="quantum-computing")
explainer_inject(
    project_id="quantum-computing",
    content="# Quantum Computing\n\nKey concepts:\n- Superposition...",
    filename="research.md",
)
```
Content can be:
- `research_deep` output
- `video_analyze` output

For long renders (1080p, 4K), use the start/poll pattern:
```python
# Start the render in the background
result = explainer_render_start(project_id="my-video", resolution="1080p", fast=False)
job_id = result["job_id"]

# Poll every 30 seconds
status = explainer_render_poll(job_id=job_id)
# status["status"] is "pending", "running", "completed", or "failed"
```
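Since each poll call returns immediately, a long render needs a polling loop. A generic sketch of the pattern — the helper and its parameters are ours, not part of the server; in practice `poll` would be a closure over `explainer_render_poll(job_id=...)`:

```python
import time

def wait_for_render(poll, interval=30, timeout=3600):
    """Poll a status function until the job completes, fails, or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = poll()  # e.g. lambda: explainer_render_poll(job_id=job_id)
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("render did not finish within the timeout")
```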
After running the pipeline:
- `explainer_factcheck(project_id)` — Verify claims
- `explainer_feedback(project_id, "Make the intro more engaging")` — Add notes
- `explainer_refine(project_id, phase="script")` — Improve a specific phase
- `explainer_generate(project_id, from_step="narration")` — Re-run from a given step

Supported TTS providers:

| Provider | Quality | Cost | Timestamps | Status |
|---|---|---|---|---|
| `mock` | None | Free | N/A | Default — for testing |
| `elevenlabs` | Excellent | $165-330/1M chars | Native | Recommended |
| `openai` | Good | $15/1M chars | Whisper | Budget alternative |
| `gemini` | Good | ~$16/1M chars | Whisper | Experimental |
| `edge` | Variable | Free | Native | Deprecated — auth issues |
Set via `EXPLAINER_TTS_PROVIDER` in `~/.config/video-research-mcp/.env`.
When using elevenlabs as TTS provider, configure voice characteristics via env vars:
| Variable | Range | Default | Effect |
|---|---|---|---|
| `ELEVENLABS_VOICE_ID` | voice ID string | Rachel | Which voice to use |
| `ELEVENLABS_STABILITY` | 0.0-1.0 | 0.45 | Lower = more expressive, higher = more consistent |
| `ELEVENLABS_SIMILARITY_BOOST` | 0.0-1.0 | 0.75 | How closely to match the reference voice |
| `ELEVENLABS_SPEED` | 0.7-1.2 | 1.0 | Speech pacing (0.7 = slow, 1.2 = fast) |
These can also be set per-project in `config.yaml` under `tts:`.
Recommended settings for narration: stability=0.45, similarity=0.75, speed=1.0 (natural pacing with emotional range).
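As a sketch, a `~/.config/video-research-mcp/.env` using the recommended settings above — the voice ID value is a placeholder for whichever ElevenLabs voice you pick:

```shell
# ~/.config/video-research-mcp/.env
EXPLAINER_TTS_PROVIDER=elevenlabs
ELEVENLABS_VOICE_ID=Rachel            # placeholder; any ElevenLabs voice ID
ELEVENLABS_STABILITY=0.45             # expressive but consistent
ELEVENLABS_SIMILARITY_BOOST=0.75
ELEVENLABS_SPEED=1.0                  # natural pacing
```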
For complex videos, create a POD (Production Order Document) first — a structured blueprint with script, storyboard, audio direction, and visual specs. PODs live in `docs/plans/` and can be used as input via `/ve:explainer` → "From a Production Order".
All tools return error dicts on failure:

```json
{
  "error": "description",
  "category": "SUBPROCESS_FAILED",
  "hint": "actionable fix",
  "retryable": false
}
```
Common categories:
- `EXPLAINER_NOT_FOUND` — Set `EXPLAINER_PATH`
- `PROJECT_NOT_FOUND` — Check project ID with `explainer_list`
- `NODE_NOT_FOUND` — Install Node.js 20+
- `FFMPEG_NOT_FOUND` — Install FFmpeg
- `TTS_FAILED` — Check TTS provider and API key
- `RENDER_FAILED` — Check Remotion installation
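Because every tool reports failure through the same dict shape, callers can branch on it uniformly. A minimal sketch — the helper name is ours, not part of the server:

```python
def raise_for_error(result):
    """Raise if an MCP tool result is an error dict; otherwise pass it through."""
    if isinstance(result, dict) and "error" in result:
        detail = f"{result.get('category')}: {result['error']}"
        hint = result.get("hint")
        if hint:
            detail += f" (hint: {hint})"
        raise RuntimeError(detail)
    return result
```

A caller can then wrap any tool result, e.g. `status = raise_for_error(explainer_status(project_id="my-video"))`, and decide whether to retry based on the `retryable` flag before raising.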