Install:

```
npx claudepluginhub galbaz1/video-research-mcp
```

This skill uses the workspace's default tool permissions.
Generate AI video using Veo (MCP tools) or Sora (direct API script). This skill covers provider selection, generation modes, defaults, and the draft-to-final workflow.
For full tool references, model IDs, camera reliability data, and negative prompt blocks, see references/provider-details.md.
Choose a provider before writing prompts.
| Need | Provider | Why |
|---|---|---|
| Native audio, 4K output, style/asset references, video extension | Veo | Veo 3.1 has the richer media-control surface |
| 1080p production, text-heavy scenes, clean draft/final model split | Sora | sora-2 drafts + sora-2-pro finals is the most reliable loop |
| High-value hero shot, uncertain which provider wins | Both | Generate with both, pick winner from review evidence |
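The selection table above can be sketched as a small helper. The `need` labels below are illustrative shorthand for this sketch, not part of either provider's API:

```python
def choose_provider(need: str) -> str:
    """Map a production need to a provider, following the selection table."""
    veo_needs = {"native_audio", "4k", "style_reference", "video_extension"}
    sora_needs = {"1080p_production", "text_heavy", "draft_final_split"}
    if need in veo_needs:
        return "veo"   # Veo 3.1 has the richer media-control surface
    if need in sora_needs:
        return "sora"  # sora-2 drafts + sora-2-pro finals
    return "both"      # hero shot or uncertain: generate with both, compare

print(choose_provider("native_audio"))  # veo
```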
Decision rules:

**Text-to-video.** Prompt describes the scene. Both providers support this.
- `mcp__veo__generate_video` -- supports `number_of_videos` (1-4) for multi-take
- `sora_direct.py create-and-poll` -- single generation per call

**Image-to-video.** Animate a static image. Source image quality is critical.
- `mcp__veo__animate_image` -- absolute path to the source image required
- `sora_direct.py create-and-poll --input-reference` -- auto-resizes/crops the source to match the output size

**Extend.** Continue an existing clip with new content.
- `mcp__veo__extend_video_clip` -- Veo 3.1 only, continues from the last second
- `sora_direct.py extend --id <video_id>` -- extends by prompt

**Reference images.** Generate video with 1-3 reference images for visual consistency.
- `mcp__veo__generate_video_with_style` -- Veo 3.1 only; reference type "asset" (preserves composition) or "style" (preserves palette/grain/lighting)

Veo defaults:

| Parameter | Draft | Final |
|---|---|---|
| Model | veo-3.1-fast-generate-preview | veo-3.1-generate-preview |
| Resolution | 720p | 1080p |
| Duration | 6s | 8s |
| Aspect ratio | 16:9 | 16:9 |
| Negative prompt | Always include default block (see references) | Always include default block |
Sora defaults:

| Parameter | Draft | Final |
|---|---|---|
| Model | sora-2 | sora-2-pro |
| Resolution | 1280x720 | 1920x1080 (or 1080x1920 portrait) |
| Duration | 4s | 4s (or 8s for extended scenes) |
| Draft count | 2 | 1 |
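As a sketch, the two defaults tables can be encoded as data so a wrapper never hardcodes model strings inline. The dictionary layout here is an assumption of this example, not part of either tool's interface:

```python
# Draft/final defaults, transcribed from the two tables above.
DEFAULTS = {
    ("veo", "draft"):  {"model": "veo-3.1-fast-generate-preview", "resolution": "720p",      "duration_s": 6},
    ("veo", "final"):  {"model": "veo-3.1-generate-preview",      "resolution": "1080p",     "duration_s": 8},
    ("sora", "draft"): {"model": "sora-2",                        "resolution": "1280x720",  "duration_s": 4, "count": 2},
    ("sora", "final"): {"model": "sora-2-pro",                    "resolution": "1920x1080", "duration_s": 4, "count": 1},
}

def defaults_for(provider: str, stage: str) -> dict:
    """Look up the default generation parameters for a provider/stage pair."""
    return DEFAULTS[(provider, stage)]
```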
Generate cheap drafts first, review, then produce the final with the quality model. This saves 60-70% on failed iterations.
Veo: draft with `veo-3.1-fast-generate-preview` at 720p, final with `veo-3.1-generate-preview` at 1080p.

Sora production pipeline:
1. `sora_direct.py production --stage drafts` -- generates 2 clips with sora-2 at 720p
2. `sora_direct.py review-drafts` -- Gemini scores each draft on 5 dimensions (prompt fidelity, temporal stability, surface realism, lighting coherence, text preservation)
3. `sora_direct.py finalize-from-review` -- selects the winner, launches the sora-2-pro final at 1080p

All three steps require `--run-dir` with an absolute path. Relative paths break the review subprocess.
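The three Sora stages might be driven from a wrapper like the one below. It is a sketch that only builds the command lines (enforcing the absolute `--run-dir` requirement) rather than invoking the script, and it assumes the `uv run --with requests python` invocation shown later in this document:

```python
import os

def build_pipeline_commands(run_dir: str) -> list:
    """Build the three production-stage invocations of sora_direct.py."""
    if not os.path.isabs(run_dir):
        raise ValueError("--run-dir must be absolute; relative paths break the review subprocess")
    script = "sora_direct.py"  # the real path lives under the skill's scripts/ directory
    base = ["uv", "run", "--with", "requests", "python", script]
    return [
        base + ["production", "--stage", "drafts", "--run-dir", run_dir],
        base + ["review-drafts", "--run-dir", run_dir],
        base + ["finalize-from-review", "--run-dir", run_dir],
    ]
```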
Camera choice directly affects generation success rate. Summary:
| Movement | Success Rate | Recommendation |
|---|---|---|
| Static | 94-97% | Default for hero shots, UI demos |
| Zoom | 81-87% | Good for reveals and emphasis |
| Pan | 73-85% | Acceptable for environment shots |
| Tilt | 67-81% | Use descriptive phrasing, not "tilt" |
| Tracking | 58-68% | B-roll only, generate 3 variants |
| Crane | 44-52% | Expect retakes |
| Combined | ~29% | Never -- split into separate shots |
For movements below 70%, always generate 3 variants. Full percentages and prompt phrasing in references/provider-details.md.
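The "generate 3 variants below 70%" rule can be expressed as a tiny lookup. This sketch uses the lower bound of each success-rate range from the table above, so borderline movements like tilt are treated conservatively:

```python
# Lower-bound success rates per camera movement, from the table above.
CAMERA_SUCCESS_FLOOR = {
    "static": 0.94, "zoom": 0.81, "pan": 0.73, "tilt": 0.67,
    "tracking": 0.58, "crane": 0.44, "combined": 0.29,
}

def variants_for(movement: str) -> int:
    """How many takes to generate: 3 below the 70% reliability line, else 1."""
    return 3 if CAMERA_SUCCESS_FLOOR[movement] < 0.70 else 1

print(variants_for("tracking"))  # 3
```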
Requirements:
- `extend_video_clip` and `generate_video_with_style` require Veo 3.1
- `OPENAI_API_KEY` must be set
- `--run-dir` must be absolute for production workflows
- `--size`

Cost reference:

| Provider | Model | Resolution | Duration | Cost |
|---|---|---|---|---|
| Veo | 3.1 Standard | 1080p | 8s | ~$3.20 |
| Veo | 3.1 Fast | 720p | 8s | ~$1.20 |
| Sora | sora-2 | 720p | 4s | ~$0.85 |
| Sora | sora-2-pro | 1080p | 4s | ~$3.50 |
| Sora | sora-2-pro | 1080p | 8s | ~$5.60 |
Typical scene (draft + final): $4-6 total.
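Using the approximate per-clip prices from the table, a scene's draft-plus-final budget is simple arithmetic. The key tuples below are this sketch's own encoding of the table rows, not a billing API:

```python
# Approximate per-clip cost in USD, transcribed from the cost table above.
COST = {
    ("veo", "3.1-standard", "8s"): 3.20,
    ("veo", "3.1-fast", "8s"): 1.20,
    ("sora", "sora-2", "4s"): 0.85,
    ("sora", "sora-2-pro-1080p", "4s"): 3.50,
    ("sora", "sora-2-pro-1080p", "8s"): 5.60,
}

def scene_cost(draft_key, final_key, draft_count=1):
    """Total cost of a scene: N draft clips plus one final clip."""
    return draft_count * COST[draft_key] + COST[final_key]

# Sora loop: two sora-2 drafts plus one pro final -- roughly $5.20,
# inside the $4-6 typical range quoted above.
print(scene_cost(("sora", "sora-2", "4s"), ("sora", "sora-2-pro-1080p", "4s"), draft_count=2))
```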
Follow this template for both providers:
```
[Camera+Lens]: [Subject with physical detail] [Action with force verbs],
in [Setting with atmosphere], lit by [Named physical light source].
Style: [Texture micro-details, film grain]. Audio: [Ambient/SFX].
```
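A prompt following the template can be assembled mechanically. The field names in this sketch are its own; only the sentence shape comes from the template:

```python
def build_prompt(camera_lens, subject, action, setting, light, style, audio):
    """Assemble a prompt in the [Camera+Lens] template shown above."""
    return (
        f"{camera_lens}: {subject} {action}, in {setting}, lit by {light}. "
        f"Style: {style}. Audio: {audio}."
    )

print(build_prompt(
    "Static 35mm lens", "a brushed-steel espresso machine", "vents steam in sharp bursts",
    "a dim workshop with drifting dust", "a single tungsten work lamp",
    "worn metal micro-scratches, fine film grain", "low room tone, hiss of steam",
))
```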
Setup:

- Veo: MCP server `veo` registered in `~/.claude/.mcp.json`; requires `GOOGLE_API_KEY` or `GEMINI_API_KEY`; output is written to `~/Videos/veo-generated/`
- Sora: script at `/Users/fausto_home/.claude/skills/sora/scripts/sora_direct.py`; requires `OPENAI_API_KEY`; invoke with `uv run --with requests python <script> <command> [args]`