From openai-skills-13
Generates, edits, extends, polls, lists, downloads Sora AI videos; creates character references; runs local multi-video queues via CLI. For demos, marketing, UI mocks.
```
npx claudepluginhub joshuarweaver/cascade-code-languages-misc-1 --plugin openai-skills-13
```

This skill uses the workspace's default tool permissions.
Creates or manages Sora video jobs for the current project (product demos, marketing spots, cinematic shots, social clips, UI mocks). Defaults to `sora-2` with structured prompt augmentation and prefers the bundled CLI for deterministic runs. Note: `$sora` is a skill tag in prompts, not a shell command.
Searches, retrieves, and installs Agent Skills from prompts.chat registry using MCP tools like search_skills and get_skill. Activates for finding skills, browsing catalogs, or extending Claude.
Searches prompts.chat for AI prompt templates by keyword or category, retrieves by ID with variable handling, and improves prompts via AI. Use for discovering or enhancing prompts.
Checks Next.js compilation errors using a running Turbopack dev server after code edits. Fixes actionable issues before reporting complete. Replaces `next build`.
Supported operations: `create` (or `create-and-poll` when the user needs a ready asset in one step), `create-character`, `edit`, `extend`, `status`/poll/download, and `create-batch` (local fan-out, not the Batch API). API details live in `references/video-api.md`.

Guidelines:
- Default to `create`; use `create-and-poll` if they need a ready asset in one step.
- Build prompts with the augmentation flags (`--use-case`, `--scene`, `--camera`, etc.) instead of hand-writing a long structured prompt. If you already have a structured prompt file, pass `--no-augment`.
- Run jobs through the bundled CLI (`scripts/sora.py`) with sensible defaults. For long prompts, prefer `--prompt-file` to avoid shell-escaping issues.
- For targeted changes prefer `edit`; if they want the shot to continue in time, prefer `extend`.
- `OPENAI_API_KEY` must be set for live API calls. If the key is missing, give the user these steps:
- Set `OPENAI_API_KEY` as an environment variable in their system.

Defaults:
- Model: `sora-2` (use `sora-2-pro` for higher fidelity).
- Size: `1280x720`.
- Seconds: `4` (allowed: "4", "8", "12", "16", "20").
- `sora-2-pro` is required for `1920x1080` and `1080x1920`.

Implementation notes:
- Use the official SDK (`openai` package). If high-level SDK helpers lag the latest Sora guide, use low-level `client.post`/`get`/`delete` inside the official SDK rather than standalone HTTP code.
- Verify `OPENAI_API_KEY` before any live API call.
- Set `UV_CACHE_DIR=/tmp/uv-cache`.
- `input_reference` objects use either `file_id` or `image_url`; uploaded file paths use multipart.
- `create-batch` in `scripts/sora.py` is a local concurrent queue, not the official Batch API.
- Do not modify `scripts/sora.py` unless the user asks.
- When audio is requested, add `Audio:` and `Dialogue:` lines and keep it short.

Capabilities:
- Models: `sora-2` and `sora-2-pro`.
- Duration is set via the `seconds` parameter and currently supports 4, 8, 12, 16, and 20.
- Character references: 2-4 second non-human MP4s in 16:9 or 9:16, at 720p-1080p.
- Extensions: 20 seconds each, up to six times per source video, for a maximum total length of 120 seconds.
- The official Batch API supports `POST /v1/videos` only, with JSON bodies rather than multipart uploads (see `references/video-api.md` for the supported sizes).

Prompt augmentation: reformat prompts into a structured, production-oriented spec. Only make implicit details explicit; do not invent new creative requirements.
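As a sketch of the defaults above, a small helper that assembles a request body for `POST /v1/videos` and enforces the documented limits. The function name and structure are my own; the field names `model`, `prompt`, `size`, and `seconds` follow the Sora video API as described in this skill:

```python
ALLOWED_SECONDS = {"4", "8", "12", "16", "20"}
PRO_ONLY_SIZES = {"1920x1080", "1080x1920"}  # require sora-2-pro

def build_video_request(prompt, model="sora-2", size="1280x720", seconds="4"):
    """Assemble a JSON body for POST /v1/videos with the documented defaults."""
    seconds = str(seconds)
    if seconds not in ALLOWED_SECONDS:
        raise ValueError(f"seconds must be one of {sorted(ALLOWED_SECONDS)}")
    if size in PRO_ONLY_SIZES and model != "sora-2-pro":
        raise ValueError(f"{size} requires sora-2-pro")
    return {"model": model, "prompt": prompt, "size": size, "seconds": seconds}
```

The resulting dict can then be sent through the official SDK's low-level request methods when high-level helpers lag the latest Sora guide, rather than hand-rolling HTTP.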
Template (include only relevant lines):
Use case: <where the clip will be used>
Primary request: <user's main prompt>
Scene/background: <location, time of day, atmosphere>
Subject: <main subject>
Action: <single clear action>
Camera: <shot type, angle, motion>
Lighting/mood: <lighting + mood>
Color palette: <3-5 color anchors>
Style/format: <film/animation/format cues>
Timing/beats: <counts or beats>
Audio: <ambient cue / music / voiceover if requested>
Text (verbatim): "<exact text>"
Dialogue:
- Speaker: "Short line."
Constraints: <must keep/must avoid>
Avoid: <negative constraints>
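The template above can be rendered programmatically, including only the lines that are relevant. A minimal sketch (the field names mirror the template; the function itself is hypothetical, not part of `scripts/sora.py`):

```python
# Template fields in the order they appear in the spec above.
TEMPLATE_ORDER = [
    "Use case", "Primary request", "Scene/background", "Subject", "Action",
    "Camera", "Lighting/mood", "Color palette", "Style/format", "Timing/beats",
    "Audio", "Text (verbatim)", "Dialogue", "Constraints", "Avoid",
]

def render_structured_prompt(fields):
    """Emit only the template lines that were actually supplied, in order."""
    lines = []
    for key in TEMPLATE_ORDER:
        value = fields.get(key)
        if not value:
            continue
        if key == "Dialogue":  # dialogue is a list of short speaker lines
            lines.append("Dialogue:")
            lines.extend(f"- {line}" for line in value)
        else:
            lines.append(f"{key}: {value}")
    return "\n".join(lines)
```

Omitted keys simply produce no line, matching the "include only relevant lines" rule.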
Augmentation rules:
- If the prompt is already structured, pass `--no-augment` to avoid the tool re-wrapping it.

Example: product teaser
Use case: product teaser
Primary request: a close-up of a matte black camera on a pedestal
Action: slow 30-degree orbit over 4 seconds
Camera: 85mm, shallow depth of field, gentle handheld drift
Lighting/mood: soft key light, subtle rim, premium studio feel
Constraints: no logos, no text
Example: edit (same shot, new palette)
Primary request: same shot and framing, switch palette to teal/sand/rust with warmer backlight
Constraints: keep the subject and camera move unchanged
Example: character continuity (Mossy)
Primary request: Mossy, a moss-covered teapot mascot, hurries through a lantern-lit market at dusk
Camera: cinematic tracking shot, 35mm, shoulder height
Lighting/mood: warm dusk practicals, soft haze
Constraints: keep Mossy’s silhouette and moss texture consistent across the shot
Use `edit` for targeted changes and `extend` for timeline continuation.

Reference modules: use these modules when the request is for a specific artifact. They provide targeted templates and defaults.
- `references/cli.md`: how to run create/edit/extend/create-character/poll/download/local-queue flows via `scripts/sora.py`.
- `references/video-api.md`: API-level knobs (models, sizes, duration, characters, edits, extensions, official Batch API).
- `references/prompting.md`: prompt structure, character continuity, editing, and extension guidance.
- `references/sample-prompts.md`: copy/paste prompt recipes (examples only; no extra theory).
- `references/cinematic-shots.md`: templates for filmic shots.
- `references/social-ads.md`: templates for short social ad beats.
- `references/troubleshooting.md`: common errors and fixes.
- `references/codex-network.md`: network/approval troubleshooting.
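The local fan-out that `create-batch` performs can be sketched as a thread pool over per-prompt jobs (explicitly not the official Batch API). The worker signature here is an assumption for illustration:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_local_batch(prompts, create_fn, max_workers=3):
    """Fan prompts out to a local worker pool and collect results per prompt.

    create_fn(prompt) should submit one video job and return its result;
    failures are captured per prompt instead of aborting the whole queue.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(create_fn, p): p for p in prompts}
        for future in as_completed(futures):
            prompt = futures[future]
            try:
                results[prompt] = ("ok", future.result())
            except Exception as exc:
                results[prompt] = ("error", exc)
    return results
```

Because each job is an ordinary API call under the hood, one bad prompt surfaces as an `("error", exc)` entry while the rest of the queue completes.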