Use when the user wants to execute a saved prompt against the official fal-ai MCP server and save the resulting image or video into the project's visuals directory.
Install the plugin:

```shell
npx claudepluginhub danielrosehill/claude-code-plugins --plugin visual-communications
```

This skill uses the workspace's default tool permissions.
Execute a saved prompt against the **official fal-ai MCP server** (https://mcp.fal.ai/mcp), then save the output into the project's `visuals/` directory and update the prompt YAML.
Use this skill when the user has a saved prompt in `prompts/<slug>.yaml` and wants to generate the visual.

This skill targets fal's official remote MCP server. If it isn't connected, install it once:
```shell
claude mcp add --transport http fal-ai https://mcp.fal.ai/mcp \
  --header "Authorization: Bearer $FAL_KEY"
```
(Source: https://fal.ai/docs/documentation/setting-up/mcp — auth is a bearer token in the header; no npm package or env-var-only config.)
If `mcp__fal-ai__*` tools are not exposed in the current session, stop and instruct the user to run the command above with their fal API key.
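As a minimal sketch of that check, assuming `claude mcp list` prints one line per registered server (the exact output format may vary by CLI version, and `fal_mcp_connected` is this document's illustrative name, not part of the skill):

```shell
# Sketch: detect whether the fal-ai server is registered with Claude Code.
# Assumes `claude mcp list` prints registered server names, one per line.
fal_mcp_connected() {
  claude mcp list 2>/dev/null | grep -q 'fal-ai'
}

if fal_mcp_connected; then
  echo "fal-ai MCP connected"
else
  echo "fal-ai MCP not found; run the 'claude mcp add' command above" >&2
fi
```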
**Discovery:**
- `mcp__fal-ai__search_models` — find models by keyword/category
- `mcp__fal-ai__get_model_schema` — full input/output schema for a model
- `mcp__fal-ai__get_pricing` — cost of running a model before you call it
- `mcp__fal-ai__search_docs` — search fal docs
- `mcp__fal-ai__recommend_model` — describe a goal, get model suggestions

**Execution:**
- `mcp__fal-ai__run_model` — run a model and wait for the result (images, video, audio)
- `mcp__fal-ai__submit_job` — submit a long-running job, return a request ID
- `mcp__fal-ai__check_job` — poll status / fetch result / cancel

**Utility:**
- `mcp__fal-ai__upload_file` — upload a local file (or URL) to fal's CDN; returns a `cdn_url` usable as `image_url`/`audio_url` input

Outputs from `run_model` / `check_job` are returned as CDN URLs, not local paths — this skill is responsible for downloading them.
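Since the MCP returns CDN URLs, the download step can be sketched as a small shell helper. This is a sketch only: `download_outputs` is this document's illustrative name, the URL list is assumed to have already been extracted from the tool response, and collision handling with pre-existing `vN` files is elided.

```shell
# Sketch: save already-extracted output URLs into a visuals directory.
# (Collision handling with existing vN files is elided here.)
download_outputs() {
  dest_dir="$1"; slug="$2"; shift 2
  n=1
  mkdir -p "$dest_dir"
  for url in "$@"; do
    ext="${url##*.}"    # crude: extension = text after the last dot
    curl -fsSL -o "$dest_dir/$slug-v$n.$ext" "$url"
    n=$((n + 1))
  done
}

# Usage (hypothetical URL):
#   download_outputs visuals hero "https://cdn.example.com/out.png"
```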
Inputs: the `prompts/<slug>.yaml` to execute; a model id (or pick one via `recommend_model` / `search_models`); any local reference files (upload them with `upload_file` first).

**Verify the MCP is connected.** Check that `mcp__fal-ai__*` tools are available. If not, direct the user to the install command above and stop.
**Load project + prompt.** Read `${CLAUDE_PLUGIN_DATA:-$HOME/.local/share/claude-plugins}/visual-communications/projects/<project>/project.yaml` and `prompts/<slug>.yaml`.
**Pick the model (if not set).** If the prompt YAML lacks a concrete fal model id (e.g. `fal-ai/flux/dev`, `fal-ai/flux-pro`, `fal-ai/minimax/video-01`), call `recommend_model` with a one-line description, or `search_models` with a keyword. Confirm the choice with the user if there's ambiguity.
**Sanity-check inputs and price.** Call `get_model_schema` to confirm parameter names (image models commonly take `prompt`, `image_size` / `aspect_ratio`, `num_images`, `negative_prompt`; video models often take `prompt`, `image_url`, `duration`). Optionally call `get_pricing` and surface the cost to the user before running.
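For illustration, a plausible text-to-image input under those common conventions (the field names and values below are assumptions for this sketch; always confirm against `get_model_schema` for the chosen model):

```json
{
  "prompt": "isometric illustration of a build pipeline, flat colors",
  "image_size": "landscape_16_9",
  "num_images": 1,
  "negative_prompt": "text, watermark"
}
```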
**Upload references if needed.** For image-to-image or video-from-image, call `upload_file` on the local reference and use the returned `cdn_url` as `image_url` in the model input.
**Run the model.** Use `mcp__fal-ai__run_model` (synchronous; preferred for short jobs), or `mcp__fal-ai__submit_job` followed by polling with `mcp__fal-ai__check_job` until status is `completed`. Use `submit_job` whenever the model's typical latency exceeds ~60s.

**Download the outputs.** Parse output URLs from the response (typical fields: `images[].url`, `video.url`, `audio.url`). For each, download to:
```
${CLAUDE_PLUGIN_DATA:-$HOME/.local/share/claude-plugins}/visual-communications/projects/<project>/visuals/<slug>-vN.<ext>
```

Use `curl -L -o <dest> <url>` (or `wget`). Auto-increment the `vN` suffix; never overwrite existing files without confirmation.
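The `vN` auto-increment can be sketched as follows (a sketch only; `next_version` is this document's illustrative name, not part of the skill):

```shell
# Sketch: return the first unused <slug>-vN.<ext> path in a directory,
# so existing versions are never overwritten.
next_version() {
  dir="$1"; slug="$2"; ext="$3"
  n=1
  while [ -e "$dir/$slug-v$n.$ext" ]; do
    n=$((n + 1))
  done
  printf '%s/%s-v%d.%s\n' "$dir" "$slug" "$n" "$ext"
}

# Example: if hero-v1.png and hero-v2.png exist,
#   next_version visuals hero png
# prints visuals/hero-v3.png
```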
**Update the prompt YAML.** Append every saved file to `generated_files`:

```yaml
generated_files:
  - ../visuals/hero-vN.png
model_used: fal-ai/flux/dev
generated_at: 2026-04-30T22:15:00+03:00
```
**Report.** Show the saved paths, the model id used, and the price (if `get_pricing` was called). Offer next steps: regenerate with tweaks, run a new prompt, or upscale an output (call `recommend_model` with "upscale" if asked).
- `fal-ai/flux/dev` is a good default for stills; `fal-ai/flux-pro/v1.1` for higher fidelity.
- Video models are usually slow; prefer `submit_job` / `check_job` over a synchronous `run_model`.
- Image sizes are often enum values (`square_hd`, `landscape_16_9`, etc.) — `get_model_schema` is the source of truth.
- Not every model accepts `negative_prompt`; check the schema before passing it.
- Don't guess model ids or parameters; call `get_model_schema` / `search_models` first.
- Never overwrite files in `visuals/` without explicit user confirmation.