Generates images from text and edits existing images via fal.ai, compares models like Nano Banana and GPT Image, runs queue-based workflows, and tracks costs for experiments.
Install with `npx claudepluginhub kyh/vibedgames`. This skill uses the workspace's default tool permissions.
Use this skill when the user wants to generate or edit images through `fal.ai`, compare multiple marketplace image models, or build repeatable experiment workflows with prompts, references, outputs, and costs tracked in a consistent way.
Bundled files:

- assets/model-presets.json
- assets/prompt-profiles/pirate-isometric-anchor-se-v1.txt
- assets/prompt-profiles/pirate-isometric-cardinals-v1.txt
- assets/prompt-profiles/pirate-isometric-diagonals-v1.txt
- references/fal-image-models.md
- references/fal-platform-notes.md
- references/fal-queue-and-inference.md
- scripts/_fal_common.py
- scripts/fal_image_experiment_matrix.py
- scripts/fal_platform_models.py
- scripts/fal_queue_image_run.py
fal gives one platform surface for many image models, but the useful controls still differ by model family. The right abstraction is:
Before generating, ask:
Core principles:
Covered endpoints:

- grok-imagine-image-t2i
- grok-imagine-image-edit
- nano-banana-2-t2i
- nano-banana-2-edit
- nano-banana-pro-t2i
- nano-banana-pro-edit
- gpt-image-1.5-t2i
- gpt-image-1.5-edit

For tracked image jobs, this skill uses fal's queue API:
- `POST https://queue.fal.run/{endpoint_id}`
- `GET https://queue.fal.run/{endpoint_id}/requests/{request_id}/status`
- `GET https://queue.fal.run/{endpoint_id}/requests/{request_id}`

Authentication uses:
- `Authorization: Key $FAL_KEY`
- `FAL_API_KEY` is also accepted by the bundled scripts

Important platform headers for repeatable comparison runs:
- `X-Fal-Store-IO: 1`
- `x-app-fal-disable-fallback: true`

The runner also captures response headers such as:
- `x-fal-request-id`
- `x-fal-billable-units`

The official fal-client SDK is valid and supported, but this repo's main requirement is portability inside a Codex skill. The scripts therefore keep a deterministic raw-queue path and also use fal-client automatically when it is available.
In this repo's retained live image runs, `uv run --with fal-client python3 ...` was the dependable path for some endpoints.
Prompt like art direction, not like marketing copy:
For edit comparisons:
Background handling matters more than the prompt wording suggests:
- gpt-image-1.5 is the safest option here when you genuinely need transparent output.
- nano-banana-2 and nano-banana-pro should be treated as chroma-key models for this workflow, not transparent-background models.
- Ask for a flat chroma background such as `#00FF00`, with no gradients, no cast shadows on the background, no texture, and no green spill on the subject.
- `#FF00FF` sits too close to the warm red/purple bandana family and is more likely to contaminate edge colors.

For text-to-image comparisons:
Do not overload first comparison runs with long prompt stacks. The first job is to test prompt adherence, identity preservation, and edit usefulness.
- scripts/fal_queue_image_run.py
- scripts/fal_platform_models.py
- scripts/fal_image_experiment_matrix.py
Machine-readable tracking:
- experiments/fal-image/ledger.jsonl
- experiments/fal-image/ledger.csv
- experiments/fal-image/&lt;timestamp&gt;-&lt;slug&gt;/batch.json

Human-readable tracking:
- prompts/&lt;timestamp&gt;-...-prompts.md
- learnings/&lt;timestamp&gt;-...-learnings.md

Generated images should still live under the appropriate public/assets/.../concepts/... path for the asset family being tested.
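Appending to the JSONL ledger can be sketched as below. The field names and values are hypothetical examples, not the schema the bundled scripts actually emit; the point is that each row carries enough (request id, billable units, output path) to audit the run later.

```python
import json
from pathlib import Path


def append_ledger_row(ledger_path, row):
    """Append one JSON object per line to a ledger.jsonl file."""
    path = Path(ledger_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(row, sort_keys=True) + "\n")


# Example row — field names are illustrative, not the scripts' real schema.
example_row = {
    "endpoint_id": "nano-banana-2-t2i",
    "request_id": "req_123",              # from the queue submit response
    "x_fal_billable_units": "1",          # from the result response headers
    "output": "public/assets/.../concepts/example.png",
    "prompt_profile": "pirate-isometric-cardinals-v1",
}
```

JSONL keeps appends cheap and atomic-enough for sequential runs; the CSV ledger can be regenerated from it.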
❌ Anti-pattern: flattening all image models into one fake prompt schema
Why bad: you hide the controls that actually affect quality and cost.
Better: use shared runner behavior plus explicit per-model presets and overrides.
❌ Anti-pattern: treating edit and generate as the same task
Why bad: edit runs depend on reference discipline and preservation constraints that text-to-image runs do not.
Better: keep separate presets and separate experiment configs for generation and editing.
❌ Anti-pattern: recording only prompts and final PNGs
Why bad: you cannot audit request IDs, retries, or cost later.
Better: always save raw JSON, normalized manifests, and ledger rows.
❌ Anti-pattern: comparing models with hidden fallback routing
Why bad: you may think you tested one endpoint but actually hit another route.
Better: set x-app-fal-disable-fallback: true on strict comparison runs.
❌ Anti-pattern: stuffing many reference images into every edit
Why bad: it weakens edit control and makes failure analysis harder.
Better: pass only the minimum reference images the edit actually needs.
❌ Anti-pattern: asking Banana-family models for transparency and trusting the result
Why bad: you may get a faux-transparent dark backdrop instead of a clean extraction surface.
Better: use an explicit chroma-key background and key it out later.
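Keying out the `#00FF00` background can be sketched per pixel as below. The distance metric and tolerance are assumptions to tune per asset; in practice you would apply this across a whole image with Pillow or a similar library rather than pixel by pixel in Python.

```python
def key_out(pixel, key=(0, 255, 0), tolerance=60):
    """Return an RGBA pixel: fully transparent when close to the key color.

    Uses a simple Manhattan distance in RGB space; a flat #00FF00 backdrop
    with no gradients or green spill keeps this crude test reliable.
    """
    dist = sum(abs(c - k) for c, k in zip(pixel, key))
    alpha = 0 if dist <= tolerance else 255
    return (*pixel, alpha)
```

A hard alpha cutoff like this is why the prompt guidance above insists on no gradients and no cast shadows on the background: soft edges would need a feathered matte instead.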
Reference material bundled with this skill:

- references/fal-platform-notes.md
- references/fal-queue-and-inference.md
- references/fal-image-models.md
- assets/model-presets.json

A good fal image workflow is not just "can it render." It is a repeatable run with prompts, references, outputs, and costs tracked in a consistent way.