From asset-generation
Generates images with OpenAI gpt-image-1.5, including transparent assets, icons, concept art, and style-controlled illustrations; designs prompts and calls the Images API.
npx claudepluginhub kyh/vibedgames

This skill uses the workspace's default tool permissions.
Use this skill when the user wants actual image generation with OpenAI `gpt-image-1.5`, or when the task requires strong prompting and parameter selection for that model.
Image generation is not "write a pretty prompt and hope." The job is to convert a vague art request into a concrete production request with the right subject, composition, style, constraints, and output settings.
Before generating, ask:
Core principles:
OpenAI documents gpt-image-1.5 as the latest GPT Image model with text and image input, and image and text output. The Images API supports gpt-image-1.5 for generation, with png, webp, or jpeg output and documented size/quality/background controls. See references/openai-gpt-image-1-5.md.
gpt-image-1.5 parameters:

- size: `1024x1024`, `1024x1536`, `1536x1024`, or `auto`
- quality: `low`, `medium`, `high`, or `auto`
- output_format: `png`, `webp`, or `jpeg`
- background: use `transparent` only with `png` or `webp`

If `OPENAI_API_KEY` is available, use scripts/gpt_image_generate.py.

Prefer compact, production-oriented prompts:
Create a side-view fantasy inn sign for a 2D platformer. Carved wood, brass brackets, hand-painted fox emblem, warm lantern glow, readable silhouette, transparent background, no mockup, no text, centered composition.
For style-sensitive work, add one clear visual direction instead of five contradictory ones:
- Do: 1990s SNES-era platformer prop with restrained palette and crisp pixel clusters
- Don't: hyper realistic painterly low poly anime cinematic pixel art watercolor

For iteration, change one axis at a time:
The OpenAI cookbook guidance for GPT Image 1.5 strengthens the prompt strategy above:
image 1 = character silhouette, image 2 = color palette, image 3 = environment mood.

For low-resolution sprite animation edits, the model needs more than "same character" language. Tiny pixel characters drift easily in:
For sprite-strip work, use this stricter pattern:
- Image 1 = identity anchor
- Image 2 = pose/layout/motion anchor

For states that should begin from idle, there are two different tools:
Practical rule:
Example structure:
Create a portrait 16-bit pixel-art gameplay screenshot.
Subject: a pirate hero climbing a rope.
Environment: sea cave opening with dock platforms and shallow surf below.
Composition: side-view, centered hero, upward route clearly readable, HUD at top only.
Style: authentic 16-bit pixel art, 256x384 internal resolution, 4x nearest-neighbor upscale.
Lighting/color: bright coastal blues with warm stone and wood tones.
Constraints: visible pixels, limited palette, stepped shading, no glossy rendering, no collage, no poster framing.
OPENAI_API_KEY=... \
python3 .claude/skills/gpt-image-1-5/scripts/gpt_image_generate.py \
--prompt "Isometric potion shop icon, transparent background, polished game asset" \
--out-dir tmp/potion_shop --quality high --size 1024x1024 --output-format png
Useful flags:
- `--background transparent`
- `--n 1`
- `--filename-prefix hero`
- `--user some-trace-id`

The script calls POST /v1/images/generations, decodes b64_json, and writes image files to disk.
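The decode-and-write step can be sketched as follows. The helper name and file naming are illustrative, but the `data[0].b64_json` response shape matches the documented Images API:

```python
import base64
from pathlib import Path

def write_images(response: dict, out_dir: str, prefix: str = "image") -> list:
    """Decode b64_json entries from an Images API response and write them to disk."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, item in enumerate(response.get("data", [])):
        path = out / f"{prefix}_{i}.png"
        path.write_bytes(base64.b64decode(item["b64_json"]))
        paths.append(str(path))
    return paths
```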
OPENAI_API_KEY=... \
python3 .claude/skills/gpt-image-1-5/scripts/gpt_image_edit.py \
--image input.png \
--prompt "Keep the same sprite, raise the arm slightly" \
--out-dir tmp/edit
Multi-image edits with fidelity control:
OPENAI_API_KEY=... \
python3 .claude/skills/gpt-image-1-5/scripts/gpt_image_edit.py \
--image identity.png \
--image motion-guide.png \
--input-fidelity high \
--prompt "Use image 1 for identity and image 2 for pose" \
--out-dir tmp/edit
The edit script calls POST /v1/images/edits with multipart form data.
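A sketch of how such a multipart request can be assembled. The helper is hypothetical; the repeated `image[]` field name and the `input_fidelity` parameter come from the OpenAI Images API edits endpoint and should be verified against the current reference:

```python
def build_edit_form(image_paths, prompt, input_fidelity=None,
                    model="gpt-image-1.5"):
    """Assemble multipart pieces for POST /v1/images/edits.

    Reference images are sent as repeated image[] parts, in order, so
    "image 1" / "image 2" in the prompt map to upload order."""
    files = [("image[]", (path, open(path, "rb"))) for path in image_paths]
    data = {"model": model, "prompt": prompt}
    if input_fidelity is not None:
        data["input_fidelity"] = input_fidelity  # e.g. "high" to preserve detail
    return files, data
```

With `requests`, the pieces would then be posted as `requests.post("https://api.openai.com/v1/images/edits", headers={"Authorization": f"Bearer {key}"}, files=files, data=data)`.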
❌ Anti-pattern: claiming success before generation
Why bad: the user asked for images, not a hypothetical prompt.
Better: run the script if credentials are available, or clearly say that API access is missing.
❌ Anti-pattern: contradictory prompt stacks
Why bad: the model gets weaker guidance, not stronger guidance.
Better: choose one subject, one composition, and one primary style direction.
❌ Anti-pattern: transparent background with jpeg
Why bad: OpenAI documents transparency for png and webp, not jpeg.
Better: use png or webp when transparency matters.
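This constraint is cheap to enforce before spending an API call. A minimal pre-flight check, assuming the documented parameter values above (the helper and its error messages are hypothetical, not part of the skill's scripts):

```python
# Hypothetical pre-flight validation mirroring the documented
# gpt-image-1.5 parameter constraints.
VALID_SIZES = {"1024x1024", "1024x1536", "1536x1024", "auto"}
VALID_QUALITIES = {"low", "medium", "high", "auto"}
VALID_FORMATS = {"png", "webp", "jpeg"}

def validate_settings(size="auto", quality="auto", output_format="png",
                      background=None):
    """Raise ValueError for combinations the Images API documents as invalid."""
    if size not in VALID_SIZES:
        raise ValueError(f"unsupported size: {size}")
    if quality not in VALID_QUALITIES:
        raise ValueError(f"unsupported quality: {quality}")
    if output_format not in VALID_FORMATS:
        raise ValueError(f"unsupported output_format: {output_format}")
    # transparency is documented for png and webp only
    if background == "transparent" and output_format == "jpeg":
        raise ValueError("background=transparent requires png or webp")
```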
❌ Anti-pattern: pretending sprite sheets are guaranteed
Why bad: image models are better at generating single assets or illustrations than deterministic sheet layouts.
Better: ask for one asset, one pose, or one state per call unless the user explicitly wants experimentation.
❌ Anti-pattern: defaulting every request to highest quality
Why bad: it slows iteration and can waste cost on early ideation.
Better: use low or medium while exploring, then raise quality for final outputs.
❌ Anti-pattern: treating "same character" as enough for tiny sprites
Why bad: the model may preserve the idea of the character while still changing scale, orientation, or silhouette.
Better: restate exact invariants such as side view, head size, outline thickness, palette family, and apparent scale.
❌ Anti-pattern: replacing frame 1 after the fact when the real problem is sequence mismatch
Why bad: a locked first frame can make the animation more jarring if frame 2 was generated as a different-looking character.
Better: use hard-lock only when the generated strip already matches well, or move to a masked/protected frame-1 edit.
❌ Anti-pattern: using repeated copies of the seed sprite as a stronger identity anchor by default
Why bad: for tiny sprite work, repeating the same seed across every slot can still drift into a bad reinterpretation instead of preserving the intended character.
Better: test whether a single seeded slot or a surgical retouch of the current best strip produces better continuity.
IMPORTANT: Do not converge on one house style for every request.
Bundled files:

- references/openai-gpt-image-1-5.md
- scripts/gpt_image_generate.py
- scripts/gpt_image_edit.py

This skill should make image generation operational, not theoretical.
Turn the request into a precise prompt, choose the settings intentionally, run the API when possible, and report the real output path back to the user.