Generate and edit images using the Gemini API (Nano Banana). Use this skill when creating images from text prompts, editing existing images, applying style transfers, generating logos with text, creating stickers, product mockups, or any image generation/manipulation task. Supports text-to-image, image editing, multi-turn refinement, and composition from multiple reference images.
```
/plugin marketplace add unclecode/claude-code-tools
/plugin install unclecode-cc-toolkit@unclecode-tools
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled scripts:

- `scripts/compose_images.py`
- `scripts/edit_image.py`
- `scripts/gemini_images.py`
- `scripts/generate_image.py`
- `scripts/multi_turn_chat.py`

The environment variable `GEMINI_API_KEY` must be set.
| Model | Alias | Resolution | Best For |
|---|---|---|---|
| `gemini-2.5-flash-image` | Nano Banana | 1024px | Speed, high-volume tasks |
| `gemini-3-pro-image-preview` | Nano Banana Pro | Up to 4K | Professional assets, complex instructions, text rendering |
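The table above can be captured in a small helper. This is a sketch; `pick_model` and its parameters are hypothetical names for illustration, not part of the SDK:

```python
def pick_model(needs_text_rendering=False, image_size="1K"):
    """Pick a Gemini image model per the table above.

    Nano Banana Pro handles text rendering and 4K output; Nano Banana
    is the faster choice for high-volume work.
    """
    if needs_text_rendering or image_size == "4K":
        return "gemini-3-pro-image-preview"
    return "gemini-2.5-flash-image"
```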
```bash
python scripts/generate_image.py "A cat wearing a wizard hat" output.png
python scripts/edit_image.py input.png "Add a rainbow in the background" output.png
python scripts/multi_turn_chat.py
```
All image generation uses the `generateContent` endpoint with `responseModalities: ["TEXT", "IMAGE"]`:

```python
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=["Your prompt here"],
)

for part in response.parts:
    if part.text:
        print(part.text)
    elif part.inline_data:
        image = part.as_image()
        image.save("output.png")
```
Control output with `image_config`:

```python
from google.genai import types

response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents=[prompt],
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE'],
        image_config=types.ImageConfig(
            aspect_ratio="16:9",  # 1:1, 2:3, 3:2, 3:4, 4:3, 4:5, 5:4, 9:16, 16:9, 21:9
            image_size="2K",      # 1K, 2K, 4K (4K is Pro only)
        ),
    ),
)
```
Pass existing images alongside text prompts:

```python
from PIL import Image

img = Image.open("input.png")

response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=["Add a sunset to this scene", img],
)
```
Use chat for iterative editing:

```python
from google.genai import types

chat = client.chats.create(
    model="gemini-2.5-flash-image",
    config=types.GenerateContentConfig(response_modalities=['TEXT', 'IMAGE']),
)

response = chat.send_message("Create a logo for 'Acme Corp'")
# Save first image...

response = chat.send_message("Make the text bolder and add a blue gradient")
# Save refined image...
```
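The "save image" steps above can be factored into one helper. A minimal sketch, assuming the `part.as_image()` accessor shown earlier; `save_image_parts` is a hypothetical name, not part of the SDK:

```python
def save_image_parts(response, prefix="image"):
    """Save every image part in a response; return the saved filenames."""
    saved = []
    for i, part in enumerate(response.parts):
        if getattr(part, "inline_data", None):
            filename = f"{prefix}_{i}.png"
            part.as_image().save(filename)
            saved.append(filename)
    return saved
```

For example, `save_image_parts(chat.send_message("..."), prefix="logo")` writes each returned image to `logo_0.png`, `logo_1.png`, and so on.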
Include camera details: lens type, lighting, angle, mood.
"A photorealistic close-up portrait, 85mm lens, soft golden hour light, shallow depth of field"
Specify style explicitly:
"A kawaii-style sticker of a happy red panda, bold outlines, cel-shading, white background"
Be explicit about font style and placement. Use `gemini-3-pro-image-preview` for best results:
"Create a logo with text 'Daily Grind' in clean sans-serif, black and white, coffee bean motif"
Describe lighting setup and surface:
"Studio-lit product photo on polished concrete, three-point softbox setup, 45-degree angle"
Specify layout structure, color scheme, and target audience:
"Modern landing page hero section, gradient background from deep purple to blue, centered headline with CTA button, clean minimalist design, SaaS product"
"Landing page for fitness app, energetic layout with workout photos, bright orange and black color scheme, mobile-first design, prominent download buttons"
Describe overall aesthetic, navigation style, and content hierarchy:
"E-commerce homepage wireframe, grid layout for products, sticky navigation bar, warm earth tones, plenty of whitespace, professional photography style"
"Portfolio website for photographer, full-screen image galleries, dark mode interface, elegant serif typography, minimal UI elements to highlight work"
"Tech startup homepage, glassmorphism design trend, floating cards, neon accent colors on dark background, modern illustrations, hero section with product demo"
Generate images based on real-time data:

```python
response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents=["Visualize today's weather in Tokyo as an infographic"],
    config=types.GenerateContentConfig(
        response_modalities=['TEXT', 'IMAGE'],
        tools=[{"google_search": {}}],
    ),
)
```
Combine elements from multiple sources:

```python
response = client.models.generate_content(
    model="gemini-3-pro-image-preview",
    contents=[
        "Create a group photo of these people in an office",
        Image.open("person1.png"),
        Image.open("person2.png"),
        Image.open("person3.png"),
    ],
)
```
```bash
curl -s -X POST \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [{"parts": [{"text": "A serene mountain landscape"}]}]
  }' | jq -r '.candidates[0].content.parts[] | select(.inlineData) | .inlineData.data' | base64 --decode > output.png
```
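When calling the REST endpoint from Python instead of `curl`, the same extraction the `jq` filter performs can be done on the parsed response body. A sketch; `extract_images` is a hypothetical helper, and the dictionary shape assumed here matches the `jq` path above:

```python
import base64

def extract_images(response_json):
    """Decode every inlineData part of a generateContent response body."""
    images = []
    for part in response_json["candidates"][0]["content"]["parts"]:
        data = part.get("inlineData")
        if data:
            images.append(base64.b64decode(data["data"]))
    return images
```

Each returned item is raw image bytes, ready to write to a `.png` file.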
Note: image-only output (`responseModalities: ["IMAGE"]`) won't work with Google Search grounding.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.