Use when users request image generation, AI art creation, image editing with Gemini models, need help crafting prompts, or want brand-styled imagery. Handles both direct generation and interactive prompt design.
Install: `npx claudepluginhub pigfoot/claude-code-hubs --plugin nano-banana`
Unified image generation workflow using a fixed Python script with JSON configuration. Eliminates AI hallucinations by avoiding dynamic code generation.
Supports two modes:
```python
import os

model = os.environ.get("NANO_BANANA_MODEL") or "gemini-3-pro-image-preview"

# Select API based on model name
if "imagen" in model.lower():
    use_imagen_api = True   # → generate_images()
else:
    use_imagen_api = False  # → generate_content()
```
These are NOT interchangeable. Using the wrong API will cause errors.
| | Gemini Image | Imagen |
|---|---|---|
| API Method | `generate_content()` | `generate_images()` |
| Config Type | `GenerateContentConfig` | `GenerateImagesConfig` |
| Models | `gemini-*-image*` | `imagen-*` |
| Prompt Format | `contents=[prompt]` (array) | `prompt=prompt` (string) |
| Response | `response.parts` | `response.generated_images` |
| Special Config | `response_modalities=['IMAGE']` | Not used |
```python
# These trigger the Imagen API:
"imagen-4.0-generate-001"     # → generate_images()
"custom-imagen-v2"            # → generate_images()

# These trigger the Gemini API:
"gemini-3-pro-image-preview"  # → generate_content()
"gemini-2.5-flash-image"      # → generate_content()
"custom-gemini-image"         # → generate_content()
```
If you find yourself writing these, STOP - you are using the wrong API:
| ❌ Wrong (does NOT exist) | ✅ Correct |
|---|---|
| `types.ImageGenerationConfig` | Use `GenerateContentConfig` (Gemini) or `GenerateImagesConfig` (Imagen) |
| `generate_images()` + Gemini model | Use `generate_content()` for Gemini models |
| `generate_content()` + Imagen model | Use `generate_images()` for Imagen models |
| `response_modalities` in Imagen | Only use with Gemini's `GenerateContentConfig` |
Rule: If NANO_BANANA_MODEL is set, use it EXACTLY as-is.
❌ WRONG - Do NOT do this:

```python
model = os.environ.get("NANO_BANANA_MODEL", "gemini-3-pro-image")
if not model.endswith("-preview"):
    model = f"{model}-preview"  # ❌ NEVER modify the user's model name
```

✅ CORRECT:

```python
model = os.environ.get("NANO_BANANA_MODEL")
if not model:
    # Only choose a default when NANO_BANANA_MODEL is NOT set
    model = "gemini-3-pro-image-preview"
# Use the model EXACTLY as-is - do NOT add suffixes or change names
```
(the user may intentionally want `gemini-3-pro-image` without `-preview`)

Seed and temperature parameter examples:

User: "Generate a robot image, seed 42, temperature 0.8"
→ Config: {"slides": [{"number": 1, "prompt": "robot", "seed": 42, "temperature": 0.8}]}
User: "Regenerate that image with seed 392664860"
→ Config: {"slides": [{"number": 1, "prompt": "<previous_prompt>", "seed": 392664860}]}
User: "Create 3 slides with same seed 100"
→ Config: {"seed": 100, "slides": [{...}, {...}, {...}]}
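The precedence between a global `seed`/`temperature` and per-slide overrides can be sketched with a small helper. This is illustrative only; `resolve_params` is a hypothetical name, not part of the skill's actual `generate_images.py`:

```python
# Hypothetical helper: per-slide values win, otherwise fall back to the
# global default from the top level of the config.
def resolve_params(config: dict, slide: dict) -> dict:
    return {
        "seed": slide.get("seed", config.get("seed")),
        "temperature": slide.get("temperature", config.get("temperature", 1.0)),
    }

config = {"seed": 100, "slides": [
    {"number": 1, "prompt": "robot"},
    {"number": 2, "prompt": "robot", "seed": 42, "temperature": 0.5},
]}

print(resolve_params(config, config["slides"][0]))  # {'seed': 100, 'temperature': 1.0}
print(resolve_params(config, config["slides"][1]))  # {'seed': 42, 'temperature': 0.5}
```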
Detect: style: "trendlife", style: "notebooklm", or natural language ("use trendlife style", "notebooklm style")
Priority: Inline spec → Ask in Interactive Mode → No style (Direct Mode default)
When style: "trendlife" is detected:
Manual logo overlay with `logo_overlay.overlay_logo()`: use when you need custom logo positioning.
IMPORTANT: This script must be run with uv run to ensure dependencies are available.
```python
#!/usr/bin/env python3
# /// script
# dependencies = ["pillow"]
# ///
# Run with: uv run --managed-python your_script.py

# After image generation, before final output.
# (`prompt` and `output_path` come from the preceding generation step.)
from pathlib import Path
import sys

# Import the logo overlay module (lives next to this script)
sys.path.insert(0, str(Path(__file__).parent))
from logo_overlay import overlay_logo, detect_layout_type

# Detect layout type (or specify manually)
layout_type = detect_layout_type(prompt, slide_number=1)
# Or override: layout_type = 'title'  # 'title', 'content', 'divider', 'end'

# Logo path
logo_path = Path(__file__).parent / 'assets/logos/trendlife-2026-logo-light.png'

# Apply logo overlay
output_with_logo = output_path.with_stem(output_path.stem + '_with_logo')
overlay_logo(
    background_path=output_path,
    logo_path=logo_path,
    output_path=output_with_logo,
    layout_type=layout_type,
    opacity=1.0,  # Optional: 0.0-1.0
)

# Replace the original with the logo version
output_with_logo.replace(output_path)
```
Explicit: style: "trendlife"
| Model Name Contains | API to Use | Config Type |
|---|---|---|
| `imagen` | `generate_images()` | `GenerateImagesConfig` |
| Anything else | `generate_content()` | `GenerateContentConfig` |
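The routing rule above can be sketched as a small helper. This is illustrative only; the fixed script embeds equivalent logic, and `select_api` is a hypothetical name:

```python
# Map a model name to the (API method, config type) it requires.
# Mirrors the rule: "imagen" anywhere in the name → Imagen API; else Gemini.
def select_api(model: str) -> tuple:
    if "imagen" in model.lower():
        return ("generate_images", "GenerateImagesConfig")
    return ("generate_content", "GenerateContentConfig")

print(select_api("imagen-4.0-generate-001"))    # ('generate_images', 'GenerateImagesConfig')
print(select_api("gemini-3-pro-image-preview")) # ('generate_content', 'GenerateContentConfig')
```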
All image generation uses the same fixed Python script with JSON config:
1. Write the config with the Write tool to `{temp_dir}/nano-banana-config-{timestamp}.json`
2. Execute: `uv run --managed-python scripts/generate_images.py --config <temp_config_path>`
Always run from the skill root directory (no `cd` command):
- `cd scripts && uv run --managed-python generate_images.py ...` ❌
- `uv run --managed-python scripts/generate_images.py ...` ✅

Use `scripts/generate_images.py` as the script path (relative to the skill directory).

Minimal config:

```jsonc
{
  "slides": [{"number": 1, "prompt": "...", "style": "trendlife"}],
  "output_dir": "./001-feature-name/"  // MUST use NNN-short-name format
}
```
```jsonc
{
  "slides": [
    {
      "number": 1,
      "prompt": "...",
      "style": "trendlife",
      "layout": "featured",  // Optional: "featured" or "content" (auto-detect if omitted)
      "temperature": 0.8,    // Optional: 0.0-2.0 per-slide override
      "seed": 42             // Optional: integer for reproducible generation
    }
  ],
  "output_dir": "./001-feature-name/",  // MUST use NNN-short-name format
  "format": "webp",     // Optional: webp (default, RECOMMENDED), png, jpg
  "quality": 90,        // Optional: 1-100 (default: 90)
  "temperature": 1.0,   // Optional: 0.0-2.0 global default (default: 1.0)
  "seed": 12345         // Optional: integer global default (default: auto-generate)
}
```
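A config of this shape can be sanity-checked before writing it to disk. The sketch below is an assumption about reasonable checks (output directory naming, temperature range, layout values), not the actual validation inside `generate_images.py`:

```python
import re

# Matches the required ./NNN-short-name/ output directory format
OUTPUT_DIR_RE = re.compile(r"^\./\d{3}-[a-z0-9-]+/$")

def validate_config(config: dict) -> list:
    """Return a list of problems found in a nano-banana JSON config.
    Illustrative sketch only; the real script's validation may differ."""
    errors = []
    if not OUTPUT_DIR_RE.match(config.get("output_dir", "")):
        errors.append("output_dir must match ./NNN-short-name/")
    for slide in config.get("slides", []):
        t = slide.get("temperature")
        if t is not None and not (0.0 <= t <= 2.0):
            errors.append(f"slide {slide.get('number')}: temperature out of range 0.0-2.0")
        if slide.get("layout") not in (None, "featured", "content"):
            errors.append(f"slide {slide.get('number')}: invalid layout")
    return errors

good = {"output_dir": "./001-feature-name/",
        "slides": [{"number": 1, "prompt": "x", "temperature": 0.8}]}
bad = {"output_dir": "./slides/",
       "slides": [{"number": 1, "prompt": "x", "temperature": 3.0}]}
print(validate_config(good))       # []
print(len(validate_config(bad)))   # 2
```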
Purpose: Control logo integration strategy for TrendLife brand slides
Valid values:
- `"featured"` - for title slides, dividers, and closing slides
- `"content"` - for content/information slides
- Omitted or `null` - auto-detection (backwards compatibility)
When to use each:
```jsonc
// Title/cover slides
{"number": 1, "prompt": "Product Launch 2026", "style": "trendlife", "layout": "featured"}

// Section dividers
{"number": 3, "prompt": "Part 2: Technical Details", "style": "trendlife", "layout": "featured"}

// Closing slides
{"number": 6, "prompt": "Thank You", "style": "trendlife", "layout": "featured"}

// Content slides (everything else)
{"number": 2, "prompt": "Key features and benefits", "style": "trendlife", "layout": "content"}
{"number": 4, "prompt": "Performance metrics", "style": "trendlife", "layout": "content"}
```
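Auto-detection (used when `layout` is omitted) could plausibly work like the keyword heuristic below. This is a hypothetical sketch, not the actual `detect_layout_type()` from `logo_overlay`:

```python
# Keywords that suggest a title/divider/closing slide (hypothetical list)
FEATURED_HINTS = ("thank you", "welcome", "part ", "agenda", "launch")

def guess_layout(prompt, slide_number, total_slides=None):
    """Guess 'featured' vs 'content'. Illustrative heuristic only."""
    text = prompt.lower()
    if slide_number == 1:
        return "featured"  # title/cover slide
    if total_slides is not None and slide_number == total_slides:
        return "featured"  # closing slide
    if any(h in text for h in FEATURED_HINTS):
        return "featured"  # divider/closing keywords
    return "content"

print(guess_layout("Product Launch 2026", 1))        # featured
print(guess_layout("Key features and benefits", 2))  # content
print(guess_layout("Part 2: Technical Details", 3))  # featured
```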
IMPORTANT: When generating JSON configs for TrendLife presentations:
- Use `"layout": "featured"` for title, divider, and closing slides
- Use `"layout": "content"` for everything else
- Set `layout` explicitly for TrendLife slides (don't rely on auto-detection)

Purpose of the seed field: Enable reproducible image generation
```jsonc
// Global seed (all slides use the same seed)
{"seed": 42, "slides": [...]}

// Per-slide seed (each slide has a different seed)
{"slides": [
  {"number": 1, "prompt": "...", "seed": 42},
  {"number": 2, "prompt": "...", "seed": 123}
]}

// No seed specified (auto-generate and record)
{"slides": [...]}  // Results JSON will contain: "seed": 1738051234
```
The recorded values are written to `{output_dir}/generation-results.json`:

```json
{
  "outputs": [
    {"slide": 1, "path": "slide-01.png", "seed": 392664860, "temperature": 1.0}
  ]
}
```
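A follow-up request like "regenerate that image" can read the recorded seed back out of this file and feed it into a new config. A minimal round-trip sketch (the file contents here are written locally just to make the example self-contained):

```python
import json
import tempfile
from pathlib import Path

# Simulate a generation-results.json as the script would have written it
results_path = Path(tempfile.mkdtemp()) / "generation-results.json"
results_path.write_text(json.dumps({
    "outputs": [{"slide": 1, "path": "slide-01.png", "seed": 392664860, "temperature": 1.0}]
}))

# Read the recorded seed and build a config that reproduces slide 1
results = json.loads(results_path.read_text())
seed = results["outputs"][0]["seed"]

regen_config = {
    "slides": [{"number": 1, "prompt": "Modern office interior", "seed": seed}],
    "output_dir": "./002-office-design-v2/",
}
print(regen_config["slides"][0]["seed"])  # 392664860
```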
Purpose: Control randomness in generation (0.0-2.0)
Official guidance: Gemini 3 recommends keeping the default value of 1.0
output_dir MUST be relative path with NNN-short-name format: ./001-feature-name/
Examples: `./001-ai-safety/`, `./002-threat-detection/`, `./003-user-onboarding/` (a flat `./slides/` is WRONG)
Do NOT include a `model` field in the config (use the `NANO_BANANA_MODEL` env var instead)

```
# Linux/macOS: Write to /tmp/
Write tool: /tmp/nano-banana-config-1234567890.json

# Windows: Write to %TEMP%
Write tool: C:/Users/<user>/AppData/Local/Temp/nano-banana-config-1234567890.json
```
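If a script needs to compute this path itself, `tempfile.gettempdir()` resolves to `/tmp` on Linux/macOS and `%TEMP%` on Windows, so the same code covers both cases. A minimal sketch, assuming the `nano-banana-config-{timestamp}.json` naming convention from above:

```python
import json
import os
import tempfile
import time

# Example config to persist (shape follows the skill's JSON schema)
config = {"slides": [{"number": 1, "prompt": "robot"}], "output_dir": "./001-robot/"}

# Cross-platform temp path: /tmp on Linux/macOS, %TEMP% on Windows
config_path = os.path.join(
    tempfile.gettempdir(), f"nano-banana-config-{int(time.time())}.json"
)
with open(config_path, "w") as f:
    json.dump(config, f)

print(os.path.exists(config_path))  # True
```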
Complete workflow details: See references/batch-generation.md
| Step | Action |
|---|---|
| 1. Gather | Check for reference images, style specs |
| 2. Clarify | Ask 2-4 questions about output type, subject, style |
| 3. Select Technique | Choose from 16+ patterns (see references/guide.md) |
| 4. Generate Prompt | Apply technique, brand style, aspect ratio |
| 5. Present | Show prompt with explanation and variations |
| 6. Execute | Generate image with crafted prompt |
When to use: User requests prompt help ("help me craft", "improve my prompt") or prompt is too vague (<5 words).
(detect brand style, e.g. `style: "trend"`)
AskUserQuestion: Output type? Subject? Style preference?
Techniques: `references/guide.md`

NotebookLM style (`style: "notebooklm"`): see `references/slide-deck-styles.md` for complete specs
Trend Micro style (`style: "trend"`): see `references/brand-styles.md` for complete specs

Complete techniques and examples: see `references/guide.md`
| Error | Quick Fix |
|---|---|
| `GEMINI_API_KEY` not set | `export GEMINI_API_KEY="your-key"` |
| Model not found | Check the exact model name; use the `-preview` suffix if needed |
| Wrong API used | Check the CRITICAL section: Gemini vs Imagen |
| `ModuleNotFoundError` | Verify `# dependencies = ["google-genai", "pillow"]` |
| No image generated | Check `response.parts` (Gemini) or `response.generated_images` (Imagen) |
| Invalid aspect ratio | Use exact strings: `"16:9"`, `"1:1"`, `"9:16"` (with quotes) |
Debug checklist:
- Verify the key is set: `echo $GEMINI_API_KEY`
- Test with a minimal prompt: `"A red circle"`
- Inspect the response: `print(response.parts)` or `print(response.generated_images)`

| Mistake | Fix |
|---|---|
| Using `types.ImageGenerationConfig` | Does NOT exist - use `GenerateContentConfig` or `GenerateImagesConfig` |
| Using `generate_images()` with Gemini | Use `generate_content()` for Gemini models |
| Using `generate_content()` with Imagen | Use `generate_images()` for Imagen models |
| Overriding `NANO_BANANA_MODEL` when set | Use the model EXACTLY as-is - don't add suffixes |
| Using `google-generativeai` (old library) | Use `google-genai` (new library) |
| Using text models for image gen | Use image models only (`gemini-*-image*` or `imagen-*`) |
| Saving to flat files | Use `NNN-short-name/` directories |
| Using PIL to draw/edit | Use the Gemini/Imagen API with the image in `contents` |
User: "Generate a modern office interior"

Assistant actions:
1. Create a config WITHOUT a seed (auto-generate):

```json
{
  "slides": [{"number": 1, "prompt": "Modern office interior with natural lighting"}],
  "output_dir": "./001-office-design/"
}
```

2. Execute: `uv run --managed-python scripts/generate_images.py --config {temp_config}`
3. Read the results: `./001-office-design/generation-results.json`
   → `{"outputs": [{"slide": 1, "path": "slide-01.png", "seed": 392664860}]}`
4. Report to the user: "Generated at ./001-office-design/slide-01.png (seed: 392664860)"
User: "I love that office image! Regenerate it with seed 392664860"

Assistant actions:
1. Create a config WITH the seed:

```json
{
  "slides": [{"number": 1, "prompt": "Modern office interior with natural lighting", "seed": 392664860}],
  "output_dir": "./002-office-design-v2/"
}
```

2. Execute the script
3. Result: a visually identical image
User: "Generate 3 variations of a robot holding flowers, use different temperatures"

Assistant actions:
1. Create a config with per-slide temperatures:

```jsonc
{
  "seed": 42,  // Same seed for comparison
  "slides": [
    {"number": 1, "prompt": "A cute robot holding flowers", "temperature": 0.5},
    {"number": 2, "prompt": "A cute robot holding flowers", "temperature": 1.0},
    {"number": 3, "prompt": "A cute robot holding flowers", "temperature": 1.5}
  ],
  "output_dir": "./003-robot-variations/"
}
```

2. Execute the script
3. Result: 3 different compositions (temperature effect)
User: "Create 5 presentation slides with TrendLife style, use same seed"

Assistant actions:
1. Create a config with a global seed and an appropriate layout for each slide:

```jsonc
{
  "seed": 12345,
  "slides": [
    {"number": 1, "prompt": "AI Safety: Building Secure Systems", "style": "trendlife", "layout": "featured"},  // Title
    {"number": 2, "prompt": "Key features overview", "style": "trendlife", "layout": "content"},    // Content
    {"number": 3, "prompt": "Technical architecture", "style": "trendlife", "layout": "content"},   // Content
    {"number": 4, "prompt": "Use cases and benefits", "style": "trendlife", "layout": "content"},   // Content
    {"number": 5, "prompt": "Thank You - Contact Us", "style": "trendlife", "layout": "featured"}   // Closing
  ],
  "output_dir": "./004-ai-safety-deck/"
}
```

2. Execute in the background (5+ slides)
3. Monitor the progress file
4. Results:
   - Slide 1 (featured): logo integrated naturally by the AI into the title design
   - Slides 2-4 (content): logo overlaid in the bottom-right corner
   - Slide 5 (featured): logo integrated naturally by the AI into the closing design
   - All slides use seed 12345 for visual consistency
If script execution fails, check these common issues:
Most common issue: User doesn't have Git LFS installed
Logo files are managed by Git LFS - without it, you get text pointer files instead of images
Solution:
```sh
# Install Git LFS (once per machine)
git lfs install

# Pull the actual files (in the plugin directory)
git lfs pull
```
Git LFS install: https://git-lfs.com/
After installing, re-run the image generation command
User needs to install uv: https://docs.astral.sh/uv/
Solution: Ask the user to run:

```sh
curl -LsSf https://astral.sh/uv/install.sh | sh
```
`--managed-python` downloads Python 3.14+ automatically (or run `uv python install 3.14`; interpreters are installed under `~/.local/share/uv/`)
Dependencies are resolved by `uv run` via PEP 723 inline metadata: `uv run --managed-python scripts/generate_images.py`

Further reading:
- `references/guide.md` (thinking, search grounding, 16+ prompting techniques)
- `references/brand-styles.md` (Trend Micro specs)
- `references/slide-deck-styles.md` (NotebookLM aesthetic, infographics, data viz)
- `../EXPERIMENT_RESULTS.md` (temperature & seed testing results)