# monet
Generate architecture-focused images using Google Gemini with structured prompts and style references. Use when users mention image generation, architectural visualization, Gemini images, /monet, or want to create visual content for architecture projects.
npx claudepluginhub bauhaus-infau/infau-skill-base --plugin monet

This skill uses the workspace's default tool permissions.
Generate architectural visualizations using Google Gemini. Each project has a `monet/` directory with context, style references, structured prompts (JSON or narrative), and auto-versioned results.
/monet

Before any generation, verify:
- Python dependencies installed: google-genai, Pillow, python-dotenv
- A .env file exists in the working directory (or a parent) with GOOGLE_API_KEY or GEMINI_API_KEY

If dependencies are missing, run:
pip install google-genai Pillow python-dotenv
If no API key, tell the user:
You need a Gemini API key. Get one at https://aistudio.google.com/apikey

Then create a .env file with:

GOOGLE_API_KEY=your-key-here
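A minimal sketch of the key lookup described above (either variable name is accepted, GOOGLE_API_KEY first; this mirrors the documented check, not necessarily the script's exact logic — in practice python-dotenv's load_dotenv() would run first so the .env file populates the environment):

```python
import os
from typing import Optional

def resolve_api_key() -> Optional[str]:
    # Prefer GOOGLE_API_KEY, fall back to GEMINI_API_KEY.
    for name in ("GOOGLE_API_KEY", "GEMINI_API_KEY"):
        key = os.environ.get(name)
        if key:
            return key
    return None
```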
Check for an existing monet/ directory in the current working directory.
If monet/ exists:
- monet/context/ — the project's knowledge base. It contains source documents the user has added (architecture specs, research notes, briefs, technical descriptions, visual identity guides) that inform prompt writing.
- monet/references/types/ — understand what kind of images this project produces
- monet/references/styles/ — understand the target aesthetic
- monet/starters/ — check for base images

If monet/ does not exist:
Create the monet/ directory with standard structure:
mkdir -p monet/{context,references/types,references/styles,starters,prompts,results}
Then help the user set up:
Context (knowledge base):
- monet/context/ — architecture specs, research notes, briefs, technical descriptions, anything Claude should read to write informed prompts

Remind the user to add material to:

- monet/context/ — for Claude to read when writing prompts (specs, briefs, visual identity notes, research)
- monet/references/styles/
- monet/references/types/ (optional)
- monet/starters/ (optional)

When drafting a new image, Claude should:

- Read monet/context/ — absorb the source material the user has provided (specs, notes, briefs). Use this knowledge to write accurate, informed prompts.
- Review existing monet/prompts/ (if any) for tone and detail calibration
- If there are images in monet/starters/, ask if they want to transform one

Drafting the prompt:
Load references/json-schemas.md for template structures.
JSON prompts (preferred for precise iteration): scene_type, composition, camera, lighting, constraints

Narrative prompts (when user prefers prose):
Save as monet/prompts/<id>.md with this structure:
# Image Prompt — [Title]
## Concept
[What the image represents and why — 2-3 sentences]
## Prompt
[JSON object or narrative text]
Show draft to user for approval before saving.
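For illustration, a JSON prompt of the preferred shape could be drafted and serialized like this (all field values here are hypothetical; see references/json-schemas.md for the real templates):

```python
import json

# Hypothetical example; the field names follow the JSON prompt structure above.
prompt = {
    "scene_type": "exterior_visualization",
    "composition": "three-quarter view of the south facade from street level",
    "camera": {"focal_length_mm": 35, "height": "eye level"},
    "lighting": "late-afternoon sun with long, soft shadows",
    "constraints": ["no people", "no text overlays"],
}

# Serialize for saving under monet/prompts/.
print(json.dumps(prompt, indent=2))
```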
Generate:
python <skill-path>/scripts/generate.py <cwd>/monet <prompt_id>
Where <skill-path> is the skill's installation directory (resolve from the skill's location).
After generation:
If user wants changes:
Results auto-version: {prompt_id}_v1.png, v2.png, etc. Previous versions are never overwritten.
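One way that auto-versioning can be computed (a sketch of the naming rule above, not necessarily generate.py's implementation):

```python
import re
from pathlib import Path

def next_version_path(results_dir: Path, prompt_id: str) -> Path:
    # Scan existing results named "<prompt_id>_vN.png" and pick N+1,
    # so previous versions are never overwritten.
    pattern = re.compile(rf"^{re.escape(prompt_id)}_v(\d+)\.png$")
    versions = []
    for p in results_dir.glob(f"{prompt_id}_v*.png"):
        m = pattern.match(p.name)
        if m:
            versions.append(int(m.group(1)))
    return results_dir / f"{prompt_id}_v{max(versions, default=0) + 1}.png"
```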
| Script | Purpose | Requirements |
|---|---|---|
| scripts/generate.py | Image generation via Gemini API | Python 3.7+ |
Usage:
python scripts/generate.py <project-dir> [prompt_ids...] [options]
Options:

- --list — List prompts and version counts
- --model MODEL — Override model (default: gemini-3.1-flash-image-preview)
- --aspect-ratio RATIO — Override aspect ratio (default: 16:9)
- --image-size SIZE — Override image size (default: 2K)
- --style FILE — Use a specific style reference from references/styles/
- --starter FILE — Use a specific starter image from starters/

Never generate all prompts at once. Generate one, show to user, get feedback, iterate. Each image deserves attention.
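The documented flags map naturally onto an argparse interface; the sketch below mirrors the option list above (the real generate.py may be structured differently):

```python
import argparse

# Defaults come from the documented option list.
parser = argparse.ArgumentParser(prog="generate.py")
parser.add_argument("project_dir", help="Path to the project's monet/ directory")
parser.add_argument("prompt_ids", nargs="*", help="Prompt ids to generate")
parser.add_argument("--list", action="store_true", help="List prompts and version counts")
parser.add_argument("--model", default="gemini-3.1-flash-image-preview")
parser.add_argument("--aspect-ratio", default="16:9")
parser.add_argument("--image-size", default="2K")
parser.add_argument("--style", help="Style reference file in references/styles/")
parser.add_argument("--starter", help="Starter image file in starters/")

# Example invocation: one prompt, overridden aspect ratio.
args = parser.parse_args(["monet", "facade_study", "--aspect-ratio", "4:3"])
```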
All changes accumulate in the JSON prompt. Every generation is fresh from clean inputs. Never chain through generated images.
When drafting prompts, show the draft and ask for approval. When the user says "change the lighting," confirm which fields before editing.
Read the user's source documents in context/ before writing prompts. The better you understand the subject, the better the prompt.
- references/workflow.md — iteration methodology, conventions, and CLI details
- references/json-schemas.md — JSON prompt templates (5 scene types)
- examples/prompt-examples.md — JSON and narrative prompt examples

This skill should NOT: