Bootstrap and run an evolutionary idea selector — explore diverse variants, pick a winner, converge through amplify/blend/refine mutations until a final candidate emerges. Self-driving: browser picks → Claude generates prompts → imageCLI/voiceCLI renders → browser auto-advances. Triggers: "forge ideas" | "evolutionary selector" | "explore and converge" | "selection forge" | "pick from variants" | "refine to a winner".
```shell
npx claudepluginhub roxabi/roxabi-plugins --plugin idna
```
Bootstrap a self-driving selection session for any creative asset: avatar images, voice styles, writing tones, logo concepts, etc.
Flow: Explore (N diverse variants) → pick winner → Converge (amplify / blend / refine) → repeat until Finalize.
Stack:
- session.json — state machine (current round, winner, variants, phase, status)
- forge.html — dynamic browser picker (polls the server, auto-advances rounds)
- forge_server.py — local HTTP server (port 8082) that calls the Claude API for next-round prompts and triggers generation
- generate_round.py — 2-phase image generation script (imageCLI / FLUX.2-klein)

Read before implementing:
${CLAUDE_PLUGIN_ROOT}/references/idna-session-schema.md — session.json structure + state machine
${CLAUDE_PLUGIN_ROOT}/references/idna_server.py — forge_server.py reference implementation
${CLAUDE_PLUGIN_ROOT}/references/idna-template.html — forge.html reference implementation
${CLAUDE_PLUGIN_ROOT}/references/idna-generate-round.py — 2-phase image gen reference (imageCLI)
Identify:
- Output directory (default: ~/.roxabi/forge/<project>/<subject>/)

Ask the user for any missing info before proceeding.
Create the output directory and the three runtime files.
Create <output_dir>/session.json from the schema in references/idna-session-schema.md.
Initial state:
- phase: "explore", round: 0, status: "ready"
- winner: null, runner_up: null, cycle_winners: []
- identity: the fixed description provided by the user
- rounds[0]: 4 variants for the explore phase — generate diverse params covering different poles
For each variant, set params (expression, lighting, framing, mood for images; tone, pace, affect for voice; etc.) and compose the full prompt string by combining identity + params.
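Composing the prompt from identity plus params can be as simple as the sketch below (joining with commas is an assumption about prompt style; adapt to the artifact type):

```python
def compose_prompt(identity: str, params: dict) -> str:
    """Combine the fixed identity with per-variant params into one prompt string."""
    return ", ".join([identity, *params.values()])
```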
Set seeds: round 0 → seeds 0–3, round N → seeds N×100 to N×100+2.
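Putting the pieces together, a minimal round-0 session.json might look like the fragment below. The identity string and the variant's param keys are illustrative; the authoritative field list is in references/idna-session-schema.md.

```json
{
  "phase": "explore",
  "round": 0,
  "status": "ready",
  "winner": null,
  "runner_up": null,
  "cycle_winners": [],
  "identity": "warm, freckled sci-fi engineer in her 30s",
  "rounds": [
    {
      "variants": [
        {
          "id": "v0",
          "seed": 0,
          "params": {"expression": "soft smile", "lighting": "golden hour"},
          "prompt": "warm, freckled sci-fi engineer in her 30s, soft smile, golden hour"
        }
      ]
    }
  ]
}
```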
Copy references/idna_server.py and adapt:
- FORGE_DIR to the output directory
- IMAGECLI_PROJECT to the appropriate generator project path (imageCLI for images, voiceCLI for voice)
- GENERATE_SCRIPT to the appropriate round generator script
- MUTATION_SYSTEM / MUTATION_USER_TMPL to match the artifact type (portrait prompts for images, voice style descriptions for voice, etc.)

Copy references/idna-template.html as-is — it reads all state from the server dynamically, no modifications needed unless the artifact type requires a different display (e.g. audio player instead of image grid).
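For an image subject, the adapted constants at the top of forge_server.py might look like this. All values are illustrative — the template placeholders in MUTATION_USER_TMPL in particular are an assumption, not the reference implementation's exact strings:

```python
from pathlib import Path

# Illustrative values for an avatar-image session (adapt per artifact type).
FORGE_DIR = Path.home() / ".roxabi" / "forge" / "myproject" / "avatar"  # output dir
IMAGECLI_PROJECT = Path.home() / "projects" / "imageCLI"                # generator project
GENERATE_SCRIPT = "generate_round.py"                                   # round generator
MUTATION_SYSTEM = "You write concise portrait image prompts."           # Claude system prompt
MUTATION_USER_TMPL = "Winner: {winner}\nMutation: {mutation}\nWrite the next-round prompts."
```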
Copy references/idna-generate-round.py to the output directory. No modifications needed — it reads job files from round_N/prompts/ and writes PNGs to round_N/.
Create <output_dir>/round_0/prompts/ and write one JSON job file per variant:
```json
{"id": "v0", "label": "V0", "seed": 0, "width": 768, "height": 1024, "prompt": "..."}
```
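Writing the job files can be sketched as below (a hypothetical helper, not one of the skill's runtime files; it applies the seed rule from above — round 0 uses seeds 0..3, round N uses N×100 onward):

```python
import json
from pathlib import Path

def write_round_jobs(output_dir, round_n, prompts, width=768, height=1024):
    """Write one JSON job file per variant into round_N/prompts/."""
    base = round_n * 100  # round 0 -> seeds start at 0, round N -> N*100
    jobs_dir = Path(output_dir) / f"round_{round_n}" / "prompts"
    jobs_dir.mkdir(parents=True, exist_ok=True)
    jobs = []
    for i, prompt in enumerate(prompts):
        job = {"id": f"v{i}", "label": f"V{i}", "seed": base + i,
               "width": width, "height": height, "prompt": prompt}
        (jobs_dir / f"v{i}.json").write_text(json.dumps(job, indent=2))
        jobs.append(job)
    return jobs
```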
Then run generation:
```shell
cd <output_dir>
uv run --project ~/projects/imageCLI python generate_round.py round_0 --steps 28
```
For voice subjects, create audio samples using voiceCLI with the variant params. For text subjects (writing tones, etc.), render the variants as text files or show them inline in forge.html.
Add to supervisord (~/projects/lyra-stack/conf.d/) with autostart=false:
```ini
[program:forge-<subject>]
command=uv run <output_dir>/forge_server.py
directory=<output_dir>
environment=HOME="%(ENV_HOME)s",PATH="%(ENV_HOME)s/.local/bin:%(ENV_PATH)s"
autostart=false
autorestart=true
...
```
Then start:
```shell
supervisorctl reread && supervisorctl update
supervisorctl start forge-<subject>
```
Report:
- Output directory: <output_dir>/
- Forge server: http://localhost:8082
- Picker UI: http://localhost:8080/<project>/<subject>/forge.html
- Keys: 1/2/3/4 to select, Enter to confirm (explore) · a/b/c (converge) · f to finalize

After a pick:
Plateau detection: if the user picks the same mutation type twice in a row (e.g. refine twice), suggest finalizing.
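The plateau check can be sketched as follows (assuming the mutation type of each pick is recorded in order; the list name is illustrative):

```python
def plateau(pick_history: list[str]) -> bool:
    """True when the last two picks used the same mutation type,
    e.g. refine twice in a row -> suggest finalizing."""
    return len(pick_history) >= 2 and pick_history[-1] == pick_history[-2]
```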
$ARGUMENTS