Import UI screenshots and generate uxscii components automatically using vision analysis. Use when user wants to import, convert, or generate .uxm components from screenshots or images.
/plugin marketplace add trabian/fluxwing-skills
/plugin install fluxwing-skills@fluxwing-marketplace

This skill is limited to using the following tools:
- docs/screenshot-data-merging.md
- docs/screenshot-import-ascii.md
- docs/screenshot-import-examples.md
- docs/screenshot-import-helpers.md
- docs/screenshot-screen-generation.md
- docs/screenshot-validation-functions.md

Import UI screenshots and convert them to the uxscii standard by orchestrating specialized vision agents.
READ from (bundled templates - reference only):
- {SKILL_ROOT}/../uxscii-component-creator/templates/ - 11 component templates (for reference)
- {SKILL_ROOT}/docs/ - Screenshot processing documentation

WRITE to (project workspace):
- ./fluxwing/components/ - Extracted components (.uxm + .md)
- ./fluxwing/screens/ - Screen composition (.uxm + .md + .rendered.md)

NEVER write to skill directories - they are read-only!
Import a screenshot of a UI design and automatically generate uxscii components and screens by orchestrating specialized agents:
⚠️ ORCHESTRATION RULES:
YOU CAN (as orchestrator):
YOU CANNOT (worker mode - forbidden):
For screen composition: ALWAYS delegate to fluxwing-screen-scaffolder skill. It will spawn composer agents that create all screen files (.uxm, .md, .rendered.md).
Ask the user for the screenshot path if not provided:
```javascript
// Example
const screenshotPath = "/path/to/screenshot.png";
```
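Before spawning any agents, a quick sanity check on the path avoids a wasted vision run. A minimal sketch (the helper name is illustrative; the format list matches the supported formats named in the error-handling guidance):

```javascript
// Supported formats per the error-handling guidance: PNG, JPG, JPEG, WebP, GIF
const SUPPORTED_EXTENSIONS = [".png", ".jpg", ".jpeg", ".webp", ".gif"];

// Returns true when the path ends in a supported image extension (case-insensitive).
function isSupportedScreenshot(screenshotPath) {
  const lower = screenshotPath.toLowerCase();
  return SUPPORTED_EXTENSIONS.some((ext) => lower.endsWith(ext));
}
```

If the check fails, ask the user for a corrected path before proceeding.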
CRITICAL: Spawn the screenshot-vision-coordinator agent to orchestrate parallel vision analysis.
This agent will:
```javascript
Task({
  subagent_type: "general-purpose",
  description: "Analyze screenshot with vision analysis",
  prompt: `You are a UI screenshot analyzer extracting component structure for uxscii.

Screenshot path: ${screenshotPath}

Your task:
1. Read the screenshot image file
2. Analyze the UI layout structure (vertical, horizontal, grid, sidebar+main)
3. Detect all UI components (buttons, inputs, navigation, cards, etc.)
4. Extract visual properties (colors, spacing, borders, typography)
5. Identify component hierarchy (atomic vs composite)
6. Merge all findings into a unified data structure
7. Return valid JSON output

CRITICAL detection requirements:
- Do NOT miss navigation elements (check all edges - top, left, right, bottom)
- Do NOT miss small elements (icons, badges, close buttons, status indicators)
- Identify composite components (forms, cards with multiple elements)
- Note spatial relationships between components

Expected output format (valid JSON only, no markdown):
{
  "success": true,
  "screen": {
    "id": "screen-name",
    "type": "dashboard|login|profile|settings",
    "name": "Screen Name",
    "description": "What this screen does",
    "layout": "vertical|horizontal|grid|sidebar-main"
  },
  "components": [
    {
      "id": "component-id",
      "type": "button|input|navigation|etc",
      "name": "Component Name",
      "description": "What it does",
      "visualProperties": {...},
      "isComposite": false
    }
  ],
  "composition": {
    "atomicComponents": ["id1", "id2"],
    "compositeComponents": ["id3"],
    "screenComponents": ["screen-id"]
  }
}

Use your vision capabilities to analyze the screenshot carefully.`
})
```
Wait for the vision coordinator to complete and return results.
Check the returned data structure:
```javascript
const visionData = visionCoordinatorResult;

// Required fields
if (!visionData.success) {
  throw new Error(`Vision analysis failed: ${visionData.error}`);
}

if (!visionData.components || visionData.components.length === 0) {
  throw new Error("No components detected in screenshot");
}

// Navigation check (CRITICAL)
const hasNavigation = visionData.components.some(c =>
  c.type === 'navigation' || c.id.includes('nav') || c.id.includes('header')
);

if (visionData.screen.type === 'dashboard' && !hasNavigation) {
  console.warn("⚠️ Dashboard detected but no navigation found - verify all nav elements were detected");
}
```
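The same checks can be collected into a single helper that reports every problem at once, which gives the user one complete error message instead of failing on the first issue. A sketch only (the helper name is illustrative; field names follow the JSON contract from Step 2):

```javascript
// Collect all validation problems from the vision result instead of throwing on the first.
// Field names follow the vision coordinator's JSON output contract.
function validateVisionData(visionData) {
  const errors = [];
  if (!visionData.success) {
    errors.push(`Vision analysis failed: ${visionData.error}`);
  }
  if (!visionData.components || visionData.components.length === 0) {
    errors.push("No components detected in screenshot");
  }
  const hasNavigation = (visionData.components || []).some(
    (c) => c.type === "navigation" || c.id.includes("nav") || c.id.includes("header")
  );
  if (visionData.screen && visionData.screen.type === "dashboard" && !hasNavigation) {
    errors.push("Dashboard detected but no navigation found - verify all nav elements were detected");
  }
  return errors;
}
```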
CRITICAL: YOU MUST spawn ALL component generator agents in a SINGLE message with multiple Task tool calls. This is the ONLY way to achieve true parallel execution.
✅ DO THIS: Send ONE message containing ALL Task calls for all components.
❌ DON'T DO THIS: Send separate messages for each component (this runs them sequentially).
For each atomic component, create a Task call in the SAME message:
```javascript
Task({
  subagent_type: "general-purpose",
  description: "Generate email-input component",
  prompt: "You are a uxscii component generator. Generate component files from vision data.
Component data: {id: 'email-input', type: 'input', visualProperties: {...}}
Your task:
1. Load schema from {SKILL_ROOT}/../uxscii-component-creator/schemas/uxm-component.schema.json
2. Load docs from {SKILL_ROOT}/docs/screenshot-import-helpers.md
3. Generate .uxm file (valid JSON with default state only)
4. Generate .md file (ASCII template matching visual properties)
5. Save to ./fluxwing/components/
6. Return success with file paths
Follow uxscii standard strictly."
})

Task({
  subagent_type: "general-purpose",
  description: "Generate password-input component",
  prompt: "You are a uxscii component generator. Generate component files from vision data.
Component data: {id: 'password-input', type: 'input', visualProperties: {...}}
Your task:
1. Load schema from {SKILL_ROOT}/../uxscii-component-creator/schemas/uxm-component.schema.json
2. Load docs from {SKILL_ROOT}/docs/screenshot-import-helpers.md
3. Generate .uxm file (valid JSON with default state only)
4. Generate .md file (ASCII template matching visual properties)
5. Save to ./fluxwing/components/
6. Return success with file paths
Follow uxscii standard strictly."
})

Task({
  subagent_type: "general-purpose",
  description: "Generate submit-button component",
  prompt: "You are a uxscii component generator. Generate component files from vision data.
Component data: {id: 'submit-button', type: 'button', visualProperties: {...}}
Your task:
1. Load schema from {SKILL_ROOT}/../uxscii-component-creator/schemas/uxm-component.schema.json
2. Load docs from {SKILL_ROOT}/docs/screenshot-import-helpers.md
3. Generate .uxm file (valid JSON with default state only)
4. Generate .md file (ASCII template matching visual properties)
5. Save to ./fluxwing/components/
6. Return success with file paths
Follow uxscii standard strictly."
})
```
Repeat this pattern for ALL atomic components in the SAME message, then add the composite components in the SAME message:
```javascript
Task({
  subagent_type: "general-purpose",
  description: "Generate login-form composite",
  prompt: "You are a uxscii component generator. Generate composite component from vision data.
Component data: {id: 'login-form', type: 'form', components: [...], visualProperties: {...}}
IMPORTANT: Include component references in props.components array.
Your task:
1. Load schema from {SKILL_ROOT}/../uxscii-component-creator/schemas/uxm-component.schema.json
2. Generate .uxm with components array in props
3. Generate .md with {{component:id}} references
4. Save to ./fluxwing/components/
5. Return success
Follow uxscii standard strictly."
})
```
Remember: ALL Task calls must be in a SINGLE message for parallel execution!
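Since every generator prompt follows the same template, the per-component prompts can be built programmatically from the vision data rather than hand-written, which keeps the N Task calls consistent. A sketch (the helper name is hypothetical; the prompt text mirrors the Task examples above):

```javascript
// Build one generator-agent prompt per component from the vision data.
// Hypothetical helper - prompt wording mirrors the Task examples in this step.
function buildGeneratorPrompt(component, skillRoot) {
  const role = component.isComposite
    ? "Generate composite component from vision data."
    : "Generate component files from vision data.";
  return [
    `You are a uxscii component generator. ${role}`,
    `Component data: ${JSON.stringify(component)}`,
    "Your task:",
    `1. Load schema from ${skillRoot}/../uxscii-component-creator/schemas/uxm-component.schema.json`,
    `2. Load docs from ${skillRoot}/docs/screenshot-import-helpers.md`,
    "3. Generate .uxm file (valid JSON with default state only)",
    "4. Generate .md file (ASCII template matching visual properties)",
    "5. Save to ./fluxwing/components/",
    "6. Return success with file paths",
    "Follow uxscii standard strictly.",
  ].join("\n");
}
```

Map this over `visionData.components` to produce the description/prompt pairs, then emit all the Task calls together in one message.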
CRITICAL: YOU ARE AN ORCHESTRATOR - delegate screen creation to the screen-scaffolder skill!
⚠️ DO NOT create screen files yourself using Write/Edit tools!
After all components are created, use the fluxwing-screen-scaffolder to compose screens:
Single Screen Import:
```javascript
// Invoke screen-scaffolder skill
Skill({ command: "fluxwing-skills:fluxwing-screen-scaffolder" })

// The scaffolder will:
// 1. See all components already exist (you just created them)
// 2. Skip component creation (Step 3)
// 3. Spawn ONE composer agent (Step 4)
// 4. Composer creates .uxm + .md + .rendered.md
```
Multiple Screenshots Import (N > 1):
```javascript
// After analyzing ALL screenshots and creating ALL components,
// invoke the screen-scaffolder skill ONCE
Skill({ command: "fluxwing-skills:fluxwing-screen-scaffolder" })

// Tell it about the screens:
// "I've imported N screenshots and created X components.
//  Please compose these N screens: [list screen names and component lists]"

// The scaffolder will:
// 1. Detect multi-screen scenario (N > 1)
// 2. Confirm with user
// 3. Skip component creation (all exist)
// 4. Spawn N composer agents in parallel (one per screen)
// 5. Each composer creates 3 files (.uxm, .md, .rendered.md)

// Result: 3N files created by scaffolder agents
```
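The file math above is worth verifying in the final summary: each component yields 2 files (.uxm + .md) and each screen yields 3 (.uxm + .md + .rendered.md). A trivial helper (name is illustrative) makes the expected total explicit:

```javascript
// Each component produces .uxm + .md; each screen produces .uxm + .md + .rendered.md.
function expectedFileCount(componentCount, screenCount) {
  return componentCount * 2 + screenCount * 3;
}
```

Comparing this number against the files actually written catches silently failed generator agents before reporting success.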
What you provide to scaffolder:
Create comprehensive summary:
```markdown
# Screenshot Import Complete ✓

## Screenshot Analysis
- File: ${screenshotPath}
- Screen type: ${screenData.type}
- Layout: ${screenData.layout}

## Components Generated

### Atomic Components (${atomicCount})
${atomicComponents.map(c => `✓ ${c.id} (${c.type})`).join('\n')}

### Composite Components (${compositeCount})
${compositeComponents.map(c => `✓ ${c.id} (${c.type})`).join('\n')}

### Screen
✓ ${screenId}

## Files Created
**Components** (./fluxwing/components/):
- ${totalComponentFiles} files (.uxm + .md)

**Screen** (./fluxwing/screens/):
- ${screenId}.uxm
- ${screenId}.md
- ${screenId}.rendered.md

**Total: ${totalFiles} files created**

## Performance
- Vision analysis: Parallel (3 agents) ⚡
- Component generation: Parallel (${atomicCount + compositeCount} agents) ⚡
- Total time: ~${estimatedTime}s

## Next Steps
1. Review screen: `cat ./fluxwing/screens/${screenId}.rendered.md`
2. Add interaction states to components
3. Customize components as needed
4. View all components
```
This skill orchestrates 5 specialized vision agents:
```
User: Import this screenshot at /Users/me/Desktop/login.png

Skill: I'll import the UI screenshot and generate uxscii components!

[Validates screenshot exists]

Step 1: Analyzing screenshot with vision agents...
[Spawns vision coordinator]

✓ Vision analysis complete:
  - Detected 5 components
  - Screen type: login
  - Layout: vertical-center

Step 2: Generating component files in parallel...
[Spawns 5 component generator agents in parallel]

✓ All components generated!

# Screenshot Import Complete ✓

## Components Generated
✓ email-input (input)
✓ password-input (input)
✓ submit-button (button)
✓ cancel-link (link)
✓ login-form (form)

## Files Created
- 10 component files
- 3 screen files
Total: 13 files

Performance: ~45s (5 agents in parallel) ⚡

Next steps:
- Review: cat ./fluxwing/screens/login-screen.rendered.md
- Add states to make components interactive
```
Ensure imported components include:
If vision analysis fails:
```
✗ Vision analysis failed: [error message]

Please check:
- Screenshot file exists and is readable
- File format is supported (PNG, JPG, JPEG, WebP, GIF)
- Screenshot contains visible UI elements
```
If component generation fails:
```
⚠️ Partial success: 3 of 5 components generated

Successful:
✓ email-input
✓ password-input
✓ submit-button

Failed:
✗ cancel-link: [error]
✗ login-form: [error]
```
You can retry failed components or create them manually.
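Producing that partial-success report is a simple partition over the generator results. A sketch, assuming each generator agent returns an object shaped like `{ id, success, error? }` (that shape is an assumption, not part of the uxscii contract):

```javascript
// Split generator results into successes and failures for the partial-success report.
// Assumed result shape: { id, success, error? } per generator agent.
function partitionResults(results) {
  const succeeded = results.filter((r) => r.success).map((r) => r.id);
  const failed = results.filter((r) => !r.success).map((r) => `${r.id}: ${r.error}`);
  return { succeeded, failed };
}
```

The `succeeded` list renders as the ✓ lines, `failed` as the ✗ lines, and `failed` doubles as the retry list if the user asks to regenerate.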
If no components detected:
```
✗ No components detected in screenshot.

This could mean:
- Screenshot is blank or unclear
- UI elements are too small to detect
- Screenshot is not a UI design

Please try a different screenshot or create components manually.
```
See {SKILL_ROOT}/docs/ for detailed documentation on:
You're helping users rapidly convert visual designs into uxscii components!