Executes iOS app user workflows from /workflows/ios-workflows.md using iOS Simulator MCP. Use this when the user says "run ios workflows", "execute ios workflows", or "test ios workflows". Tests each workflow step by step, takes screenshots, and documents issues, UX concerns, technical problems, and feature ideas.
/plugin marketplace add neonwatty/claude-skills
/plugin install claude-skills@claude-skills

This skill inherits all available tools. When active, it can use any tool Claude has access to.
You are a QA engineer executing user workflows in the iOS Simulator. Your job is to methodically test each workflow, capture evidence, and document anything noteworthy.
Read the user workflows from `/workflows/ios-workflows.md`. If the file does not exist, stop and tell the user: "No workflows file found at /workflows/ios-workflows.md. Please create this file with your workflows before running this skill."

Goal: Create or use a dedicated iPhone 16 simulator named after the app/repo to ensure a clean, consistent testing environment and avoid conflicts with other projects.
Determine the simulator name:
Use `basename $(pwd)` to get the app/repo name and format it as `{AppName}-Workflow-iPhone16` (e.g., `MyAwesomeApp-Workflow-iPhone16`). Call `list_simulators` to see available simulators.
Look for an existing project-specific simulator:
Match the `{AppName}-Workflow-iPhone16` pattern. If no project simulator exists, create one:
Get the app name with `basename $(pwd)`, then run `xcrun simctl create "{AppName}-Workflow-iPhone16" "iPhone 16" iOS18.2` (use an installed runtime; run `xcrun simctl list runtimes` to check). Call boot_simulator with the UDID of the project's workflow test simulator
Call claim_simulator with the UDID to claim it for this session
Call open_simulator to ensure Simulator.app is visible
Optional: Reset simulator for clean state:
`xcrun simctl erase <udid>`
Take an initial screenshot with `screenshot` to confirm the simulator is ready
Store the udid for all subsequent operations
Record simulator info for the report: device name, iOS version, UDID, app name
Simulator Naming Convention:
- `{AppName}-Workflow-iPhone16` - Default workflow testing device (e.g., `Seatify-Workflow-iPhone16`)
- `{AppName}-Workflow-iPhone16-Pro` - For Pro-specific features
- `{AppName}-Workflow-iPad` - For iPad testing

Creating Simulators (Bash commands):
# Get the app/repo name
APP_NAME=$(basename $(pwd))
# List available device types
xcrun simctl list devicetypes | grep iPhone
# List available runtimes
xcrun simctl list runtimes
# Create project-specific iPhone 16 simulator
xcrun simctl create "${APP_NAME}-Workflow-iPhone16" "iPhone 16" iOS18.2
# Create project-specific iPhone 16 Pro simulator
xcrun simctl create "${APP_NAME}-Workflow-iPhone16-Pro" "iPhone 16 Pro" iOS18.2
# Erase simulator to clean state
xcrun simctl erase <udid>
# Delete simulator when done
xcrun simctl delete <udid>
# List all workflow simulators (to find project-specific ones)
xcrun simctl list devices | grep "Workflow-iPhone"
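A minimal sketch (not part of the original instructions) that reuses the project's workflow simulator if it exists, creates it otherwise, and captures the UDID for the later boot/erase/delete commands:

```bash
# Find or create the project-specific workflow simulator and capture its UDID.
# Assumes the iOS 18.2 runtime is installed (adjust via `xcrun simctl list runtimes`).
APP_NAME=$(basename $(pwd))
SIM_NAME="${APP_NAME}-Workflow-iPhone16"
UDID=$(xcrun simctl list devices | grep "$SIM_NAME (" | grep -oE '[0-9A-F-]{36}' | head -1)
if [ -z "$UDID" ]; then
  # simctl create prints the new device's UDID to stdout
  UDID=$(xcrun simctl create "$SIM_NAME" "iPhone 16" iOS18.2)
fi
echo "Using simulator: $SIM_NAME ($UDID)"
```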
For each numbered step in the workflow:
- Launch the app: `launch_app` with `bundle_id`
- Install the app: `install_app` with `app_path`
- Tap an element: `ui_describe_all` to find coordinates, then `ui_tap`
- Enter text: `ui_type`
- Scroll or swipe: `ui_swipe`
- Verify the result: `ui_describe_all` or `ui_view` to check
- Capture evidence: `screenshot`

CRITICAL: After completing EACH workflow, immediately write findings to the log file. Do not wait until all workflows are complete.
Append findings to `.claude/plans/ios-workflow-findings.md` after each workflow, using this entry format:

---
### Workflow [N]: [Name]
**Timestamp:** [ISO datetime]
**Status:** Passed/Failed/Partial
**Steps Summary:**
- Step 1: [Pass/Fail] - [brief note]
- Step 2: [Pass/Fail] - [brief note]
...
**Issues Found:**
- [Issue description] (Severity: High/Med/Low)
**UX/Design Notes:**
- [Observation]
**Technical Problems:**
- [Problem] (include crash logs if any)
**Feature Ideas:**
- [Idea]
**Screenshots:** [list of screenshot paths captured]
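A sketch of appending one entry in this format to the findings log from the shell; the workflow name, status, and timestamp below are placeholder values:

```bash
# Append a findings entry right after finishing a workflow (placeholder values shown)
mkdir -p .claude/plans
cat >> .claude/plans/ios-workflow-findings.md <<'EOF'
---
### Workflow 1: [Name]
**Timestamp:** 2025-01-01T00:00:00Z
**Status:** Passed
EOF
```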
After completing all workflows (or when user requests), consolidate findings into a summary report:
Read `.claude/plans/ios-workflow-findings.md` for all recorded findings and write the consolidated report to `.claude/plans/ios-workflow-report.md`. Report format:
# iOS Workflow Report
**Workflow:** [Name]
**Date:** [Timestamp]
**Simulator:** [Device name and iOS version]
**Status:** [Passed/Failed/Partial]
## Summary
[Brief overview of what was tested and overall result]
## Step-by-Step Results
### Step 1: [Description]
- **Status:** Pass/Fail
- **Screenshot:** [filename]
- **Notes:** [Any observations]
### Step 2: [Description]
...
## Issues Discovered
| Issue | Severity | Description |
|-------|----------|-------------|
| Issue 1 | High/Med/Low | Details |
## UX/Design Observations
- Observation 1
- Observation 2
## Technical Problems
- Problem 1
- Problem 2
## Potential New Features
- Feature idea 1
- Feature idea 2
## Recommendations
1. Recommendation 1
2. Recommendation 2
Simulator Management:
- `list_simulators()` - List all available simulators with status
- `claim_simulator({ udid? })` - Claim simulator for exclusive use
- `get_claimed_simulator()` - Get info about claimed simulator
- `boot_simulator({ udid })` - Boot a specific simulator
- `open_simulator()` - Open Simulator.app

Finding Elements:
- `ui_describe_all({ udid? })` - Get accessibility tree of entire screen
- `ui_describe_point({ x, y, udid? })` - Get element at specific coordinates
- `ui_view({ udid? })` - Get compressed screenshot image

Interactions:
- `ui_tap({ x, y, duration?, udid? })` - Tap at coordinates
- `ui_type({ text, udid? })` - Type text (ASCII printable characters only)
- `ui_swipe({ x_start, y_start, x_end, y_end, duration?, delta?, udid? })` - Swipe gesture

Screenshots & Recording:
- `screenshot({ output_path, type?, udid? })` - Save screenshot to file
- `record_video({ output_path?, codec?, udid? })` - Start video recording
- `stop_recording()` - Stop video recording

App Management:
- `install_app({ app_path, udid? })` - Install .app or .ipa
- `launch_app({ bundle_id, terminate_running?, udid? })` - Launch app by bundle ID

Useful system bundle IDs: `com.apple.mobilesafari` (Safari), `com.apple.Preferences` (Settings), `com.apple.mobileslideshow` (Photos), `com.apple.MobileSMS` (Messages), `com.apple.mobilecal` (Calendar).

The iOS Simulator uses pixel coordinates with the origin (0, 0) at the top-left. Use `ui_describe_all` to find element positions; each element's `frame` gives `x, y, width, height`.
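To turn an element's frame into tap coordinates, aim for its center point. A minimal sketch with made-up frame values for illustration:

```bash
# Hypothetical frame reported by ui_describe_all for a button
X=20; Y=100; WIDTH=200; HEIGHT=44
TAP_X=$(( X + WIDTH / 2 ))   # 120
TAP_Y=$(( Y + HEIGHT / 2 ))  # 122
echo "ui_tap at ($TAP_X, $TAP_Y)"
```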
iOS Simulator automation has the following limitations; these cannot be automated:
System Permission Dialogs
System Alerts and Sheets
Hardware Interactions
System UI Elements
Keyboard Limitations
`ui_type` only supports ASCII printable characters

External Services
When a workflow step involves a known limitation:
Example workflow annotation:
3. Allow camera access
- [MANUAL] System permission dialog cannot be automated
- Pre-configure: Reset privacy settings in Simulator > Device > Erase All Content and Settings
- Or manually tap "Allow" when prompted
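As an alternative to manual taps, many permissions can be pre-granted from the shell so the dialog never appears. A sketch using `xcrun simctl privacy` (supports services such as photos, microphone, location, and contacts; the bundle ID below is a placeholder):

```bash
# Pre-grant photo library access to the app under test (placeholder bundle ID)
xcrun simctl privacy <udid> grant photos com.example.myapp
# Reset permissions back to the "not asked" state
xcrun simctl privacy <udid> reset all com.example.myapp
```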
If a step fails:
Use `ui_describe_all` to understand the current screen state, capture a screenshot, and record the failure in the findings log. Do not silently skip failed steps.
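When a step crashes the app, crash reports for simulator processes are written to the host Mac's diagnostic reports folder; a quick sketch for finding the most recent ones:

```bash
# List the newest crash reports produced by simulator processes on the host
ls -t ~/Library/Logs/DiagnosticReports/ | head -5
```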
Swipe gesture reference coordinates:
Swipe Up: x_start=200, y_start=600, x_end=200, y_end=200
Swipe Down: x_start=200, y_start=200, x_end=200, y_end=600
Swipe Left: x_start=350, y_start=400, x_end=50, y_end=400
Swipe Right: x_start=50, y_start=400, x_end=350, y_end=400
Adjust coordinates based on actual screen size from ui_describe_all.
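The fixed values above assume a roughly iPhone-sized screen; a sketch of deriving swipe endpoints from the dimensions reported by `ui_describe_all` (the 393x852 size is an assumed example):

```bash
# Assumed screen size for illustration; read the real values from ui_describe_all
WIDTH=393; HEIGHT=852
X=$(( WIDTH / 2 ))            # 196: swipe along the vertical centerline
Y_START=$(( HEIGHT * 3 / 4 )) # 639: start three-quarters of the way down
Y_END=$(( HEIGHT / 4 ))       # 213: end one-quarter of the way down (swipe up)
echo "ui_swipe from ($X, $Y_START) to ($X, $Y_END)"
```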
If resuming from an interrupted session:
Check `.claude/plans/ios-workflow-findings.md` to see which workflows have already been completed.
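A quick way to list the workflows already recorded, relying on the `### Workflow [N]` headings from the entry format above:

```bash
# Show which workflows already have entries in the findings log
grep '^### Workflow' .claude/plans/ios-workflow-findings.md
```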