Analyzes UI motion (translation, scale, rotation, opacity, color) and pixel-by-pixel changes across frames, screenshots, or videos, producing annotated images and JSON summaries for diagnosing MotionEyes visual gaps and regressions.
```shell
npx claudepluginhub edwardsanchez/motioneyes --plugin motioneyes
```

This skill uses the workspace's default tool permissions.
Analyze UI motion by comparing frames directly. This skill produces annotated images plus a JSON summary of motion signals (translation, scale, rotation, opacity, color) and changed regions. It complements MotionEyes traces when instrumentation is unavailable or insufficient.
Diagnose, fix, or validate SwiftUI animation and scroll bugs by instrumenting views with MotionEyes, capturing console traces via XcodeBuildMCP or CLI, and comparing motion data to expectations.
Captures screenshots and videos of running macOS app windows via osascript, screencapture, and ffmpeg for UI verification, mockups, and visual comparisons in agent workflows.
Extracts video frames with ffmpeg and reviews them via Claude's Read tool to verify UI flows and detect errors, stuck states, and test outcomes in videos.
Use motioneyes-animation-debug instead when you can instrument the app and need precise timing or value traces. Note that `.transition` and similar effects do not surface in MotionEyes logs.

Use one of these inputs:
- A video file, sampled at `--fps`.
- A directory of pre-captured frames (`--frames-dir`).
- A live macOS window capture via `screencapture` with a window id.

Follow this order:
1. Run the analyzer with `--trim` to generate frames, diffs, and keyframes.
2. Read the overlay artifacts (`grid/` + `sprite/`, plus `diff_grid/` when needed) for coordinate-indexed reading across frames.
3. Verify against the raw artifacts (`frames/` + `diff/`).
4. Re-run with `--all-pairs` only when confidence is low or anomalies are suspected.
5. Base conclusions on `analysis.json` and `summary.md` (see `--report`).
6. Try `--trim-threshold` and `--trim-relative` tuning if the animation is subtle.

Use this matrix to choose which artifacts to rely on.
| Goal | frames/ | grid/ | diff/ | diff_grid/ | sprite/ | Pair scope |
|---|---|---|---|---|---|---|
| Fast first pass (unknown issue) | Optional | Yes | Yes | Optional | Yes | Keyframe pairs |
| On-screen coordinate pinpointing ("where?") | Optional | Yes | Optional | Optional | Yes | Keyframe pairs |
| Pixel-change inspection ("what changed?") | Yes | Optional | Yes | Optional | Optional | Keyframe pairs |
| Timing/story summary | Optional | Yes | Optional | No | Yes | Keyframe pairs |
| Flicker / dropped-frame suspicion | Yes | Yes | Yes | Optional | Yes | All pairs |
| Regression verification across builds | Yes | Yes | Yes | Optional | Optional | All pairs |
| Visual design/polish review | Yes | No | Optional | No | Optional | Keyframe pairs |
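For agent-side tooling, the matrix above can be encoded as a small lookup table. A minimal sketch; the goal keys and dictionary shape are illustrative, not part of the analyzer's API:

```python
# Hypothetical encoding of the artifact matrix; keys are illustrative.
ARTIFACT_MATRIX = {
    "fast_first_pass":        {"artifacts": ["grid", "diff", "sprite"], "pairs": "keyframes"},
    "coordinate_pinpointing": {"artifacts": ["grid", "sprite"],         "pairs": "keyframes"},
    "pixel_change":           {"artifacts": ["frames", "diff"],         "pairs": "keyframes"},
    "timing_summary":         {"artifacts": ["grid", "sprite"],         "pairs": "keyframes"},
    "flicker_suspicion":      {"artifacts": ["frames", "grid", "diff", "sprite"], "pairs": "all"},
    "regression_check":       {"artifacts": ["frames", "grid", "diff"], "pairs": "all"},
    "visual_polish":          {"artifacts": ["frames"],                 "pairs": "keyframes"},
}

def plan_for(goal: str) -> dict:
    """Return which artifacts to rely on and the pair scope for a goal."""
    return ARTIFACT_MATRIX[goal]
```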
Every conclusion in `summary.md` must cite evidence explicitly:

- Never conclude from `diff/` alone. Pair each diff with `frame_n` and `frame_n+1`.
- Name the pair inspected (e.g. `pair 12->13`) and the artifacts used (frames, diff, optionally grid or diff_grid).
- Report a confidence score (`0.0-1.0`), citing coordinates from `grid/` or `sprite/` when available.
- If confidence is below `0.7`, do not finalize. Escalate pair coverage first.
- If a diff indicates change but frame context is ambiguous, mark it as uncertain and inspect neighboring pairs.

Use this escalation order:
1. Inspect neighboring pairs (`i-1`, `i`, `i+1`).
2. Re-run with `--all-pairs`.

Pair selection intent: keyframe pairs for routine review; all pairs for flicker or regression verification.
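The confidence gate and neighbor escalation can be sketched as a helper (hypothetical; the analyzer does not ship this function):

```python
def pairs_to_reinspect(confidences, threshold=0.7):
    """Given per-pair confidence scores (0.0-1.0), return the sorted pair
    indices to re-read: every low-confidence pair plus its neighbors."""
    flagged = set()
    for i, c in enumerate(confidences):
        if c < threshold:
            # Escalation step 1: inspect neighboring pairs i-1, i, i+1.
            flagged.update(j for j in (i - 1, i, i + 1) if 0 <= j < len(confidences))
    return sorted(flagged)

# If flagged pairs remain ambiguous after re-reading, escalate to --all-pairs.
```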
Example flow: capture frames first, then analyze them with `--frames-dir`.

Motion-gated capture (simulator):
```shell
python3 scripts/capture_sim_frames.py \
  --sim-id <SIM_ID> \
  --output-dir /path/to/frames_raw \
  --fps 15 \
  --frame-count 45 \
  --gate-threshold 4.0 \
  --gate-consecutive 1
```
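As a rough mental model of the gating flags: the capture starts keeping frames only once inter-frame change clears `--gate-threshold` for `--gate-consecutive` samples in a row. A simplified sketch; the real script compares captured screenshots and its exact logic may differ:

```python
def gate_frames(frames, threshold=4.0, consecutive=1):
    """Motion-gating sketch: drop leading frames until the mean absolute
    per-pixel difference vs the previous frame meets `threshold` for
    `consecutive` samples in a row, then keep everything after that.
    Frames are flat lists of pixel values here for illustration."""
    kept, streak, gated = [], 0, False
    prev = None
    for f in frames:
        if prev is not None:
            delta = sum(abs(a - b) for a, b in zip(f, prev)) / len(f)
            streak = streak + 1 if delta >= threshold else 0
            if streak >= consecutive:
                gated = True  # motion detected: the gate stays open
        if gated:
            kept.append(f)
        prev = f
    return kept
```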
Live macOS capture: `scripts/analyze_sequence.py --source macos --window-id <id> --duration <seconds> --fps <fps>`. This runs `screencapture` on the specified window id and writes the captured frames to `frames/`.

Useful flags:

- `--frames-dir` to analyze pre-captured frames.
- `--all-pairs` to render every pair.
- `--diff-grid` to overlay the alphanumeric grid on diff images for faster coordinate pinpointing.
- `--grid-theme auto|light|dark` to ensure the grid is readable on dark or light backgrounds.

The analyzer writes to the `--output` directory:
- `analysis.json`: machine-readable metrics.
- `summary.md`: short human summary.
- `frames/`: normalized frames.
- `grid/`: frames with alphanumeric grid overlay.
- `diff/`: absolute diff images.
- `diff_grid/`: diff images with grid overlay (when `--diff-grid` is enabled).
- `sprite/`: keyframe sprite sheet (grid-overlaid by default; raw only when `--no-grid` is used).

Interpretation guidance:
- Prefer the overlay artifacts (`grid/` + `sprite/`) for coordinate-indexed motion interpretation.
- Use `diff_grid/` when you need pairwise change maps with coordinate labels.
- Verify pixel-level conclusions against `frames/` + `diff/`.

Primary entrypoint:
```shell
python3 scripts/analyze_sequence.py --video /path/to/capture.mp4 --fps 15 --duration 1.0 --output /path/to/report
```
If the capture includes idle time before or after the animation, add `--trim` to auto-detect the motion window.
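Conceptually, `--trim` looks for the span of frame pairs whose change energy exceeds a threshold and discards everything outside it. A simplified sketch of that idea (illustrative only; the real detection is tuned via `--trim-threshold` and `--trim-relative`):

```python
def motion_window(diff_energies, threshold=2.0):
    """Sketch of auto-trimming: given per-pair mean-difference values,
    return the (first, last) pair indices whose energy meets the
    threshold, or None when no motion is detected."""
    active = [i for i, e in enumerate(diff_energies) if e >= threshold]
    if not active:
        return None  # no motion detected; nothing to keep
    return active[0], active[-1]
```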
Recommended profiles:
```shell
# Default summary + coordinate-indexed artifacts
python3 scripts/analyze_sequence.py --video /path/to/capture.mp4 --fps 15 --duration 1.0 --trim --diff-grid --output /path/to/report

# Deep investigation for flicker/regression
python3 scripts/analyze_sequence.py --video /path/to/capture.mp4 --fps 15 --duration 1.0 --trim --diff-grid --all-pairs --output /path/to/report

# Clean visual review (no overlay)
python3 scripts/analyze_sequence.py --video /path/to/capture.mp4 --fps 15 --duration 1.0 --trim --no-grid --output /path/to/report
```
See `scripts/analyze_sequence.py --help` for all flags.
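After a run, `analysis.json` can be consumed programmatically. A minimal loader sketch; the schema itself is documented in `references/report-schema.md`, so only generic parsing is shown here:

```python
import json
from pathlib import Path

def load_report(report_dir):
    """Load the machine-readable metrics from an analyzer --output directory.
    Interpretation of individual fields is left to the caller; see
    references/report-schema.md for the actual schema."""
    return json.loads((Path(report_dir) / "analysis.json").read_text())
```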
Create a local venv and install dependencies:
```shell
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
Load these references when needed:
- `references/motion-analysis-techniques.md`
- `references/report-schema.md`
- `references/grid-overlay-notes.md`