# memorex

Process video files with memorex to extract transcripts and keyframes for analysis.

Install the skill:

```bash
npx claudepluginhub jayzes/memorex --plugin memorex
```

This skill uses the workspace's default tool permissions.
## Prerequisites

**FFmpeg** (required for video/audio processing):
```bash
# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt install ffmpeg

# Verify installation
ffmpeg -version
```
**whisper.cpp** (required for transcription):
```bash
# macOS
brew install whisper-cpp

# Or build from source
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make
sudo cp main /usr/local/bin/whisper-cli
```
Note: The Whisper model (~148MB) will be automatically downloaded on first use.
## Installing memorex

From source (requires Go 1.21+):
```bash
# Clone and build
git clone https://github.com/jayzes/memorex
cd memorex
go build -o memorex ./cmd/memorex

# Move to PATH
sudo mv memorex /usr/local/bin/

# Or install directly
go install github.com/jayzes/memorex/cmd/memorex@latest
```
Verify the installation:

```bash
memorex --help
```
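Before running the workflow, it can help to confirm that all three dependencies resolve on `PATH`. A minimal pre-flight sketch; the `check_deps` helper is hypothetical, not part of memorex:

```shell
# Hypothetical pre-flight helper: report which memorex dependencies are on PATH.
check_deps() {
  local cmd status=0
  for cmd in ffmpeg whisper-cli memorex; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "ok: $cmd"
    else
      echo "missing: $cmd"
      status=1
    fi
  done
  return $status
}

check_deps || echo "install the missing tools before proceeding"
```

The helper returns non-zero if anything is missing, so it can gate a larger script.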
## Workflow

**IMPORTANT:** Run this entire workflow as a subagent using the Agent tool with subagent_type: "general-purpose". This keeps the potentially large memorex output (transcripts, keyframe images) out of the main conversation context.
Launch a single general-purpose Agent with a prompt like the following (fill in the bracketed values):
Process and analyze a video file for the user.
Video path: [video-path]
User's question/goal: [what the user wants to know about the video]
Follow these steps:
1. Confirm the video file exists (ls the path, note file size and format).
2. Run memorex:

   ```bash
   mkdir -p /tmp/memorex
   memorex -o /tmp/memorex/[video-basename]_analysis.md [video-path]
   ```

   Options to consider:
   - `-t 0.9` for fewer keyframes (more aggressive filtering of similar frames)
   - `-t 0.7` for more keyframes (more sensitive to changes)
   - `--no-transcript` if only visual analysis is needed
   - `--no-frames` for audio-only analysis
3. Read the generated markdown file at /tmp/memorex/[video-basename]_analysis.md using the Read tool.
4. Review the metadata (duration, frame count, keyframe count, token estimate).
5. Read relevant keyframe images from the frames directory using the Read tool (Claude can see images). Cross-reference transcript timestamps with keyframe timestamps.
6. Based on the user's goal, provide a thorough analysis.

Return a concise but complete analysis. Include the output file path so the user can reference it later.
### What to do in the main conversation
1. Confirm the video file path with the user if ambiguous
2. Launch the Agent subagent with the prompt above
3. Relay the agent's analysis back to the user in a concise summary
4. If the user has follow-up questions, launch another Agent to re-read the memorex output files and answer specifically
## Output Format
Memorex generates a structured markdown file with this format:
```markdown
# Video Analysis: example.mp4
## Metadata
- Duration: 2m 34s
- Original frames: 154
- Keyframes extracted: 12
- Token estimate: ~15,600
## Transcript
[0:00] First spoken words...
[0:15] More dialogue here...
[1:30] Later in the video...
## Keyframes
### Frame 1 (0:00)

### Frame 15 (0:15)

```

Interpreting the output:
- Transcript timestamps (`[M:SS]`) indicate when the words were spoken.
- For large videos (>30 keyframes), suggest re-running with a higher threshold (`-t 0.9`) to extract fewer frames.

If the user has follow-up questions about a previously analyzed video, launch another general-purpose Agent subagent with a prompt that tells it to re-read the memorex output files at /tmp/memorex/ and answer the specific question. This avoids loading the full output into the main context.
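As a quick spot-check, the timestamped transcript lines can be pulled out of a generated file with `grep`. A sketch against the output format shown above; the sample file here is fabricated for illustration:

```shell
# Write a small sample in the memorex output format sketched above.
cat > /tmp/sample_analysis.md <<'EOF'
# Video Analysis: example.mp4

## Transcript
[0:00] First spoken words...
[1:30] Later in the video...

## Keyframes
### Frame 1 (0:00)
EOF

# Pull out just the timestamped transcript lines.
grep -E '^\[[0-9]+:[0-9]{2}\]' /tmp/sample_analysis.md
```

The same pattern works on a real `/tmp/memorex/[video-basename]_analysis.md` file.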
## Examples

```bash
# Standard analysis
memorex video.mp4

# High-change video (presentations, demos)
memorex -t 0.9 demo.mov

# Static video (talking head, minimal visual changes)
memorex -t 0.7 interview.mp4

# Audio-only (podcast, voice memo)
memorex --no-frames podcast.mp3

# Custom output location
memorex -o ~/analysis/meeting.md recording.mp4

# Lower-quality frames (smaller files)
memorex -q 20 -s 0.3 large_video.mp4
```
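The examples above process one file at a time; a directory of videos can be handled with a small loop. A sketch, where the `./videos` directory is an assumption and the `_analysis.md` naming follows the workflow above:

```shell
# Batch sketch: run memorex over every .mp4 in a directory (assumes memorex is installed).
mkdir -p /tmp/memorex
for f in ./videos/*.mp4; do
  [ -e "$f" ] || continue            # skip if the glob matched nothing
  base=$(basename "$f" .mp4)         # e.g. ./videos/demo.mp4 -> demo
  memorex -o "/tmp/memorex/${base}_analysis.md" "$f"
done
```

Each video gets its own output file, which keeps later follow-up agents able to re-read a specific analysis.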
## Troubleshooting

**memorex not found:**
- Check it is on your PATH: `which memorex`
- Reinstall: `go install github.com/jayzes/memorex/cmd/memorex@latest`

**FFmpeg errors:**
- Check it is installed: `which ffmpeg`
- Update it: `brew upgrade ffmpeg` (macOS)

**Transcription fails:**
- Check the binary: `which whisper-cli`
- Check the model exists at `~/.cache/whisper/ggml-base.bin`
- Try `--no-transcript` to test video extraction separately

**Memory issues with large videos:**
- Scale frames down with `-s 0.25`
- Raise the keyframe threshold with `-t 0.95`