By galbaz1
Analyze YouTube videos and web content using Gemini AI for structured insights, timestamps, and comments; perform deep evidence-tiered research on topics or documents; ingest results into Weaviate knowledge base; generate interactive visualizations; and produce explainer videos via automated pipelines with scripting, narration, and rendering.
npx claudepluginhub galbaz1/video-research-mcp

Get workflow advice — which /gr command best fits your task
Analyze any content — URL, file, or pasted text
Diagnose /gr plugin setup, MCP wiring, and API connectivity
Check status of explainer video projects
Bridge workflow — analyze content with Gemini research tools, then synthesize an explainer video
Full explainer video workflow — setup, inject content, generate pipeline, review, render
First-time setup guide — verify config, discover commands, and run your first tool
Manually add knowledge to the Weaviate store
View and change Gemini model preset
Search and browse past research, video notes, and analyses
Launch Gemini Deep Research Agent with interview-driven brief
Deep document research with evidence tiers and cross-referencing
Deep research on any topic with evidence-tier labeling
Web search via Gemini grounding
Query, debug, and evaluate MLflow traces from Gemini tool calls
Multi-turn video Q&A session
Analyze a video (YouTube URL, local file, or directory)
Fetch YouTube video comments and analyze them via Gemini Flash for sentiment and key opinions (runs in background)
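The commands above are invoked as /gr slash commands inside Claude Code. A hypothetical session might look like the following — the command names /gr:video, /gr:research, and /gr:analyze appear in this plugin's hook descriptions, but the argument shapes shown here are assumptions; run the first-time setup command to see the real signatures:

```
/gr:video https://www.youtube.com/watch?v=<video-id>    # analyze a YouTube video
/gr:research "quantization-aware training"              # deep research with evidence tiers
/gr:analyze ./notes/article.md                          # analyze a local file
```

Background steps such as comment analysis and visualization run after the main analysis completes.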
Bridge agent that combines Gemini research analysis with video synthesis. Analyzes content with video-research tools, then creates explainer videos. Use when converting research, videos, or articles into explainer content.
Expert workflow advisor for the /gr plugin. Recommends the optimal command and workflow for research, video analysis, content extraction, or knowledge management tasks. Checks prior work first.
Multi-phase research specialist that chains Gemini research tools for comprehensive topic analysis. Use when you need thorough investigation with evidence tiers, source verification, and orchestrated research workflows.
Video analysis specialist that extracts comprehensive insights from YouTube videos. Use for detailed breakdowns, command extraction, workflow analysis, and iterative video Q&A sessions.
Full pipeline orchestrator for explainer videos. Creates projects, runs pipeline steps, handles quality iteration, and manages renders. Use when you need to produce a complete explainer video.
Generate interactive HTML visualization from analysis data and capture screenshot (runs in background after main analysis completes)
Generates interactive HTML visualizations (concept maps, evidence networks, knowledge graphs) from Gemini analysis results. Triggers automatically after /gr:video, /gr:research, /gr:analyze.
Recommends the optimal /gr command when the user asks about Gemini-powered research, YouTube video analysis, web content extraction, or Weaviate knowledge queries. Activates only when the request matches /gr plugin capabilities and no specific /gr command was already chosen — not for code editing, debugging, testing, git operations, or general questions.
Enhances image generation prompts with Subject-Context-Style structure, style anchors, character consistency, mcp-image workflows. Not for video generation, TTS, FFmpeg, audio, or design-to-code.
Use when working with MLflow traces: debugging via MCP tools, analyzing performance, logging feedback, writing custom scorers/evaluations, or cleaning up trace data
Builds precise research briefs through adversarial user interviews
Produces voiceover audio via ElevenLabs TTS API. Activates for TTS generation, voice tuning, audio ducking, or multilingual narration — not for voice AI agents, transcription, or music.
Teaches Claude how to use the 15 video explainer tools to create explainer videos from research content. Activates when working with video synthesis, explainer creation, or the video-explainer MCP server.
Generate AI video with Veo or Sora. Triggers on text-to-video, image-to-video, video extension, style-consistent generation. Not for video analysis, research, or FFmpeg editing.
Orchestrate multi-clip AI video projects — style anchors, chaining patterns, frame-level QA, montage assembly. Not for video analysis, research, provider settings, or FFmpeg encoding.
Teaches Claude how to effectively use the 28 video-research-mcp tools. Activates when working with video analysis, deep research, content extraction, web search, or knowledge store via the video-research MCP server.
Interactive onboarding for the Weaviate knowledge store. Guides users through choosing a deployment type (Cloud, Local Docker, or Custom), setting environment variables, and verifying the connection. Activates when users want to set up or configure Weaviate for persistent knowledge storage.
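As a sketch of what that setup skill configures — the exact variable names below are assumptions, not taken from the plugin; the interactive onboarding will tell you the real keys for your deployment type:

```shell
# Hypothetical Weaviate Cloud configuration (variable names assumed, not verified)
export WEAVIATE_URL="https://your-cluster.weaviate.cloud"
export WEAVIATE_API_KEY="your-api-key"

# Or, for the Local Docker option, a typical local endpoint:
# export WEAVIATE_URL="http://localhost:8080"
```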
FFmpeg video/audio processing — conversion, scaling, compression, trimming, concatenation, AI post-processing. Not for audio ducking/voice mixing (tts-production) or Remotion rendering.
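For orientation, the kind of operation the FFmpeg skill covers — this is a standard FFmpeg invocation for trimming and scaling, not a command taken from the plugin itself:

```shell
# Trim input.mp4 to the 5s-30s range, scale to 1280px wide (height auto,
# kept even for codec compatibility), and copy the audio stream unchanged
ffmpeg -i input.mp4 -ss 00:00:05 -to 00:00:30 -vf "scale=1280:-2" -c:a copy output.mp4
```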