Generates single-file interactive HTML visualizations (concept maps, evidence networks, knowledge graphs) from Gemini analysis results after /gr:video, /gr:research, /gr:analyze commands.
Install: `npx claudepluginhub galbaz1/video-research-mcp`. This skill uses the workspace's default tool permissions.
Generate a single-file interactive HTML visualization after every `/gr:*` analysis, then capture a Playwright screenshot. The agent decides enrichment depth autonomously but respects user steering ("skip visualization", "deeper on X").
| Source Command | Template | Visualization Type |
|---|---|---|
| `/gr:video`, `/gr:video-chat` | `video-concept-map` | Concept map with knowledge states |
| `/gr:research` | `research-evidence-net` | Evidence network with tier filtering |
| `/gr:analyze` | `content-knowledge-graph` | Knowledge graph with entity types |
Read the appropriate template from `skills/gemini-visualize/templates/` before generating.
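The command-to-template mapping in the table above can be sketched as a lookup. This is an illustrative helper, not part of the skill; `templateFor` and the `file` field (the output filename used later in the screenshot step) are assumptions.

```javascript
// Hypothetical lookup from /gr:* command to template and output filename.
// The mapping itself comes from the table above; the helper is illustrative.
const TEMPLATES = {
  "/gr:video":      { template: "video-concept-map",       file: "concept-map.html" },
  "/gr:video-chat": { template: "video-concept-map",       file: "concept-map.html" },
  "/gr:research":   { template: "research-evidence-net",   file: "evidence-net.html" },
  "/gr:analyze":    { template: "content-knowledge-graph", file: "knowledge-graph.html" },
};

function templateFor(command) {
  const entry = TEMPLATES[command];
  if (!entry) throw new Error(`No visualization template for ${command}`);
  return entry;
}
```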
Every generated visualization MUST be a single self-contained HTML file with:

- A dark background (`#0a0a0f`), nodes/text in light colors, and high-contrast edges
- `<canvas>` for large graphs, `<svg>` for smaller ones
- A `state = {...}` object at the top of `<script>`

The agent autonomously decides:
- Default knowledge state: `unknown`, unless the user has indicated familiarity

Every visualization includes a "Generate Prompt" button. This lets users cycle knowledge states, then generate a targeted prompt to paste back into Claude.
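Taken together, the `state` object, knowledge-state cycling, and the "Generate Prompt" button might look like the sketch below. Only the single top-level `state` object, the `#0a0a0f` palette, and the `unknown` default come from this skill; the field names, the cycle order beyond `unknown`, and the prompt wording are assumptions.

```javascript
// Illustrative shape of the state object at the top of <script>.
// Only `state` itself, the dark palette, and "unknown" are from the skill.
const state = {
  nodes: [
    { id: "a", label: "Transformers", knowledge: "unknown" },
    { id: "b", label: "Attention", knowledge: "unknown" },
  ],
  edges: [{ source: "a", target: "b" }],
  theme: { background: "#0a0a0f", node: "#e8e8f0", edge: "#8888aa" },
};

// Assumed cycle order; the skill only specifies the "unknown" default.
const KNOWLEDGE_STATES = ["unknown", "learning", "known"];

function cycleKnowledge(node) {
  const i = KNOWLEDGE_STATES.indexOf(node.knowledge);
  node.knowledge = KNOWLEDGE_STATES[(i + 1) % KNOWLEDGE_STATES.length];
  return node.knowledge;
}

// What the "Generate Prompt" button might produce: a targeted prompt
// listing the concepts still marked unknown.
function generatePrompt(nodes) {
  const unknown = nodes.filter(n => n.knowledge === "unknown").map(n => n.label);
  return unknown.length
    ? `Go deeper on: ${unknown.join(", ")}`
    : "All concepts marked known.";
}
```

Clicking a node would call `cycleKnowledge`, and the button would copy the `generatePrompt` output for the user to paste back into Claude.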
After generating and saving the HTML file:
1. Restart the local server on port 18923, serving the artifact directory (e.g. `gr/video/<slug>/`):

   ```bash
   lsof -ti:18923 | xargs kill -9 2>/dev/null; python3 -m http.server 18923 --directory <artifact-dir> &
   ```

2. Use `mcp__playwright__browser_navigate` to open `http://localhost:18923/<viz-filename>`, where `<viz-filename>` is:
   - `concept-map.html` for video/video-chat
   - `evidence-net.html` for research
   - `knowledge-graph.html` for analyze
3. Use `mcp__playwright__browser_wait_for` with a 2-second timeout for the canvas/SVG to render.
4. Use `mcp__playwright__browser_take_screenshot` and save the raw bytes to `<artifact-dir>/screenshot.png`.
5. Close the browser with `mcp__playwright__browser_close`.

If Playwright fails (not installed, browser error), skip the screenshot gracefully: the HTML visualization is the primary artifact. Log the failure but don't block the workflow.
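The graceful-fallback rule can be sketched as a best-effort wrapper. `takeScreenshot` here is a stand-in for the `mcp__playwright__*` tool calls, not a real API; the return shape is an assumption.

```javascript
// Sketch of the fallback rule: the screenshot is best-effort, the HTML
// file is the primary artifact. `takeScreenshot` stands in for the
// Playwright MCP tool calls and is not a real API.
async function captureScreenshot(takeScreenshot, outPath) {
  try {
    await takeScreenshot(outPath);
    return { screenshot: outPath };
  } catch (err) {
    // Playwright missing or browser error: log and continue.
    console.error(`screenshot skipped: ${err.message}`);
    return { screenshot: null };
  }
}
```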
All artifacts for one analysis live together:
```
gr/<category>/<slug>/
├── analysis.md        # Progressive markdown (timestamped entries)
├── concept-map.html   # Interactive visualization (or evidence-net.html, knowledge-graph.html)
└── screenshot.png     # Playwright capture of the visualization
```
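The layout above implies a simple path convention. A minimal sketch, assuming the category determines the visualization filename as in the screenshot steps; `artifactPaths` is an illustrative helper, not part of the skill.

```javascript
// Hypothetical helper computing the per-analysis artifact paths from the
// layout shown above. The category-to-filename mapping mirrors the
// screenshot steps; the helper name is illustrative.
const VIZ_FILE = {
  video: "concept-map.html",
  research: "evidence-net.html",
  analyze: "knowledge-graph.html",
};

function artifactPaths(category, slug) {
  const dir = `gr/${category}/${slug}`;
  return {
    dir,
    analysis: `${dir}/analysis.md`,
    visualization: `${dir}/${VIZ_FILE[category]}`,
    screenshot: `${dir}/screenshot.png`,
  };
}
```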