> Analyze heatmap CSV exports and screenshots to generate prioritized UX research reports with cross-validated findings.
$JAAN_LEARN_DIR/jaan-to-ux-heatmap-analyze.learn.md - Past lessons (loaded in Pre-Execution)
$JAAN_TEMPLATES_DIR/jaan-to-ux-heatmap-analyze.template.md - Report template
$JAAN_CONTEXT_DIR/tech.md - Tech stack context (helpful for CSS selector resolution)
${CLAUDE_PLUGIN_ROOT}/docs/extending/language-protocol.md - Language resolution protocol

Analysis Request: $ARGUMENTS
Accepted file types: heatmap CSV exports, page screenshots, and optionally the page HTML.
MANDATORY — Read and execute ALL steps in: ${CLAUDE_PLUGIN_ROOT}/docs/extending/pre-execution-protocol.md
Skill name: ux-heatmap-analyze
Execute: Step 0 (Init Guard) → A (Load Lessons) → B (Resolve Template) → C (Offer Template Seeding)
Read and apply language protocol: ${CLAUDE_PLUGIN_ROOT}/docs/extending/language-protocol.md
Override field for this skill: language_ux-heatmap-analyze
Parse $ARGUMENTS to identify file paths. For each CSV file, read the first 15 lines to detect its format (a detection sketch follows the format descriptions below):
Format A — Aggregated Element Data:
If lines contain metadata keys like Project name, Date range, Page views, Total clicks, or the data table has columns Rank, Button, Clicks, % of clicks:
Format B — Raw Coordinate Data:
If column headers contain timestamp, session_id, x_coordinate, y_coordinate, viewport_width:
Format Unknown: If neither pattern matches:
"Cannot auto-detect CSV format. What type of data is this? [1] Aggregated - Ranked elements with click counts (e.g., analytics tool export) [2] Raw events - Individual click events with coordinates and timestamps [3] Other - Let me describe the format"
For each file, validate:
If any required file is missing or unparseable, report the error and ask for correction.
Build and display a data summary for user confirmation:
DATA SUMMARY
════════════════════════════════════════
Format: {Aggregated | Raw Events}
Files:
CSV: {filename} ({row_count} elements, {total_interactions} interactions)
Screenshot: {filename} ({width}x{height}px)
HTML: {filename | "Not provided"}
Metadata:
Date Range: {date_range}
URL: {url_pattern}
Device: {desktop | mobile | both}
Behavior Segment: {quick backs | excessive scrolling | all traffic}
Page Views: {count}
Total Interactions: {count}
{If multiple CSV files:}
Additional Files:
CSV 2: {filename} ({details})
CSV 3: {filename} ({details})
════════════════════════════════════════
Important for Format A (aggregated data): State clearly what analysis IS and IS NOT possible:
Analysis scope: Element click distribution, CTA effectiveness, navigation patterns, desktop/mobile comparison, behavior segment analysis, HTML element mapping.
Not available with this data format: Rage click detection, scroll depth analysis, hesitation timing, session flow reconstruction (these require raw event data with timestamps and coordinates).
"What is the primary question this analysis should answer? [1] Find friction - Identify frustration points and conversion barriers [2] Optimize CTA - Evaluate button/link placement and visibility [3] Compare - Desktop vs mobile or behavior segment differences [4] Other - Let me describe my specific question"
If "Other" selected, ask: "What specific question should this analysis answer?"
If multiple CSV files provided (different devices or behavior segments), ask:
"You provided {N} CSV files. How should they be analyzed? [1] Compare - Side-by-side comparison across files [2] Separate - Independent analysis for each file [3] Combined - Merge all data into single analysis"
IMPORTANT: Read screenshot files BEFORE the CSV analysis to avoid anchoring bias.
For each screenshot, read the image file and analyze:
Prompt structure (image placed first, then analysis request):
Record each observation with:
Note on large screenshots: If image height exceeds 5000px, note that findings for below-fold sections may have reduced confidence. Very tall captures (>10000px) should be flagged as "reduced vision confidence for lower page sections."
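A minimal sketch of this height check, assuming Pillow is available; the thresholds mirror the note above:

```python
# Minimal sketch: flag very tall screenshots before visual analysis.
from PIL import Image  # Pillow, assumed available

def vision_confidence_note(path):
    _, height = Image.open(path).size
    if height > 10000:
        return "reduced vision confidence for lower page sections"
    if height > 5000:
        return "below-fold findings may have reduced confidence"
    return None
```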
Pareto analysis: Calculate what percentage of total clicks the top 10% of elements receive. If the top 10% captures >60% of clicks, there is high click concentration (see the sketch after this list).
Top elements breakdown: List top 15-20 elements by click count. For each:
Low-engagement elements: Identify elements below 0.5% of total clicks. These may indicate dead elements or poor visibility.
Element type distribution: Group all elements by inferred type and calculate aggregate clicks per type:
Behavior segment analysis (if metadata includes behavior flags):
Multi-file comparison (if multiple CSVs):
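A minimal sketch of the Pareto and low-engagement checks described above; `elements` is a hypothetical list of (label, clicks) pairs parsed from the CSV:

```python
# Minimal sketch of the click-concentration and low-engagement checks above.
def click_concentration(elements):
    ranked = sorted(elements, key=lambda item: item[1], reverse=True)
    total = sum(clicks for _, clicks in ranked) or 1
    top_n = max(1, len(ranked) // 10)                     # top 10% of elements
    top_share = sum(clicks for _, clicks in ranked[:top_n]) / total
    high_concentration = top_share > 0.60                 # >60% of clicks => high concentration
    low_engagement = [label for label, clicks in ranked if clicks / total < 0.005]
    return top_share, high_concentration, low_engagement
```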
For the top 30 elements from the CSV:
Extract the CSS selector from the CSV
Search the HTML for matching elements
For each match, extract:
href, aria-label, data-*, title attributes
Build a mapping table (a selector-lookup sketch follows this step):
ELEMENT MAPPING
════════════════════════════════════════
Rank | Clicks | % | Element Description
─────┼────────┼───────┼──────────────────────
1 | 1,444 | 9.73% | Next arrow (carousel navigation)
2 | 1,290 | 8.69% | Previous arrow (carousel navigation)
3 | 528 | 3.56% | Thumbnail image (content card)
...
════════════════════════════════════════
Dead element detection: Search HTML for interactive elements (buttons, links, inputs) that appear in the DOM but have zero or near-zero clicks in the CSV. These may indicate:
Opaque selector handling: If CSS selectors contain only generated class names (e.g., css-1a2b3c, sc-dkzDqf) with no semantic meaning:
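A minimal sketch of the selector-to-element mapping and dead-element detection described above, assuming BeautifulSoup; the extracted attributes mirror the list above (data-* attributes omitted for brevity):

```python
# Minimal sketch: resolve CSV selectors against the page HTML and list unclicked interactive elements.
from bs4 import BeautifulSoup

def map_selectors(html, top_selectors):
    soup = BeautifulSoup(html, "html.parser")
    mapping, matched = {}, set()
    for selector in top_selectors[:30]:                   # top 30 elements from the CSV
        elements = soup.select(selector)
        matched.update(id(el) for el in elements)
        mapping[selector] = [
            {"text": el.get_text(strip=True),
             "href": el.get("href"),
             "aria-label": el.get("aria-label"),
             "title": el.get("title")}
            for el in elements
        ]
    # Interactive elements in the DOM never matched by a clicked selector: dead-element candidates
    dead_candidates = [
        el for el in soup.select("a, button, input, select, textarea")
        if id(el) not in matched
    ]
    return mapping, dead_candidates
```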
Pass 1 — Collect Candidate Findings:
Gather all observations from Steps 4, 5, and 6 into a candidate list. For each finding, note its source:
Pass 2 — Cross-Reference and Score:
For each candidate finding, check if it is supported by multiple sources (a scoring sketch follows the table below):
| Validation Status | Criteria | Confidence Range |
|---|---|---|
| Corroborated | Finding supported by 2+ sources | 0.85 — 0.95 |
| Single-source | Finding from one source only | 0.70 — 0.80 |
| Contradicted | Sources disagree | Flag for investigation |
Examples:
Discard findings below 0.70 confidence. Flag all contradictions for explicit mention in report.
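A minimal scoring sketch for Pass 2, following the table above; `sources` is the set of evidence types behind a finding, e.g. {"csv", "screenshot", "html"}:

```python
# Minimal sketch of the cross-validation scoring rules above.
def score_finding(sources, contradicted=False):
    if contradicted:
        return None, "contradicted"       # flag for investigation; no confidence assigned
    if len(sources) >= 2:
        return 0.90, "corroborated"       # 0.85-0.95 band
    return 0.75, "single-source"          # 0.70-0.80 band; findings below 0.70 are discarded
```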
Organize validated findings by severity (a classification sketch follows the level definitions):
Priority = Likelihood × Impact
Severity Levels:
- CRITICAL (≥0.95): High likelihood + High impact → Immediate action needed
- HIGH (≥0.90): High in one dimension → Sprint planning
- MEDIUM (≥0.85): Moderate both → Backlog consideration
- LOW (≥0.80): Low both → Monitor
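A minimal sketch of this priority-to-severity mapping; likelihood and impact are hypothetical 0-1 estimates attached to each finding:

```python
# Minimal sketch of the severity bands defined above.
def severity(likelihood, impact):
    priority = likelihood * impact        # Priority = Likelihood x Impact
    if priority >= 0.95:
        return "CRITICAL"                 # immediate action needed
    if priority >= 0.90:
        return "HIGH"                     # sprint planning
    if priority >= 0.85:
        return "MEDIUM"                   # backlog consideration
    if priority >= 0.80:
        return "LOW"                      # monitor
    return "UNPRIORITIZED"                # below the bands defined above
```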
Draft action summary: top 3-5 actions as bullets (action, impact, one-line evidence).
For each finding, prepare:
For each high-confidence finding (≥0.85), draft one A/B test or UX research idea:
Present the analysis plan summary:
ANALYSIS SUMMARY
════════════════════════════════════════
Data: {format} | {device} | {behavior segment}
Goal: {analysis goal}
Sources: CSV ✓ | Screenshot ✓ | HTML {✓ | ✗}
VALIDATED FINDINGS: {total}
Corroborated: {n} (high confidence)
Single-source: {n} (medium confidence)
Contradicted: {n} (flagged)
TOP FINDINGS:
1. {finding_title} — {severity} ({confidence})
2. {finding_title} — {severity} ({confidence})
3. {finding_title} — {severity} ({confidence})
REPORT SECTIONS:
✓ Action Summary (top 3-5 actions)
✓ {N} Findings & Actions (insight + action + evidence)
✓ {N} Test Ideas (A/B tests + UX research)
{✓ | ✗} Device/Segment Comparison
✓ Element Mapping Table
✓ Limitations & Method (footer)
════════════════════════════════════════
"Proceed with full report generation? [y/edit/n]"
Do NOT proceed to Phase 2 without explicit approval.
Use template from: $JAAN_TEMPLATES_DIR/jaan-to-ux-heatmap-analyze.template.md
Fill all template sections. Report must be insightful, practical, and actionable — lead with why it matters and what to do. Minimize descriptive narrative.
Embed analyzed heatmap screenshots in the report header using Markdown image syntax. Resolve paths using the asset embedding protocol below.
Before preview, verify every item: all template sections are filled, screenshot embeds resolve, and the Markdown syntax is valid. If any check fails, revise the report before preview.
Show the complete report in conversation.
"Here's the report preview. Write to output? [y/n]"
If approved, set up the output structure:
source "${CLAUDE_PLUGIN_ROOT}/scripts/lib/id-generator.sh"
# Define subdomain directory
SUBDOMAIN_DIR="$JAAN_OUTPUTS_DIR/ux/heatmap"
mkdir -p "$SUBDOMAIN_DIR"
# Generate next ID
NEXT_ID=$(generate_next_id "$SUBDOMAIN_DIR")
# Create folder and file paths (slug from URL/page name)
slug="{lowercase-hyphenated-page-name-max-50-chars}"
OUTPUT_FOLDER="${SUBDOMAIN_DIR}/${NEXT_ID}-${slug}"
MAIN_FILE="${OUTPUT_FOLDER}/${NEXT_ID}-heatmap-${slug}.md"
Output Configuration
- ID: {NEXT_ID}
- Folder: $JAAN_OUTPUTS_DIR/ux/heatmap/{NEXT_ID}-{slug}/
- Main file: {NEXT_ID}-heatmap-{slug}.md
If screenshot paths were provided as input:
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/asset-embedding-reference.md for the asset resolution protocol (path detection, copy rules, markdown embedding).
Source ${CLAUDE_PLUGIN_ROOT}/scripts/lib/asset-handler.sh. For each screenshot path: check is_jaan_path — if inside $JAAN_*, reference in-place; if external, ask user before copying. Use resolve_asset_path for markdown-relative paths in the report.
mkdir -p "$OUTPUT_FOLDER"
cat > "$MAIN_FILE" <<'EOF'
{generated heatmap analysis with Executive Summary}
EOF
source "${CLAUDE_PLUGIN_ROOT}/scripts/lib/index-updater.sh"
add_to_index \
"$SUBDOMAIN_DIR/README.md" \
"$NEXT_ID" \
"${NEXT_ID}-${slug}" \
"{Page/URL Title}" \
"{1-2 sentence summary: heatmap analysis findings and top priority issues}"
✓ Heatmap report written to: $JAAN_OUTPUTS_DIR/ux/heatmap/{NEXT_ID}-{slug}/{NEXT_ID}-heatmap-{slug}.md
✓ Index updated: $JAAN_OUTPUTS_DIR/ux/heatmap/README.md
"Any feedback or improvements needed? [y/n]"
If yes:
"How should I handle this? [1] Fix now - Update this report [2] Learn - Save for future analyses [3] Both - Fix now AND save lesson"
If "Learn" or "Both" is selected: /jaan-to:learn-add ux-heatmap-analyze "{feedback}"