npx claudepluginhub parhumm/jaan-to --plugin jaan-to

This skill uses the workspace's default tool permissions.
Synthesize UX research findings into themed insights, executive summaries, and prioritized recommendations.
- $JAAN_LEARN_DIR/jaan-to-ux-research-synthesize.learn.md - Past lessons (loaded in Pre-Execution)
- $JAAN_TEMPLATES_DIR/jaan-to-ux-research-synthesize.template.md - Synthesis report template
- $JAAN_CONTEXT_DIR/config.md - Project configuration (if applicable)
- ${CLAUDE_PLUGIN_ROOT}/docs/extending/language-protocol.md - Language resolution protocol
- ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-templates.md - Theme cards, recommendation format, executive brief template, quality checklist

Study Name & Data Sources: $ARGUMENTS
Accepted data sources:
MANDATORY — Read and execute ALL steps in: ${CLAUDE_PLUGIN_ROOT}/docs/extending/pre-execution-protocol.md
Skill name: ux-research-synthesize
Execute: Step 0 (Init Guard) → A (Load Lessons) → B (Resolve Template) → C (Offer Template Seeding)
Read and apply language protocol: ${CLAUDE_PLUGIN_ROOT}/docs/extending/language-protocol.md
Override field for this skill: language_ux-research-synthesize
Parse $ARGUMENTS to identify the study name and data sources.
For each data source:
.txt, .md, .docx, or .pdf files.
Build and display the data summary.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-synthesize-reference.md, section "Data Source Summary Format", for the display format.
If any file is missing or unparseable, report error and ask for correction.
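A minimal sketch of this validation, assuming the parsed sources sit in a hypothetical shell array named DATA_SOURCES (in practice the check may happen directly via the Read tool rather than the shell):

# Sketch: validate each parsed data source (DATA_SOURCES is an assumed array of file paths)
for src in "${DATA_SOURCES[@]}"; do
  if [ ! -f "$src" ]; then
    echo "Error: missing file: $src"
  else
    case "$src" in
      *.txt|*.md|*.docx|*.pdf) echo "OK: $src" ;;
      *) echo "Error: unsupported format: $src" ;;
    esac
  fi
done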
Present three synthesis modes: [1] Speed, [2] Standard (recommended), [3] Cross-Study.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-synthesize-reference.md, section "Synthesis Mode Descriptions", for the full mode details to present to the user.
Ask: "Choose mode: [1/2/3]"
Store selection as {synthesis_mode}.
Show expected deliverables:
"Mode: {synthesis_mode} Time estimate: {time_estimate} Output: {deliverables_description}"
Ask: "What are your research questions? (1-3 max)"
If unclear, provide common templates from reference material.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-synthesize-reference.md, section "Research Question Templates", for the template list.
Confirm 1-3 research questions maximum.
Critical: Every theme must tie back to these questions.
For Speed mode: Read the first 200 lines of each transcript.
For Standard/Cross-Study: Read complete transcripts.
Use Task tool to launch AI familiarization agent:
Prompt:
Read all provided transcripts. Identify:
1. Recurring topics mentioned by multiple participants
2. Strong emotional reactions (frustration, delight, confusion)
3. Contradictory statements between participants
4. Participant language patterns and terminology
Do NOT generate themes yet — only familiarize with the data landscape.
Return:
- Summary of data landscape (5-7 sentences)
- Participant count identified
- Session types observed
- Data quality notes (any transcription issues, missing context)
Human reviews AI summary:
Use Task tool for AI coding agent:
Prompt:
Generate descriptive codes for meaningful data segments from the transcripts.
Use both semantic (surface) and latent (underlying) codes.
For each code provide:
- Label (2-4 words)
- Brief definition
- Supporting quote with participant ID
- Line reference
Return codebook with 30-40 codes maximum.
Format: Table with Code | Definition | Supporting Quote | Participant ID | Line Reference
Human validates codebook:
Display AI-generated codebook in conversation.
Check for:
Ask: "Codebook Review
{display codebook}
Approve this codebook or refine? [Approve/Refine/Restart]"
If Refine: Ask "What should be changed?" and regenerate.
If Restart: Start Step 5 over with a different approach.
If Approve: Continue to Step 6.
Ask: "Should themes be generated: [1] Inductively - From data patterns (let themes emerge organically) [2] Deductively - From framework (e.g., UX mental models, needs, pain points, workarounds) [3] Hybrid - Start with framework + add emergent codes ← Recommended"
Use Task tool for AI theme clustering:
Prompt:
Group the approved codes into 4-6 candidate themes.
For each theme provide:
1. Descriptive name
2. Codes belonging to theme (list all relevant codes)
3. 2-3 sentence narrative explaining the pattern
4. Tensions or contradictions within theme (if any)
Be explicit about reasoning for each grouping.
Human reviews themes:
Display candidate themes in conversation.
Check for:
Rename themes from descriptive to interpretive:
Ask: "Theme Review
{display themes with codes}
Theme count: {N} {optimal: 3-8 | ⚠️ flag if outside range}
Approve themes or refine? [Approve/Split/Merge/Rename]"
Options:
For each approved theme, compile supporting evidence:
Extract quotes - Minimum 2-3 quotes from different participants
Include context:
Build traceability matrix: Theme → Codes → Quotes → Participant IDs
Track participant contribution:
Display participant coverage matrix per theme and flag imbalanced coverage (>25% from single participant).
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-synthesize-reference.md, sections "Participant Coverage Matrix Format" and "Imbalanced Coverage Handling", for the display format and remediation options.
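For illustration only (the matrix is normally assembled from the traceability data in context, not from files), the >25% flag could be computed like this, with THEME_COUNTS as a hypothetical map of participant ID to quote count for one theme:

# Sketch: flag imbalanced coverage for a single theme (THEME_COUNTS values are made-up examples)
declare -A THEME_COUNTS=( [P1]=5 [P2]=2 [P3]=1 )
total=0
for n in "${THEME_COUNTS[@]}"; do total=$((total + n)); done
for pid in "${!THEME_COUNTS[@]}"; do
  share=$(( 100 * ${THEME_COUNTS[$pid]} / total ))
  if [ "$share" -gt 25 ]; then
    echo "⚠️ Imbalanced: $pid contributes ${share}% of quotes for this theme"
  fi
done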
For each theme, apply the Nielsen severity framework (0-4 scale). Ask the user to rate severity for each theme.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-synthesize-reference.md, section "Nielsen Severity Framework (0-4 Scale)", for the full rating scale descriptions.
Calculate the priority score as Severity × Frequency.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-synthesize-reference.md, section "Priority Score Calculation", for the formula.
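A minimal sketch of that calculation, assuming the reference formula is a straight product of the 0-4 severity rating and the frequency as a percentage of participants (the authoritative formula is in the reference doc):

# Sketch: priority score as Severity x Frequency (exact formula lives in the reference doc)
SEVERITY=3        # Nielsen rating, 0-4
FREQUENCY_PCT=60  # % of participants affected by the theme
PRIORITY_SCORE=$(( SEVERITY * FREQUENCY_PCT ))
echo "Priority score: $PRIORITY_SCORE"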
Apply the Impact × Effort matrix. Ask the user to estimate effort for each theme, then classify the themes into quadrants.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-synthesize-reference.md, section "Impact x Effort Matrix", for the effort estimation scale and quadrant definitions.
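For illustration, the quadrant labels used later in the summary (Quick Win, Big Bet, Fill-In, Money Pit) can be mapped from high/low impact and effort roughly as sketched below; the high/low split and thresholds are assumptions, not the reference's definitions:

# Sketch: classify a theme into an Impact x Effort quadrant (high/low convention is assumed)
classify_quadrant() {
  local impact="$1" effort="$2"   # expected values: "high" or "low"
  if [ "$impact" = "high" ] && [ "$effort" = "low" ]; then echo "Quick Win"
  elif [ "$impact" = "high" ] && [ "$effort" = "high" ]; then echo "Big Bet"
  elif [ "$impact" = "low" ] && [ "$effort" = "low" ]; then echo "Fill-In"
  else echo "Money Pit"
  fi
}
classify_quadrant high low   # prints: Quick Win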
Generate draft recommendations using the action-outcome-evidence template.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-synthesize-reference.md, section "Draft Recommendation Template", for the format and example.
Present analysis summary for approval:
SYNTHESIS ANALYSIS SUMMARY
════════════════════════════════════════
Mode: {Speed | Standard | Cross-Study}
Study: {study_name}
Research Questions:
1. {question_1}
2. {question_2}
3. {question_3}
Data Sources: {N} files
Participants: {n} total
Transcripts analyzed: {n}
THEMES IDENTIFIED: {N} themes {optimal: 3-8 | ⚠️ flag if outside}
Priority 1 (High Severity × High Frequency):
- {theme_name} — Severity {0-4}, {frequency}%, {n} participants
Quick Win / Big Bet / Fill-In / Money Pit: {quadrant}
Priority 2:
- {theme_name} — Severity {0-4}, {frequency}%, {n} participants
{quadrant}
Priority 3:
- {theme_name} — Severity {0-4}, {frequency}%, {n} participants
{quadrant}
QUALITY CHECKS:
✓ Theme count: {N} (optimal: 3-8)
✓ Participant coverage: {Balanced | ⚠️ Imbalanced}
✓ Evidence traceability: {Complete | ⚠️ Gaps in Theme X}
✓ Research questions addressed: {N}/{total}
{⚠️ Any warnings or issues}
REPORT SECTIONS TO GENERATE:
✓ Executive Summary (1 page max)
✓ {N} Themed Findings with evidence
✓ Prioritized Recommendations ({N} total)
✓ Methodology Note with limitations
{✓ Appendix (if Standard+ mode)}
════════════════════════════════════════
"Proceed with synthesis report generation? [y/edit/n]"
Do NOT proceed to Phase 2 without explicit approval.
If edit: Ask "What should be changed?" and return to the appropriate step.
If n: Stop and ask for next steps.
Source ID generator utility:
source "${CLAUDE_PLUGIN_ROOT}/scripts/lib/id-generator.sh"
Generate sequential ID and output paths:
# Define subdomain directory
SUBDOMAIN_DIR="$JAAN_OUTPUTS_DIR/ux/research"
mkdir -p "$SUBDOMAIN_DIR"
# Generate next ID
NEXT_ID=$(generate_next_id "$SUBDOMAIN_DIR")
# Generate slug from study name
slug=$(echo "{study_name}" | tr '[:upper:]' '[:lower:]' | tr ' ' '-' | sed 's/[^a-z0-9-]//g' | cut -c1-50)
# Create folder and file paths
OUTPUT_FOLDER="${SUBDOMAIN_DIR}/${NEXT_ID}-${slug}"
MAIN_FILE="${OUTPUT_FOLDER}/${NEXT_ID}-synthesis-${slug}.md"
EXEC_FILE="${OUTPUT_FOLDER}/${NEXT_ID}-exec-brief-${slug}.md"
Preview output configuration:
"Output Configuration
- ID: {NEXT_ID}
- Folder: $JAAN_OUTPUTS_DIR/ux/research/{NEXT_ID}-{slug}/
- Main file: {NEXT_ID}-synthesis-{slug}.md
- Exec brief: {NEXT_ID}-exec-brief-{slug}.md"
Use template from: $JAAN_TEMPLATES_DIR/jaan-to-ux-research-synthesize.template.md
Fill all template sections:
Structure:
For each theme (ordered by Priority Score descending):
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-templates.md, section "Theme Card Structure", for the full theme card template and priority badge definitions.
For each theme with actionable recommendation:
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-templates.md, section "Recommendation Format (Problem-Solution)", for the full INSIGHT/SO WHAT/NOW WHAT template.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-synthesize-reference.md, sections "Methodology Note Structure" and "Appendix Sections", for detailed content requirements.
Auto-generate 1-page standalone summary from main report.
Extract from main report:
Constraints: Max 1 page (~300-400 words), no methodology/raw data/jargon, standalone.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-templates.md, section "Executive Brief Format", for the full markdown template.
Apply the full quality checklist before preview. If any check fails, revise report before preview.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-templates.md, section "Quality Checklist (Pre-Write Validation)", for the complete checklist covering Executive Summary, Themes, Recommendations, Evidence & Traceability, Methodology, and Cross-Study checks.
Show both reports in conversation:
"Preview: Synthesis Outputs
{display full main report}
{display executive brief}
Write these outputs? [y/n]"
If n: Ask "What should be changed?" and return to Step 10/11
If approved, write files:
mkdir -p "$OUTPUT_FOLDER"
cat > "$MAIN_FILE" <<'EOF'
{generated synthesis report with Executive Summary, Key Findings, Recommendations, Methodology, Appendix}
EOF
cat > "$EXEC_FILE" <<'EOF'
{generated 1-page executive brief}
EOF
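As an optional post-write sanity check (a sketch, not part of the required flow), the executive brief can be compared against its ~300-400 word constraint with a rough word count:

# Sketch: rough length check for the exec brief (450 is an arbitrary buffer above the 400-word target)
WORD_COUNT=$(wc -w < "$EXEC_FILE")
if [ "$WORD_COUNT" -gt 450 ]; then
  echo "⚠️ Executive brief is ${WORD_COUNT} words; consider trimming toward the 300-400 word target."
fi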
source "${CLAUDE_PLUGIN_ROOT}/scripts/lib/index-updater.sh"
# Extract 1-2 sentence summary from Executive Summary section
EXEC_SUMMARY="{extract first 1-2 sentences from Executive Summary of main report}"
add_to_index \
"$SUBDOMAIN_DIR/README.md" \
"$NEXT_ID" \
"${NEXT_ID}-${slug}" \
"{Study Name}" \
"$EXEC_SUMMARY"
"✓ Synthesis report written to: $JAAN_OUTPUTS_DIR/ux/research/{NEXT_ID}-{slug}/{NEXT_ID}-synthesis-{slug}.md ✓ Executive brief written to: $JAAN_OUTPUTS_DIR/ux/research/{NEXT_ID}-{slug}/{NEXT_ID}-exec-brief-{slug}.md ✓ Index updated: $JAAN_OUTPUTS_DIR/ux/research/README.md"
"Any feedback or improvements needed? [y/n]"
If yes:
"How should I handle this? [1] Fix now - Update this synthesis [2] Learn - Save for future syntheses [3] Both - Fix now AND save lesson"
Options:
If Learn or Both: /jaan-to:learn-add ux-research-synthesize "{feedback}"
If Fix now or Both: update the written outputs under the $JAAN_OUTPUTS_DIR path.
Reference: See ${CLAUDE_PLUGIN_ROOT}/docs/extending/ux-research-synthesize-reference.md, section "Definition of Done Checklist", for the full 15-point checklist.