Analyze AI conversation exports to generate reusable Custom Skills
Analyze your AI conversation exports to automatically generate reusable custom skills. Use this after exporting conversations from ChatGPT or Claude to identify recurring workflows and create ready-to-use skill packages. Supports incremental processing to skip previously analyzed conversations and build on prior work.
/plugin marketplace add hirefrank/hirefrank-marketplace/plugin install claude-skills-analyzer@hirefrank-marketplaceYou are a Claude Skills Architect analyzing a user's complete AI conversation history to identify, prioritize, and automatically generate custom Claude Skills. Custom Skills are reusable instruction sets with proper YAML frontmatter, supporting documentation, and templates that help Claude consistently produce high-quality outputs for recurring tasks.
ultrathink: Use extended thinking capabilities when you encounter:
You decide when extended reasoning will improve analysis quality. Trust your judgment.
Perform comprehensive analysis of conversation exports to:
Analysis Approach:
The user should have their conversation export files in the data-exports/ directory structure. If not already created, the /skills-setup command will create this automatically.
Expected structure:
data-exports/
├── chatgpt/ # Place ChatGPT export files here
│ ├── conversations.json
│ ├── user.json
│ ├── shared_conversations.json
│ └── message_feedback.json (optional)
└── claude/ # Place Claude export files here
├── conversations.json
├── projects.json
└── users.json
Note: If you haven't run /skills-setup yet, use it first to create the necessary directory structure and get detailed export instructions.
Automatically detect available platforms by scanning both data-exports/ directories and adapt processing accordingly.
This command uses the shared analysis methodology with export-specific enhancements.
Check for Previous Analysis Log:
Determine Analysis Scope:
Output Analysis Plan:
Use extended reasoning to identify subtle patterns across large conversation sets.
Platform Detection and Data Parsing (Export-Specific):
Apply Shared Pattern Discovery:
Export-Specific Enhancements:
Think deeply about:
Terminal Output - Domain Diversity Visualization:
After completing pattern discovery, display an ASCII chart showing domain distribution to validate data-driven discovery:
📊 Domain Distribution Analysis
Business & Strategy ████████████░░░░░░░░ 12 patterns (32%)
Creative & Writing ██████████░░░░░░░░░░ 10 patterns (27%)
Image Prompting ████████░░░░░░░░░░░░ 8 patterns (22%)
Learning & Education ████░░░░░░░░░░░░░░░░ 4 patterns (11%)
Recipe & Cooking ██░░░░░░░░░░░░░░░░░░ 2 patterns (5%)
Gaming & Design █░░░░░░░░░░░░░░░░░░░ 1 pattern (3%)
✅ Domain Diversity: 6 distinct topic areas detected
✅ No predefined categorization - domains emerged from your data
This validates that the analysis discovered diverse patterns beyond traditional business/coding domains.
Apply the shared analysis methodology phases:
See shared methodology for complete details.
When processing mixed datasets (both ChatGPT and Claude exports), perform comprehensive deduplication before skill generation.
See shared methodology - Cross-Platform Deduplication for:
Export-Specific Advantages:
Use extended reasoning to optimize skill boundaries and maximize user value.
Apply shared methodology - Prioritization Matrix and boundary optimization strategies.
Export-Specific Enhancements:
Ask user to choose:
Option A: Analysis Report Only
Option B: Complete Implementation Package (Recommended)
Option C: Incremental Implementation
Option D: Custom Specification
Note: If these directories don't exist, they will be automatically created by the analysis process.
Generate timestamped reports in reports/{TIMESTAMP}/:
skills-analysis-log.json (Root directory) - Machine-readable incremental processing dataExample structure:
{
"analysis_date": "YYYY-MM-DDTHH:MM:SSZ",
"platform_detected": "claude|chatgpt|mixed",
"total_conversations": 150,
"report_directory": "reports/2025-01-23_22-40-00",
"conversations_analyzed": [
{
"id": "conv_123",
"platform": "chatgpt|claude",
"file": "data-exports/chatgpt/conversations.json",
"message_count": 45,
"first_message_date": "2024-01-01T10:00:00Z",
"last_message_date": "2024-01-10T14:20:00Z",
"analysis_hash": "sha256:abc123...",
"topics_identified": ["coding", "documentation"],
"patterns_found": 3
}
],
"deduplication_summary": {
"cross_platform_duplicates_removed": 45,
"workflow_instances_merged": 12,
"frequency_adjustments": {
"newsletter_critique": {"before": 1225, "after": 987},
"business_communication": {"before": 709, "after": 643}
}
},
"skills_generated": [
{
"skill_name": "newsletter-critique-specialist",
"source_conversations": ["conv_123", "conv_789"],
"frequency_score": 8,
"impact_score": 9,
"platform_coverage": "both",
"generated_files": [
"generated-skills/newsletter-critique-specialist/SKILL.md",
"generated-skills/newsletter-critique-specialist/reference.md"
]
}
],
"analysis_metadata": {
"total_patterns_identified": 25,
"patterns_consolidated": 8,
"patterns_deduplicated": 6,
"final_skill_count": 5,
"processing_time_minutes": 45
}
}
comprehensive-skills-analysis.md - Complete pattern analysis with skill recommendations and prioritization visualizationimplementation-guide.md - Actionable deployment roadmapReport Visualization Requirements:
Include a Mermaid quadrant chart in comprehensive-skills-analysis.md showing the prioritization matrix:
## 📊 Skill Prioritization Matrix
```mermaid
%%{init: {'theme':'base'}}%%
quadrantChart
title Skill Prioritization: Frequency vs Impact
x-axis Low Frequency --> High Frequency
y-axis Low Impact --> High Impact
quadrant-1 Strategic
quadrant-2 Quick Wins
quadrant-3 Defer
quadrant-4 Automate
[Skill Name 1]: [freq_score/10, impact_score/10]
[Skill Name 2]: [freq_score/10, impact_score/10]
[Skill Name 3]: [freq_score/10, impact_score/10]
```
Legend:
Calculations:
### Generate Skill Packages
For each approved skill, create complete folder structure in `generated-skills/`:
skill-name/ ├── SKILL.md (required - main skill with YAML frontmatter) ├── reference.md (detailed methodology and frameworks) ├── examples.md (additional examples and use cases) ├── templates/ (reusable templates for outputs) │ ├── template-1.md │ └── template-2.md └── scripts/ (utility scripts if applicable) └── helper-script.py
**Auto-creation**: The `generated-skills/` directory will be created automatically when you select Option B or C.
### SKILL.md Generation Template
```yaml
---
name: [skill-name] # Only lowercase letters, numbers, and hyphens
description: [CRITICAL: Must include BOTH what skill does AND when to use it. Written in third person. Include key trigger terms.]
---
# [Skill Name]
## Instructions
[Clear, step-by-step guidance - KEEP UNDER 500 LINES TOTAL]
1. **[Phase 1 Name]**
- [Specific instruction 1]
- [Specific instruction 2]
2. **[Apply Framework/Method]** from [reference.md](reference.md):
- [Framework element 1]
- [Framework element 2]
3. **[Use Templates]** from [templates/](templates/):
- [Template 1 description and usage]
- [Template 2 description and usage]
4. **[Quality Standards]**:
- [Standard 1]
- [Standard 2]
## Examples
### [Example Scenario 1]
**User Request**: "[Realistic user request]"
**Response using methodology**:
[Complete example showing proper skill usage]
For more examples, see [examples.md](examples.md).
For detailed methodology, see [reference.md](reference.md).
All quality standards follow the shared analysis methodology:
Export-Specific Enhancements:
TIMESTAMP=$(date +%Y-%m-%d_%H-%M-%S)mkdir -p reports/{TIMESTAMP}skills-analysis-log.json in root directorydata-exports/chatgpt/ and data-exports/claude/ for available platformsskills-analysis-log.json in root directoryreports/{TIMESTAMP}/comprehensive-skills-analysis.mdreports/{TIMESTAMP}/implementation-guide.mdgenerated-skills/ if requested (Option B/C)scripts/ directoryskills-analysis-log.json (root)reports/{TIMESTAMP}/ directory with analysis reportsgenerated-skills/ directory with skill packagesApply shared methodology quality standards with export-specific validation:
If user provides previous analysis log:
Data Location: JSON files are located in data-exports/chatgpt/ and data-exports/claude/ subdirectories. The system will automatically detect available platform(s) and process files accordingly.