Extract prompts from session transcripts to build a personal prompt library and learn from successful patterns.

## Installation

```
/plugin marketplace add melodic-software/claude-code-plugins
/plugin install claude-code-observability@melodic-software
```

## Usage

```
[--successful-only] [--category CATEGORY] [--days N] [--export FILE] [--min-length N]
```
## Arguments

| Argument | Description |
|---|---|
| (no args) | Extract all prompts from recent sessions |
| `--successful-only` | Only extract prompts that led to successful outcomes |
| `--category CATEGORY` | Filter by task category (code-review, refactor, feature, bug, etc.) |
| `--days N` | Extract from sessions in the last N days (default: 30) |
| `--export FILE` | Export extracted prompts to a markdown file |
| `--min-length N` | Minimum prompt length in characters (default: 20) |
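For example (the slash-command name depends on how the plugin registers it; `/user-config:extract-prompts` below is illustrative):

```
/user-config:extract-prompts --successful-only --days 14 --export prompts.md
/user-config:extract-prompts --category feature --min-length 50
```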
A prompt is considered successful when the resulting session shows:
| Indicator | Weight |
|---|---|
| Task completed without errors | High |
| No user corrections required | High |
| No "let me try again" from assistant | Medium |
| No follow-up "that's wrong" from user | Medium |
| Minimal retries/iterations | Medium |
| Explicit user approval ("thanks", "perfect") | Low |
The command scans `~/.claude/projects/` for session transcripts modified within the requested time range:

```python
from datetime import datetime, timezone, timedelta
from pathlib import Path

claude_dir = Path.home() / ".claude"
projects_dir = claude_dir / "projects"

# Get sessions from the specified time range (default: last 30 days)
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

sessions = []
for project_dir in projects_dir.iterdir():
    if not project_dir.is_dir():
        continue
    for session_file in project_dir.glob("*.jsonl"):
        # Skip subagent transcripts; only analyze top-level sessions
        if session_file.name.startswith("agent-"):
            continue
        mtime = datetime.fromtimestamp(session_file.stat().st_mtime, tz=timezone.utc)
        if mtime > cutoff:
            sessions.append({
                "path": session_file,
                "project": project_dir.name,
                "modified": mtime,
            })
```
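A quick sanity check of the scan (assumes the `sessions` list from above):

```python
# Sort newest-first and show what the scan found
sessions.sort(key=lambda s: s["modified"], reverse=True)
print(f"Found {len(sessions)} session(s) in range")
for s in sessions[:5]:
    print(f"  {s['modified']:%Y-%m-%d}  {s['project']}  {s['path'].name}")
```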
Prompts are then extracted from each transcript, pairing every user message with the assistant response that follows it:

```python
import json

def _text_of(content):
    """Flatten message content to plain text. Depending on the record,
    `message.content` may be a string or a list of content blocks."""
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        return " ".join(
            block.get("text", "")
            for block in content
            if isinstance(block, dict) and block.get("type") == "text"
        )
    return ""

def extract_prompts(session_path, min_length=20):
    """Extract user prompts from a session, with context for scoring."""
    prompts = []
    messages = []
    with open(session_path) as f:
        for line in f:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue
            record_type = record.get("type", "")
            if record_type in ("user", "assistant"):
                content = _text_of(record.get("message", {}).get("content", ""))
                messages.append({"role": record_type, "content": content})

    # Extract prompts with context
    for i, msg in enumerate(messages):
        if msg["role"] == "user" and len(msg["content"]) >= min_length:
            # The assistant response (if any) immediately follows the prompt
            response = messages[i + 1]["content"] if i + 1 < len(messages) else ""
            prompts.append({
                "prompt": msg["content"],
                "response_preview": response[:200],
                "position": i,
                # Pass only the messages *after* the prompt so the prompt's
                # own wording isn't mistaken for a user correction
                "successful": analyze_success(msg["content"], response, messages[i + 1:i + 5]),
            })
    return prompts
```
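Putting the scan and extraction together (a sketch; assumes `sessions` from above and the `analyze_success` helper defined below):

```python
all_prompts = []
for session in sessions:
    for p in extract_prompts(session["path"]):
        p["project"] = session["project"]
        p["date"] = session["modified"].date().isoformat()
        all_prompts.append(p)

successful = [p for p in all_prompts if p["successful"]]
print(f"{len(successful)}/{len(all_prompts)} prompts look successful")
```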
Success is scored heuristically, starting from 100 and adjusting for failure, correction, and approval signals:

```python
import re

def _contains_word(text, phrase):
    """Whole-word match, so e.g. "no" doesn't match "know"."""
    return re.search(rf"\b{re.escape(phrase)}\b", text) is not None

def analyze_success(prompt, response, subsequent_messages):
    """Determine if a prompt led to a successful outcome."""
    score = 100  # Start with the assumption of success

    # Check for failure indicators in the response
    # (phrases are lowercase because the text is lowercased before matching)
    failure_phrases = [
        "error", "failed", "let me try again", "sorry",
        "that didn't work", "i apologize", "mistake",
    ]
    for phrase in failure_phrases:
        if _contains_word(response.lower(), phrase):
            score -= 20

    correction_phrases = [
        "no", "wrong", "that's not", "actually",
        "i meant", "try again", "fix",
    ]
    positive_phrases = ["thanks", "perfect", "great", "works", "done"]
    for msg in subsequent_messages:
        if msg.get("role") != "user":
            continue
        content = msg.get("content", "").lower()
        # Check for user corrections in follow-up messages
        for phrase in correction_phrases:
            if _contains_word(content, phrase):
                score -= 15
        # Check for positive indicators
        for phrase in positive_phrases:
            if _contains_word(content, phrase):
                score += 10

    return score >= 70  # Threshold for "successful"
```
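A toy check of the heuristic (illustrative values):

```python
ok = analyze_success(
    "Add a retry to the fetch helper",
    "Done. I added exponential backoff to the helper.",
    [{"role": "user", "content": "Thanks, that works."}],
)
print(ok)  # True: no failure phrases, plus positive follow-up (+20)
```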
Prompts are auto-categorized by keyword matching:

```python
def categorize_prompt(prompt_text):
    """Auto-categorize a prompt by keywords."""
    categories = {
        "code-review": ["review", "check", "audit", "look at"],
        "refactor": ["refactor", "clean up", "improve", "optimize"],
        "feature": ["add", "implement", "create", "build", "new"],
        "bug": ["fix", "bug", "error", "issue", "broken"],
        "documentation": ["document", "readme", "docs", "explain"],
        "test": ["test", "spec", "coverage", "unit test"],
        "configuration": ["config", "setup", "install", "configure"],
    }
    prompt_lower = prompt_text.lower()
    # First category with a keyword hit wins, so dict order matters
    for category, keywords in categories.items():
        if any(kw in prompt_lower for kw in keywords):
            return category
    return "general"
```
# Extracted Prompts
**Date Range:** 2025-12-01 to 2025-12-30
**Sessions Analyzed:** 45
**Prompts Extracted:** 127
**Successful Prompts:** 98 (77%)
---
## High-Quality Prompts (Successful)
### Feature Development
**Prompt #1** (2025-12-28, web-app)
> Implement a user authentication system with JWT tokens. Use the
> existing User model and add login/logout endpoints. Include refresh
> token functionality and proper error handling for expired tokens.
**Success Score:** 95/100
**Why it worked:** Specific requirements, clear scope, mentions existing code
---
**Prompt #2** (2025-12-25, api-server)
> Add rate limiting middleware to all API endpoints. Use a sliding
> window algorithm with 100 requests per minute per IP. Return 429
> status with Retry-After header when limit exceeded.
**Success Score:** 90/100
**Why it worked:** Technical specificity, clear limits, expected behavior defined
---
### Code Review
**Prompt #3** (2025-12-27, claude-plugins)
> Review the authentication module in src/auth/ for security issues.
> Focus on input validation, token handling, and password storage.
> Check against OWASP top 10.
**Success Score:** 88/100
**Why it worked:** Specific focus areas, reference to standards
---
## Prompts That Needed Iteration
**Prompt #4** (2025-12-20, web-app)
> Fix the login bug
**Success Score:** 45/100
**Issues:** Too vague, required follow-up questions
**Better version:**
> Fix the login bug where users get a 500 error when submitting the
> form with special characters in the password field. The error is
> in src/auth/login.ts around line 45.
---
## Prompt Patterns
### What Works
| Pattern | Example | Success Rate |
|---------|---------|--------------|
| Specific file paths | "in src/auth/login.ts" | 89% |
| Clear acceptance criteria | "should return 200 with user data" | 85% |
| Reference existing code | "use the existing User model" | 82% |
| Mention standards | "check against OWASP" | 80% |
### What Doesn't Work
| Pattern | Example | Success Rate |
|---------|---------|--------------|
| Vague requests | "fix the bug" | 34% |
| No context | "add authentication" | 41% |
| Multiple unrelated tasks | "fix login and also add dark mode" | 38% |
With `--export FILE`, the extracted prompts are written to a markdown library file:

# My Prompt Library

Generated: 2025-12-30
Source: Claude Code session transcripts

## Feature Development Prompts

### Authentication

```text
Implement a user authentication system with JWT tokens. Use the
existing User model and add login/logout endpoints. Include refresh
token functionality and proper error handling for expired tokens.
```

Success Rate: 95% | Sessions: 3 uses

### Rate Limiting

```text
Add rate limiting middleware to all API endpoints. Use a sliding
window algorithm with 100 requests per minute per IP. Return 429
status with Retry-After header when limit exceeded.
```

Success Rate: 90% | Sessions: 2 uses

## Code Review Prompts

```text
Review the [module] in [path] for security issues. Focus on input
validation, token handling, and password storage. Check against
OWASP top 10.
```

Template variables: [module], [path]
Success Rate: 88% | Sessions: 5 uses

## Bug Fix Prompts

```text
Fix the [error type] where [symptom]. The error occurs when
[trigger condition]. The relevant code is in [file path] around
line [N]. Here's the error message: [error]
```

Template variables: [error type], [symptom], etc.
Success Rate: 82% | Sessions: 8 uses
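A minimal sketch of how the export step might write this file (`write_library` is illustrative, not the plugin's actual implementation; assumes `all_prompts` and `categorize_prompt` from above):

```python
from datetime import datetime
from pathlib import Path

FENCE = "`" * 3  # literal triple-backtick, built here for readability

def write_library(prompts, path):
    """Write successful prompts to a markdown prompt-library file."""
    lines = [
        "# My Prompt Library",
        "",
        f"Generated: {datetime.now():%Y-%m-%d}",
        "Source: Claude Code session transcripts",
        "",
    ]
    for p in prompts:
        if not p["successful"]:
            continue
        lines += [
            f"## {categorize_prompt(p['prompt'])}",
            "",
            f"{FENCE}text",
            p["prompt"],
            FENCE,
            "",
        ]
    Path(path).write_text("\n".join(lines))

write_library(all_prompts, "prompt-library.md")
```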
## Related Commands

- `/user-config:retrospective` - Full session analysis
- `/user-config:transcript-search` - Search for specific topics
- `/user-config:history` - View command history

This command uses the user-config-management skill.