This skill should be used when parsing Claude Code conversation files, reading conversation history, working with ".jsonl" conversation transcripts, extracting signal from conversation data, filtering noise entries, linking subagent files, detecting session boundaries, or understanding the Claude Code conversation storage format. Provides the JSONL schema, entry types, content block extraction rules, user message filtering logic, and subagent linking patterns needed by the retell pipeline.
Claude Code stores conversation transcripts as JSONL files in `~/.claude/projects/<encoded-path>/<uuid>.jsonl`. Each line is a JSON object with a `type` field that determines its shape and narrative value. A typical 15 MB conversation contains only ~338 KB of signal (~2.2%). The parser's job is to extract that signal deterministically, at zero token cost.
Main conversations:
~/.claude/projects/<encoded-path>/<conversation-uuid>.jsonl
Path encoding replaces `/` with `-` in the project's absolute path (note that in the example below `.` is also mapped to `-`):
`/Users/oliver/Desktop/Vault.nosync/vault` encodes to `-Users-oliver-Desktop-Vault-nosync-vault`

Subagent transcripts (same JSONL format):
<conversation-uuid>/subagents/agent-<agent-id>.jsonl
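The lookup can be sketched in a few lines of Python. This is illustrative only: `encode_project_path` and `conversation_files` are hypothetical helpers, not part of the plugin, and the sketch assumes `.` is mapped to `-` as in the example above.

```python
import re
from pathlib import Path

def encode_project_path(project_path: str) -> str:
    # The docs describe '/' -> '-'; the example above also maps '.' to '-'
    # (Vault.nosync -> Vault-nosync), so both are replaced here.
    return re.sub(r"[/.]", "-", project_path)

def conversation_files(project_path: str):
    # Hypothetical helper: list a project's main-conversation transcripts.
    root = Path.home() / ".claude" / "projects" / encode_project_path(project_path)
    return sorted(root.glob("*.jsonl"))

print(encode_project_path("/Users/oliver/Desktop/Vault.nosync/vault"))
# -Users-oliver-Desktop-Vault-nosync-vault
```

Because the encoding is lossy (see the caveat at the end of this document), decoded paths should always be validated against the filesystem.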
Compact summaries — system-generated conversation recaps from `/compact`. Filename pattern: `agent-acompact-*.jsonl`. These are NOT real subagent research. Skip them as subagent content, but they can serve as chapter bridges (they are high-quality "previously on..." summaries).
Signal entry types to extract:

| Type | Key fields | Narrative value |
|---|---|---|
| `user` | `message.content` (string or content blocks) | Human requests, reactions, pivots — drives the story |
| `assistant` | `message.content` (array of content blocks) | Responses, decisions, deliverables — the action |
| `system` with `subtype: turn_duration` | `durationMs` (milliseconds) | Pacing metadata ("after 3 minutes of research..."). Typical range: 5,000-300,000 ms. |
Noise entry types to skip:

| Type | Why skip |
|---|---|
| `progress` | Hook events, intermediate states — no narrative value |
| `file-history-snapshot` | Undo/restore snapshots — internal bookkeeping |
| `queue-operation` | Internal scheduling — never user-visible |
| `system` (other subtypes) | Metadata with no narrative content |
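The signal/noise split above can be expressed as a small line filter. A minimal sketch, assuming the entry shapes described in the tables:

```python
import json

def is_signal(entry: dict) -> bool:
    """Keep user/assistant entries plus system turn_duration entries."""
    t = entry.get("type")
    if t in ("user", "assistant"):
        return True
    return t == "system" and entry.get("subtype") == "turn_duration"

def read_signal(path: str):
    """Yield signal entries from a JSONL transcript, skipping noise lines."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            if is_signal(entry):
                yield entry
```

Filtering by `type` alone is what makes the pass deterministic and token-free: no model call is needed to discard the ~98% of bytes that carry no narrative.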
Assistant messages contain an array of content blocks. Extract in this priority:
| Block type | Fields | Extraction rule |
|---|---|---|
| `text` | `type`, `text` | Always extract — visible response to user |
| `thinking` | `type`, `thinking`, `signature` | Extract selectively — internal reasoning, reveals decision-making. Use for "behind the scenes" narrative depth, but not every thinking block is interesting. Mundane implementation details should be skipped. |
| `tool_use` | `type`, `id`, `name`, `input` | Extract `name` only — shows what actions were taken. Drop `input` (verbose). |
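Applied to one assistant message, the rules might look like this (a sketch; the output event shape is illustrative, not the parser's actual format):

```python
def extract_assistant_blocks(message: dict) -> list:
    """Apply the extraction rules: keep text, keep thinking for selective use,
    reduce tool_use to its name."""
    events = []
    for block in message.get("content", []):
        kind = block.get("type")
        if kind == "text":
            events.append({"kind": "text", "text": block["text"]})
        elif kind == "thinking":
            # Kept here; a later editorial pass decides which blocks are interesting.
            events.append({"kind": "thinking", "text": block["thinking"]})
        elif kind == "tool_use":
            # Drop the verbose `input`; the tool name alone shows the action taken.
            events.append({"kind": "tool_use", "name": block["name"]})
    return events
```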
User message content can be a plain string OR an array of content blocks:
| Block type | Fields | Extraction rule |
|---|---|---|
| `text` | `type`, `text` | Always extract — the user's actual words |
| `tool_result` | `type`, `tool_use_id`, `content`, `is_error` | Drop in most cases — raw tool output (file listings, grep results). Exception: check for embedded images and `agentId` patterns (see below). |
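Normalizing the two user-content shapes might look like this (a sketch; the image and `agentId` exceptions described below would hook in where `tool_result` blocks are dropped):

```python
def extract_user_text(message: dict) -> str:
    """Return the user's actual words from either content shape."""
    content = message.get("content", "")
    if isinstance(content, str):
        return content
    # Content-block form: keep text blocks, drop tool_result noise.
    parts = [b["text"] for b in content if b.get("type") == "text"]
    return "\n".join(parts)
```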
Playwright screenshots are embedded as base64 in `tool_result` blocks:
```json
{
  "type": "tool_result",
  "content": [
    {
      "type": "image",
      "source": {
        "type": "base64",
        "media_type": "image/png",
        "data": "iVBORw0KGgo..."
      }
    }
  ]
}
```
The parser extracts these to an assets/ directory and references them by filename in the event stream. Images are never sent to the LLM — they are extracted by the script and made available for the author to include in the final post.
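Extraction to `assets/` might be sketched like this (illustrative; the real parser's naming scheme may differ):

```python
import base64
from pathlib import Path

def extract_images(tool_result: dict, assets_dir: str, prefix: str) -> list:
    """Decode base64 image blocks into assets_dir; return the filenames written."""
    out = Path(assets_dir)
    out.mkdir(parents=True, exist_ok=True)
    names = []
    content = tool_result.get("content")
    if not isinstance(content, list):
        return names  # string content cannot hold image blocks
    for i, block in enumerate(content):
        if block.get("type") != "image":
            continue
        source = block.get("source", {})
        if source.get("type") != "base64":
            continue
        ext = source.get("media_type", "image/png").split("/")[-1]
        name = f"{prefix}-{i}.{ext}"
        (out / name).write_bytes(base64.b64decode(source["data"]))
        names.append(name)
    return names
```

The returned filenames are what the event stream references, keeping the image bytes themselves out of any LLM context.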
Apply these filters in order to separate real user input from system noise:
1. Drop messages whose content is only `tool_result` blocks (no `text` block present) — these are permission grants or pure tool output.
2. Drop messages containing `<local-command-` or `<command-name>` tags — CLI command output (`/compact`, `/exit`, etc.).
3. Strip `<system-reminder>` tags from remaining text — injected system context, not user words.

A single JSONL file can span multiple context windows. Detect boundaries by:

- `sessionId` changes — the field shifts to a new UUID between messages.
- `/compact` commands — `<command-name>/compact</command-name>` indicates mid-session context compression.

These are natural chapter breaks in the narrative. For editorial guidance on using them, see the narrative-craft skill.
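Boundary detection might be sketched as follows (illustrative; assumes each entry carries a top-level `sessionId` and that command output appears in `message.content` as a string):

```python
def find_boundaries(entries: list) -> list:
    """Return indices where a new context window begins."""
    boundaries = []
    prev_session = None
    for i, entry in enumerate(entries):
        sid = entry.get("sessionId")
        if prev_session is not None and sid != prev_session:
            boundaries.append(i)  # sessionId rolled over to a new UUID
        prev_session = sid
        content = entry.get("message", {}).get("content", "")
        if isinstance(content, str) and "<command-name>/compact</command-name>" in content:
            boundaries.append(i)  # mid-session context compression
    return boundaries
```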
The main conversation's `tool_use` for an Agent call does NOT contain the subagent ID. The `agentId` appears only in the subsequent `tool_result`.
Example — the Agent tool_use block (in assistant message):
```json
{
  "type": "tool_use",
  "id": "toolu_01ABC123...",
  "name": "Agent",
  "input": {
    "description": "Brand color research",
    "prompt": "Research color palettes for a premium SaaS brand...",
    "subagent_type": "general-purpose"
  }
}
```
Example — the matching tool_result (in next user message):
```json
{
  "type": "tool_result",
  "tool_use_id": "toolu_01ABC123...",
  "content": "...agent output text...\n\nagentId: a4830b373be1203a0 (for resuming...)\n<usage>total_tokens: 51009\ntool_uses: 12\nduration_ms: 229671</usage>"
}
```
Linking procedure:
1. Scan assistant messages for `tool_use` blocks with `name: "Agent"` — note the `id` and `input.description`.
2. Find the `tool_result` with the matching `tool_use_id`.
3. Extract the agent ID from the tool_result text with the pattern `agentId:\s*([a-f0-9]+)`.
4. Load the subagent transcript `agent-{agentId}.jsonl`.
5. Parse the `<usage>` block — `total_tokens` and `duration_ms` are useful for pacing narrative.

The parser scans extracted signal for common secret patterns: `sk-*`, `ghp_*`, `AKIA*`, `Bearer *`, `password=*`. Matches appear as `pii_warnings` in the manifest. The parser flags but does not auto-redact — false positives are likely. The author decides what to redact at the triage gate.
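The linking procedure can be sketched as follows (illustrative; assumes the `tool_result` content is the string shape shown in the example above):

```python
import re

AGENT_ID_RE = re.compile(r"agentId:\s*([a-f0-9]+)")

def link_agent(tool_use: dict, tool_result: dict):
    """Given an Agent tool_use and its tool_result, return the subagent
    transcript filename, or None if no link can be made."""
    if tool_use.get("name") != "Agent":
        return None
    if tool_result.get("tool_use_id") != tool_use.get("id"):
        return None
    content = tool_result.get("content", "")
    if not isinstance(content, str):
        return None
    match = AGENT_ID_RE.search(content)
    return f"agent-{match.group(1)}.jsonl" if match else None
```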
The retell plugin includes a deterministic parser at ${CLAUDE_PLUGIN_ROOT}/scripts/parse-conversation.py:
```bash
# Parse a conversation by UUID prefix
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/parse-conversation.py 8c439a20

# Parse with full path and custom output
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/parse-conversation.py /path/to/file.jsonl --output-dir ./artifacts

# Include subagent events
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/parse-conversation.py 8c439a20 --include-subagents
```
Output: `events.json` (ordered signal events) and `manifest.json` (metadata + token estimates + PII warnings).
Discover conversations worth turning into blog posts:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/preview-conversations.py          # Last 10
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/preview-conversations.py 20       # Last 20
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/preview-conversations.py --json   # Machine-readable
```
Notes:

- `-` appears in both encoded `/` and real directory names. Validate decoded paths against the filesystem.
- Token counts in the manifest are estimates; the manifest records an `estimation_method` field for transparency.
- See the narrative-craft skill for usage guidelines.