Analyzes Claude Code transcripts to identify behavioral issues (errors, loops, scope creep, inefficiency). Generates post-mortem reports with root cause analysis.
Install:

```
/plugin marketplace add rcrsr/snoop
/plugin install snoop@rcrsr
```

Analyzes single-turn transcripts to identify issues and generate structured post-mortem reports.
JSONL with one message per line:
| Field | Description |
|---|---|
| `type` | `"user"` or `"assistant"` |
| `timestamp` | ISO timestamp |
| `message.content` | Array of blocks: `thinking`, `text`, `tool_use`, `tool_result` |
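An illustrative message line matching this schema (values are placeholders, not taken from a real transcript):

```json
{"type":"assistant","timestamp":"2025-06-01T14:02:07.000Z","message":{"content":[{"type":"text","text":"I'll read the auth module first."},{"type":"tool_use","name":"Read","input":{"file_path":"src/auth.ts"}}]}}
```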
Transcripts often exceed token limits. Survey first, then read targeted chunks.
Survey commands:

```bash
wc -l transcript.jsonl                                    # Message count
jq -r '.type' transcript.jsonl | sort | uniq -c           # Type distribution
grep -n '"is_error":true' transcript.jsonl | cut -d: -f1  # Error line numbers
```

Targeted reads:

```bash
sed -n '40,55p' transcript.jsonl | jq -s '.'  # Lines around error
head -20 transcript.jsonl | jq -s '.'         # First messages
tail -20 transcript.jsonl | jq -s '.'         # Last messages
```
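Once the survey flags a line, the same pattern narrows a read to just the failing block. A sketch, assuming the schema above (the line number `42` is a placeholder taken from the `grep` output):

```bash
# Extract only the tool_result blocks from one flagged line,
# instead of paging through the full message
sed -n '42p' transcript.jsonl | jq '.message.content[]? | select(.type=="tool_result")'
```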
| Category | Indicators |
|---|---|
| Errors/Failures | `"is_error":true`, hook blocks, unhandled exceptions |
| Thinking Loops | Repeated reasoning without progress, circular logic |
| Trial-and-Error | Random attempts without diagnosis, no hypothesis-test-conclude |
| Scope Creep | Implementing unrequested features, refactoring unrelated code |
| Inefficient Tools | Same file read multiple times, redundant searches |
| Incomplete Work | Started tasks with no completion, missing verification |
Example issue detection:

```json
{"type":"assistant","message":{"content":[
  {"type":"tool_use","name":"Read","input":{"file_path":"src/auth.ts"}},
  {"type":"tool_use","name":"Read","input":{"file_path":"src/auth.ts"}}
]}}
```
Issue: Redundant file read — same file read twice in one response.
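This check can also be run mechanically across the whole transcript. A sketch, assuming `tool_use` blocks named `Read` carry `input.file_path` as in the schema above:

```bash
# List any file path that appears in more than one Read call
# (uniq -d prints only duplicated lines)
jq -r 'select(.type=="assistant")
       | .message.content[]?
       | select(.type=="tool_use" and .name=="Read")
       | .input.file_path' transcript.jsonl | sort | uniq -d
```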
| Severity | Criteria |
|---|---|
| Critical | Task fails to complete, data loss possible, security vulnerability |
| High | Significant wasted effort (>50% of session), incorrect implementation |
| Medium | Inefficiency or minor deviation from best practices |
| Low | Style issues, optimization opportunities |
# Post-Mortem Report
**Transcript**: {filepath}
**Duration**: {start} to {end}
**Messages**: {total} ({user} user, {assistant} assistant)
## Executive Summary
{2-3 sentences: what happened, primary issues}
## Timeline
| Time | Event | Notes |
| ---- | ----- | ----- |
| HH:MM:SS | {event} | {context} |
## Issues Identified
### [{SEVERITY}] {Issue Title}
**Category**: {category}
**Location**: Messages {N}-{M}
**Description**: {what happened}
**Impact**: {consequences}
## Root Cause Analysis
{Why did these issues occur?}
## Recommendations
1. {Specific actionable improvement}
2. {Specific actionable improvement}
## Metrics
- Tool calls: {count}
- Errors: {count}
- Files read: {unique count}
- Files modified: {count}
Example issue output:
### [HIGH] Redundant File Reads
**Category**: Inefficient Tools
**Location**: Messages 12-14
**Description**: src/auth.ts read 3 times within 2 minutes
**Impact**: Wasted ~15K tokens, slowed analysis
Use this agent when analyzing conversation transcripts to find behaviors worth preventing with hooks. Examples:

<example>
Context: User is running /hookify command without arguments
user: "/hookify"
assistant: "I'll analyze the conversation to find behaviors you want to prevent"
<commentary>The /hookify command without arguments triggers conversation analysis to find unwanted behaviors.</commentary>
</example>

<example>
Context: User wants to create hooks from recent frustrations
user: "Can you look back at this conversation and help me create hooks for the mistakes you made?"
assistant: "I'll use the conversation-analyzer agent to identify the issues and suggest hooks."
<commentary>User explicitly asks to analyze conversation for mistakes that should be prevented.</commentary>
</example>