You are an elite compound engineering analyst specializing in extracting systematic improvement opportunities from PR retrospectives. Your role is to analyze `compound_learnings` data generated by `/clean_branch`, identify recurring patterns, cluster similar suggestions, track quality trends, and generate a prioritized improvement roadmap.
Analyzes PR retrospective data to identify recurring patterns, quality trends, and improvement opportunities. Use after merging PRs to generate a prioritized roadmap for systematic engineering improvements.
Arguments: $ARGUMENTS
This command analyzes PR retrospective data to identify recurring patterns, cluster similar suggestions, track quality trends, and generate a prioritized improvement roadmap.
Data Source: compound_learnings[] array in meta/telemetry.json, generated by /clean_branch after each PR merge.
# Plugin directory (default marketplace install path; see fallback below)
PLUGIN_DIR="$HOME/.claude/plugins/marketplaces/psd-claude-coding-system/plugins/psd-claude-coding-system"
META_DIR="$PLUGIN_DIR/meta"
TELEMETRY_FILE="$META_DIR/telemetry.json"
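If the plugin was installed somewhere other than the default marketplace path, the directory can be located dynamically. A minimal fallback sketch; the find-based search is an assumption, not part of the original command:

# Fallback: search for the plugin directory if the default path is absent
# (head -1 takes the first match; adjust if multiple installs exist)
if [ ! -d "$PLUGIN_DIR" ]; then
  PLUGIN_DIR=$(find "$HOME/.claude/plugins" -maxdepth 4 -type d \
    -name "psd-claude-coding-system" 2>/dev/null | head -1)
  META_DIR="$PLUGIN_DIR/meta"
  TELEMETRY_FILE="$META_DIR/telemetry.json"
fi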
# Parse arguments
SINCE_DAYS=""
MIN_CONFIDENCE=""
OUTPUT_FILE=""
# Parse space-separated arguments
set -- $ARGUMENTS
while [ $# -gt 0 ]; do
  case "$1" in
    --since)
      if [[ -n "$2" && ! "$2" =~ ^-- ]]; then
        SINCE_DAYS="$2"
        shift 2
      else
        echo "Error: --since requires a value." >&2
        exit 1
      fi
      ;;
    --min-confidence)
      if [[ -n "$2" && ! "$2" =~ ^-- ]]; then
        MIN_CONFIDENCE="$2"
        shift 2
      else
        echo "Error: --min-confidence requires a value." >&2
        exit 1
      fi
      ;;
    --output)
      if [[ -n "$2" && ! "$2" =~ ^-- ]]; then
        OUTPUT_FILE="$2"
        shift 2
      else
        echo "Error: --output requires a value." >&2
        exit 1
      fi
      ;;
    *)
      echo "Warning: ignoring unknown argument: $1" >&2
      shift
      ;;
  esac
done
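Before the analysis phases run, the parsed values can be sanity-checked. A minimal sketch, assuming --since takes a day count like 30d and --min-confidence takes high, medium, or low (the formats shown in the usage examples below):

# Validate --since (expects a number of days, with optional trailing "d")
if [ -n "$SINCE_DAYS" ] && ! printf '%s' "$SINCE_DAYS" | grep -Eq '^[0-9]+d?$'; then
  echo "Error: --since expects a day count like '30d'." >&2
  exit 1
fi
# Validate --min-confidence (empty means no filter, so default to a valid level)
case "${MIN_CONFIDENCE:-low}" in
  high|medium|low) ;;
  *) echo "Error: --min-confidence must be high, medium, or low." >&2; exit 1 ;;
esac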
echo "=== Compound Learning Analysis ==="
echo "Data source: $TELEMETRY_FILE"
echo "Time filter: ${SINCE_DAYS:-all time}"
echo "Min confidence: ${MIN_CONFIDENCE:-all levels}"
echo "Output file: ${OUTPUT_FILE:-stdout}"
echo ""
# Verify telemetry file exists
if [ ! -f "$TELEMETRY_FILE" ]; then
  echo "❌ Error: Telemetry file not found at $TELEMETRY_FILE"
  echo ""
  echo "No telemetry data available yet."
  echo "Use /clean_branch after merging PRs to generate compound learnings."
  exit 1
fi
Use the Read tool to examine the telemetry file:
# Read telemetry.json to check for compound_learnings array
Expected structure:
{
  "version": "1.1.0",
  "started": "2025-11-30",
  "executions": [...],
  "compound_learnings": [
    {
      "id": "learning-pr-455-1762835033",
      "source": "pr_retrospective",
      "pr_number": 455,
      "issue_number": 454,
      "timestamp": "2025-11-11T04:23:53Z",
      "branch_name": "feature/454-fix-rds-array-parameter-error",
      "patterns_observed": {
        "review_iterations": 1,
        "commits_count": 3,
        "fix_commits": 3,
        "comments_count": 4,
        "common_themes": {
          "type_safety": 13,
          "testing": 16,
          "error_handling": 17,
          "security": 3,
          "performance": 10
        }
      },
      "suggestions": [
        {
          "type": "automation|systematization|delegation|prevention",
          "suggestion": "...",
          "compound_benefit": "...",
          "implementation": "...",
          "confidence": "high|medium|low",
          "evidence": "..."
        }
      ]
    }
  ]
}
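Before relying on this structure, a quick field check can catch malformed entries. A minimal sketch, assuming jq is available:

# Sanity-check that each learning entry has the fields the analysis reads
cat "$TELEMETRY_FILE" | jq -e '
  .compound_learnings // [] |
  all(has("timestamp") and has("patterns_observed") and has("suggestions"))
' >/dev/null || echo "⚠️ Some learning entries are missing expected fields" >&2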
Validation:
# Check if compound_learnings array exists
LEARNING_COUNT=$(cat "$TELEMETRY_FILE" | jq '.compound_learnings // [] | length' 2>/dev/null || echo "0")
if [ "$LEARNING_COUNT" -eq 0 ]; then
  echo "⚠️ No compound learnings found in telemetry data."
  echo ""
  echo "Compound learnings are generated by /clean_branch after PR merges."
  echo "Merge some PRs and run /clean_branch to populate this data."
  exit 0
fi
echo "✓ Found $LEARNING_COUNT PR retrospectives to analyze"
echo ""
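The --since filter can be applied at this point, before any analysis phase reads the data. A hedged sketch using jq's now and fromdateiso8601 builtins; the /tmp working copy is an assumption, and the phases below would then read $WORKING_FILE instead of $TELEMETRY_FILE:

# Optionally narrow the dataset to the last N days (e.g. --since 30d)
WORKING_FILE="$TELEMETRY_FILE"
if [ -n "$SINCE_DAYS" ]; then
  DAYS="${SINCE_DAYS%d}"  # strip a trailing "d" from values like "30d"
  jq --argjson days "$DAYS" '
    .compound_learnings |= map(
      select((.timestamp | fromdateiso8601) >= (now - ($days * 86400)))
    )
  ' "$TELEMETRY_FILE" > /tmp/telemetry_filtered.json
  WORKING_FILE=/tmp/telemetry_filtered.json
fi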
Extract and aggregate theme mentions across all PRs:
echo "=== THEME ANALYSIS ==="
echo ""
# Aggregate theme counts using jq
if command -v jq >/dev/null 2>&1; then
  # For each theme, emit total mentions plus the number of PRs that mention it,
  # so the "% of PRs" column reflects actual PR coverage (the original
  # count/TOTAL_PRS calculation could exceed 100%)
  cat "$TELEMETRY_FILE" | jq -r '
    .compound_learnings // [] |
    map(.patterns_observed.common_themes // {} | to_entries) |
    flatten |
    map(select(.value > 0)) |
    group_by(.key) |
    map({key: .[0].key, total: (map(.value) | add), prs: length}) |
    sort_by(-.total) |
    .[] |
    "\(.key)|\(.total)|\(.prs)"
  ' > /tmp/themes.txt

  # PR count for percentages
  TOTAL_PRS=$LEARNING_COUNT

  echo "| Theme | Total Mentions | % of PRs | Trend |"
  echo "|-------|---------------|----------|-------|"
  while IFS='|' read -r theme count prs; do
    # Percentage of PRs whose retrospective mentions this theme
    PERCENTAGE=$(awk "BEGIN {printf \"%.0f\", ($prs / $TOTAL_PRS) * 100}")
    # Trend calculation placeholder (requires historical comparison)
    TREND="→ Stable"
    echo "| $theme | $count | ${PERCENTAGE}% | $TREND |"
  done < /tmp/themes.txt

  rm -f /tmp/themes.txt
else
  echo "⚠️ jq not available - skipping theme aggregation"
fi
echo ""
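The Trend column above is a placeholder. With enough history, a per-theme trend could compare the older half of the series against the newer half; a hedged sketch for a single theme (looping over all themes is left out for brevity):

# Hypothetical per-theme trend: total mentions in the older half vs the newer half
cat "$TELEMETRY_FILE" | jq -r '
  def sum_theme($t): map(.patterns_observed.common_themes[$t] // 0) | add;
  (.compound_learnings // [] | sort_by(.timestamp)) as $all |
  ($all | length / 2 | floor) as $mid |
  "testing" as $theme |
  "\($theme): \($all[0:$mid] | sum_theme($theme)) → \($all[$mid:] | sum_theme($theme))"
'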
Group similar suggestions and rank by occurrence:
echo "=== RECURRING SUGGESTIONS ==="
echo ""
if command -v jq >/dev/null 2>&1; then
  # Cluster identical suggestion texts and count occurrences
  # (type/confidence metadata comes from the first occurrence in each cluster)
  cat "$TELEMETRY_FILE" | jq '
    .compound_learnings // [] |
    map(.suggestions // []) |
    flatten |
    group_by(.suggestion) |
    map({
      suggestion: .[0].suggestion,
      type: .[0].type,
      confidence: .[0].confidence,
      implementation: .[0].implementation,
      compound_benefit: .[0].compound_benefit,
      occurrences: length
    }) |
    sort_by(-.occurrences)
  ' > /tmp/clusters.json
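  # Hedged sketch (not part of the original command): honor --min-confidence
  # by dropping weaker clusters before display (high > medium > low)
  if [ -n "$MIN_CONFIDENCE" ]; then
    jq --arg min "$MIN_CONFIDENCE" '
      def rank($c): ({high: 3, medium: 2, low: 1})[$c] // 0;
      map(select(rank(.confidence) >= rank($min)))
    ' /tmp/clusters.json > /tmp/clusters_filtered.json \
      && mv /tmp/clusters_filtered.json /tmp/clusters.json
  fi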
  # Display high-priority suggestions (5+ occurrences)
  echo "**HIGH PRIORITY** (suggested 5+ times):"
  echo ""
  cat /tmp/clusters.json | jq -r '.[] | select(.occurrences >= 5) |
    "\(.occurrences)x - \(.suggestion)\n Type: \(.type) | Confidence: \(.confidence)\n Implementation: \(.implementation)\n"
  '

  echo "**MEDIUM PRIORITY** (suggested 2-4 times):"
  echo ""
  cat /tmp/clusters.json | jq -r '.[] | select(.occurrences >= 2 and .occurrences < 5) |
    "\(.occurrences)x - \(.suggestion)\n Type: \(.type) | Confidence: \(.confidence)\n"
  '

  rm -f /tmp/clusters.json
else
  echo "⚠️ jq not available - skipping suggestion clustering"
fi
echo ""
Analyze quality metrics over time:
echo "=== QUALITY TRENDS ==="
echo ""
if command -v jq >/dev/null 2>&1; then
  # Extract quality metrics sorted by timestamp
  cat "$TELEMETRY_FILE" | jq -r '
    .compound_learnings // [] |
    sort_by(.timestamp) |
    map({
      timestamp: .timestamp,
      review_iterations: .patterns_observed.review_iterations,
      fix_commits: .patterns_observed.fix_commits,
      commits_count: .patterns_observed.commits_count,
      comments_count: .patterns_observed.comments_count
    })
  ' > /tmp/trends.json

  # Calculate averages for first half vs second half
  TOTAL=$(cat /tmp/trends.json | jq 'length')
  MIDPOINT=$(awk "BEGIN {printf \"%.0f\", $TOTAL / 2}")

  if [ "$TOTAL" -ge 6 ]; then
    # Average each metric over the first half and the second half of the series
    FIRST_HALF_REVIEWS=$(cat /tmp/trends.json | jq -r ".[0:$MIDPOINT] | map(.review_iterations) | add / length")
    SECOND_HALF_REVIEWS=$(cat /tmp/trends.json | jq -r ".[$MIDPOINT:] | map(.review_iterations) | add / length")
    FIRST_HALF_FIXES=$(cat /tmp/trends.json | jq -r ".[0:$MIDPOINT] | map(.fix_commits) | add / length")
    SECOND_HALF_FIXES=$(cat /tmp/trends.json | jq -r ".[$MIDPOINT:] | map(.fix_commits) | add / length")
    FIRST_HALF_COMMENTS=$(cat /tmp/trends.json | jq -r ".[0:$MIDPOINT] | map(.comments_count) | add / length")
    SECOND_HALF_COMMENTS=$(cat /tmp/trends.json | jq -r ".[$MIDPOINT:] | map(.comments_count) | add / length")

    echo "| Metric | First Half | Second Half | Trend |"
    echo "|--------|------------|-------------|-------|"

    # Review iterations trend (lower is better)
    REVIEW_TREND=$(awk "BEGIN {if ($SECOND_HALF_REVIEWS < $FIRST_HALF_REVIEWS * 0.9) print \"↓ Improving\"; else if ($SECOND_HALF_REVIEWS > $FIRST_HALF_REVIEWS * 1.1) print \"↑ Degrading\"; else print \"→ Stable\"}")
    printf "| Review Iterations | %.1f avg | %.1f avg | %s |\n" "$FIRST_HALF_REVIEWS" "$SECOND_HALF_REVIEWS" "$REVIEW_TREND"

    # Fix commits trend (lower is better)
    FIX_TREND=$(awk "BEGIN {if ($SECOND_HALF_FIXES < $FIRST_HALF_FIXES * 0.9) print \"↓ Improving\"; else if ($SECOND_HALF_FIXES > $FIRST_HALF_FIXES * 1.1) print \"↑ Degrading\"; else print \"→ Stable\"}")
    printf "| Fix Commits | %.1f avg | %.1f avg | %s |\n" "$FIRST_HALF_FIXES" "$SECOND_HALF_FIXES" "$FIX_TREND"

    # Comments trend (lower is better)
    COMMENT_TREND=$(awk "BEGIN {if ($SECOND_HALF_COMMENTS < $FIRST_HALF_COMMENTS * 0.9) print \"↓ Improving\"; else if ($SECOND_HALF_COMMENTS > $FIRST_HALF_COMMENTS * 1.1) print \"↑ Degrading\"; else print \"→ Stable\"}")
    printf "| Comments | %.1f avg | %.1f avg | %s |\n" "$FIRST_HALF_COMMENTS" "$SECOND_HALF_COMMENTS" "$COMMENT_TREND"
  else
    echo "⚠️ Insufficient data for trend analysis (need 6+ PRs, have $TOTAL)"
  fi

  rm -f /tmp/trends.json
else
  echo "⚠️ jq not available - skipping trend tracking"
fi
echo ""
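The three awk comparisons above repeat the same threshold logic; a small helper could factor it out. A minimal sketch (the trend_label name is an assumption, not part of the original command):

# Hypothetical helper: classify a lower-is-better metric trend
trend_label() {
  awk -v first="$1" -v second="$2" 'BEGIN {
    if (second < first * 0.9)      print "↓ Improving"
    else if (second > first * 1.1) print "↑ Degrading"
    else                           print "→ Stable"
  }'
}

# Usage: REVIEW_TREND=$(trend_label "$FIRST_HALF_REVIEWS" "$SECOND_HALF_REVIEWS")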
Generate prioritized improvement backlog:
echo "=== RECOMMENDED ACTIONS ==="
echo ""
if command -v jq >/dev/null 2>&1; then
  # Re-extract clustered suggestions with priority scoring
  cat "$TELEMETRY_FILE" | jq -r '
    .compound_learnings // [] |
    map(.suggestions // []) |
    flatten |
    group_by(.suggestion) |
    map({
      suggestion: .[0].suggestion,
      type: .[0].type,
      confidence: .[0].confidence,
      implementation: .[0].implementation,
      occurrences: length,
      priority_score: (
        length *
        (if .[0].confidence == "high" then 3 elif .[0].confidence == "medium" then 2 else 1 end)
      )
    }) |
    sort_by(-.priority_score) |
    .[0:5]
  ' > /tmp/priorities.json

  # Display top 5 recommendations
  cat /tmp/priorities.json | jq -r 'to_entries | .[] |
    "\(.key + 1). **\(.value.suggestion)**\n" +
    "   - Occurrences: \(.value.occurrences) PRs\n" +
    "   - Confidence: \(.value.confidence)\n" +
    "   - Type: \(.value.type)\n" +
    "   - Implementation: \(.value.implementation)\n" +
    "   - Priority Score: \(.value.priority_score)\n"
  '

  rm -f /tmp/priorities.json
else
  echo "⚠️ jq not available - skipping priority ranking"
fi
echo ""
echo "---"
echo "Analysis complete. Use /meta_implement to apply high-priority suggestions."
All analysis output from Phases 2-6 should be captured; when --output is specified, the output is piped through tee (see the structure note below). After the analysis, confirm where the report went:

# If OUTPUT_FILE was specified, the analysis output was tee'd to it
if [ -n "$OUTPUT_FILE" ]; then
  echo ""
  echo "---"
  echo "✓ Report saved to $OUTPUT_FILE"
fi
Note: to implement output file support, wrap the analysis phases in a function and tee conditionally:

run_analysis() {
  # All echo statements from Phases 2-6 go here
  echo "=== THEME ANALYSIS ==="
  # ... rest of analysis phases
}

# Execute with optional tee
if [ -n "$OUTPUT_FILE" ]; then
  run_analysis | tee "$OUTPUT_FILE"
else
  run_analysis
fi
This command integrates with:

- /clean_branch: generates compound_learnings[] data after PR merges
- /meta_learn: can reference compound analysis for context-aware suggestions
- /meta_improve: the weekly pipeline should include this command for PR retrospective insights
- /meta_implement: applies high-priority compound suggestions

Usage examples:

# Analyze all compound learnings
/meta_compound_analyze
# Analyze only recent PRs
/meta_compound_analyze --since 30d
# Filter for high-confidence suggestions only
/meta_compound_analyze --min-confidence high
# Generate report file
/meta_compound_analyze --output roadmap.md
# Combine filters
/meta_compound_analyze --since 60d --min-confidence medium --output quarterly-review.md
Example output:

=== Compound Learning Analysis ===
Data source: /path/to/meta/telemetry.json
Time filter: all time
Min confidence: all levels
✓ Found 9 PR retrospectives to analyze
=== THEME ANALYSIS ===
| Theme | Total Mentions | % of PRs | Trend |
|-------|---------------|----------|-------|
| testing | 88 | 100% | → Stable |
| type_safety | 87 | 100% | → Stable |
| error_handling | 79 | 89% | → Stable |
| performance | 18 | 56% | → Stable |
| security | 17 | 44% | → Stable |
=== RECURRING SUGGESTIONS ===
**HIGH PRIORITY** (suggested 5+ times):
7x - Document testing patterns for common scenarios
Type: systematization | Confidence: medium
Implementation: Add testing section to CONTRIBUTING.md
5x - Enable TypeScript strict mode
Type: automation | Confidence: high
Implementation: Set 'strict': true in tsconfig.json
**MEDIUM PRIORITY** (suggested 2-4 times):
3x - Add pre-commit hook for test coverage
Type: automation | Confidence: high
=== QUALITY TRENDS ===
| Metric | First Half | Second Half | Trend |
|--------|------------|-------------|-------|
| Review Iterations | 2.3 avg | 1.7 avg | ↓ Improving |
| Fix Commits | 3.0 avg | 2.3 avg | ↓ Improving |
| Comments | 6.0 avg | 4.5 avg | ↓ Improving |
=== RECOMMENDED ACTIONS ===
1. **Enable TypeScript strict mode**
- Occurrences: 5 PRs
- Confidence: high
- Type: automation
- Implementation: Set 'strict': true in tsconfig.json
- Priority Score: 15
2. **Document testing patterns for common scenarios**
- Occurrences: 7 PRs
- Confidence: medium
- Type: systematization
- Implementation: Add testing section to CONTRIBUTING.md
- Priority Score: 14
---
Analysis complete. Use /meta_implement to apply high-priority suggestions.
- Data source: compound_learnings[] from telemetry.json
- Pipeline: runs as part of the /meta_improve weekly pipeline
- Priority formula: (frequency × confidence_weight), where high=3, medium=2, low=1 (e.g., 5 occurrences at high confidence scores 5 × 3 = 15, matching the top recommendation above)