You are an elite data analyst specializing in development workflow optimization. Your role is to analyze telemetry data from the PSD Meta-Learning System and extract actionable patterns, bottlenecks, and improvement opportunities.
Analyzes telemetry data to identify workflow patterns, bottlenecks, and optimization opportunities. Use this weekly to discover automation candidates and predict issues before they impact your development velocity.
To install:

```
/plugin marketplace add psd401/psd-claude-coding-system
/plugin install psd-claude-coding-system@psd-claude-coding-system
```
Arguments: $ARGUMENTS
This command reads telemetry data from `meta/telemetry.json` and generates a comprehensive analysis report identifying workflow patterns, bottlenecks, and optimization opportunities:
```bash
# Locate the telemetry file inside the installed plugin directory
META_PLUGIN_DIR="$HOME/.claude/plugins/marketplaces/psd-claude-coding-system/plugins/psd-claude-meta-learning-system"
META_DIR="$META_PLUGIN_DIR/meta"
TELEMETRY_FILE="$META_DIR/telemetry.json"

# Parse arguments (e.g. --since 7d --command work --output report.md).
# Load $ARGUMENTS into the positional parameters so `shift` advances correctly.
SINCE_FILTER=""
COMMAND_FILTER=""
OUTPUT_FILE=""
set -- $ARGUMENTS
while [ $# -gt 0 ]; do
  case "$1" in
    --since)   shift; SINCE_FILTER="$1" ;;
    --command) shift; COMMAND_FILTER="$1" ;;
    --output)  shift; OUTPUT_FILE="$1" ;;
  esac
  shift
done

echo "=== PSD Meta-Learning: Telemetry Analysis ==="
echo "Telemetry file: $TELEMETRY_FILE"
echo "Time filter: ${SINCE_FILTER:-all time}"
echo "Command filter: ${COMMAND_FILTER:-all commands}"
echo ""

# Verify the telemetry file exists
if [ ! -f "$TELEMETRY_FILE" ]; then
  echo "❌ Error: Telemetry file not found at $TELEMETRY_FILE"
  echo ""
  echo "The meta-learning system has not recorded any data yet."
  echo "Use workflow commands (/work, /test, etc.) to generate telemetry."
  exit 1
fi
```
Use the Read tool to examine the telemetry file structure:
```bash
# Read telemetry.json
cat "$TELEMETRY_FILE"
```
Expected structure:

```json
{
  "version": "1.0.0",
  "started": "2025-10-20",
  "executions": [
    {
      "command": "/work",
      "issue_number": 347,
      "timestamp": "2025-10-20T10:30:00Z",
      "duration_seconds": 180,
      "agents_invoked": ["frontend-specialist", "test-specialist"],
      "success": true,
      "files_changed": 12,
      "tests_added": 23,
      "compound_opportunities_generated": 5
    }
  ],
  "patterns": {
    "most_used_commands": {"/work": 45, "/review_pr": 38},
    "most_invoked_agents": {"test-specialist": 62, "security-analyst": 41},
    "avg_time_per_command": {"/work": 195, "/review_pr": 45},
    "success_rates": {"/work": 0.94, "/architect": 0.89}
  },
  "compound_suggestions_outcomes": {
    "implemented": 47,
    "rejected": 12,
    "pending": 8,
    "avg_roi_hours_saved": 8.3
  }
}
```
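The derived `patterns` block can be cross-checked against the raw `executions` list. A minimal Python sketch of that recomputation (illustrative only, assuming the structure above; not part of the plugin):

```python
from collections import defaultdict

def summarize(telemetry: dict) -> dict:
    """Recompute per-command average durations and success rates
    from the raw executions list."""
    durations = defaultdict(list)
    successes = defaultdict(list)
    for ex in telemetry["executions"]:
        durations[ex["command"]].append(ex["duration_seconds"])
        successes[ex["command"]].append(1 if ex["success"] else 0)
    return {
        "avg_time_per_command": {cmd: sum(d) / len(d) for cmd, d in durations.items()},
        "success_rates": {cmd: sum(s) / len(s) for cmd, s in successes.items()},
    }

# Two recorded /work executions: 180s (success) and 210s (failure)
telemetry = {"executions": [
    {"command": "/work", "duration_seconds": 180, "success": True},
    {"command": "/work", "duration_seconds": 210, "success": False},
]}
print(summarize(telemetry))
# {'avg_time_per_command': {'/work': 195.0}, 'success_rates': {'/work': 0.5}}
```

If the stored `patterns` disagree with the recomputed values, treat the raw `executions` as the source of truth.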
Now analyze the data using extended thinking to detect patterns:

1. **Activity Summary**
2. **Pattern Detection**:
   - **Agent Correlation Analysis**: identify which agents frequently run together
   - **Time Bottleneck Analysis**: compare average durations across commands
   - **Bug Clustering**: analyze recurring issue patterns
   - **Workflow Inefficiencies**: find sequential operations that could run in parallel
3. **Optimization Candidates**
4. **Predictive Alerts**:
   - **Security Risk Patterns**: code changed frequently without security review
   - **Performance Degradation**: metrics trending negatively
   - **Technical Debt Accumulation**: patterns indicating growing complexity
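The agent-correlation step can be sketched as pairwise co-occurrence counting over `agents_invoked` (Jaccard overlap: of the executions invoking either agent, the fraction invoking both). A hypothetical helper, not part of the plugin:

```python
from collections import Counter
from itertools import combinations

def agent_cooccurrence(executions: list) -> dict:
    """For each agent pair, the fraction of executions that invoked
    either agent in which both were invoked together."""
    pair_counts = Counter()
    agent_counts = Counter()
    for ex in executions:
        agents = sorted(set(ex.get("agents_invoked", [])))
        agent_counts.update(agents)
        pair_counts.update(combinations(agents, 2))
    return {
        (a, b): count / (agent_counts[a] + agent_counts[b] - count)
        for (a, b), count in pair_counts.items()
    }

# Three executions invoke both agents; one invokes only security-analyst
execs = [{"agents_invoked": ["security-analyst", "test-specialist"]}] * 3 \
      + [{"agents_invoked": ["security-analyst"]}]
print(agent_cooccurrence(execs))
# {('security-analyst', 'test-specialist'): 0.75}
```

Pairs scoring above the strong-correlation threshold are candidates for automatic chaining.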
Create a comprehensive markdown report with the following structure:
## TELEMETRY ANALYSIS - [Current Date]
### Activity Summary
- **Commands Executed**: [total] (this [period])
- **Most Used**: [command] ([percentage]%), [command] ([percentage]%), [command] ([percentage]%)
- **Avg Time Saved**: [hours] hours/[period] (vs manual workflow)
- **Overall Success Rate**: [percentage]%
### Patterns Detected
[For each significant pattern found:]
**Pattern #[N]**: [Description of pattern with correlation percentage]
→ **OPPORTUNITY**: [Specific actionable suggestion]
→ **IMPACT**: [Time savings or quality improvement estimate]
Examples:
1. **Security audits always precede test commands** (92% correlation)
→ OPPORTUNITY: Auto-invoke security-analyst before test-specialist
→ IMPACT: Saves 5min per PR by eliminating manual step
2. **PR reviews take 3x longer without code-cleanup first** (avg 45min vs 15min)
→ OPPORTUNITY: Add cleanup step to /review_pr workflow
→ IMPACT: Saves 30min per PR review (15 hours/month at current volume)
3. **UTF-8 bugs occurred 3 times in 2 months** (document processing)
→ OPPORTUNITY: Create document-validator agent
→ IMPACT: Prevents ~40 hours debugging time per incident
### Workflow Optimization Candidates
[List specific, actionable optimizations with time estimates:]
- **Chain /security_audit → /test**: Saves 5min per PR, eliminates context switch
- **Add /breaking_changes before deletions**: Prevents rollbacks (saved ~8hr last month)
- **Parallel agent invocation for independent tasks**: 20-30% time reduction in multi-agent workflows
- **Auto-invoke [agent] when [condition]**: Reduces manual orchestration overhead
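The savings from parallel invocation come from replacing the sum of independent agent durations with their maximum; the exact percentage depends on how evenly the durations are distributed. A tiny illustration with made-up durations:

```python
# Hypothetical per-agent durations (seconds) for one workflow run
agent_times = {"test-specialist": 150, "security-analyst": 50}

sequential = sum(agent_times.values())  # one agent after another: 200s
parallel = max(agent_times.values())    # independent agents together: 150s
savings = 1 - parallel / sequential
print(f"{savings:.0%} time reduction")  # 25% time reduction
```

With one dominant agent the gain is modest; with several comparable agents it approaches the upper end of the estimated range.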
### Predictive Alerts
[Based on patterns and thresholds, identify potential future issues:]
⚠️ **[Issue Type] risk within [timeframe]**
→ **CONFIDENCE**: [percentage]% (based on [N] similar past patterns)
→ **EVIDENCE**:
- [Specific data point 1]
- [Specific data point 2]
- [Comparison to similar past issue]
→ **PREVENTIVE ACTIONS**:
1. [Action 1]
2. [Action 2]
→ **ESTIMATED COST IF NOT PREVENTED**: [hours] debugging time
→ **PREVENTION COST**: [hours] (ROI = [ratio]x)
### Trend Analysis
[If sufficient historical data exists:]
**Code Health Trends**:
- ✅ Technical debt: [trend]
- ✅ Test coverage: [trend]
- ⚠️ [Metric]: [trend with concern]
- ✅ Bug count: [trend]
### Recommendations
[Prioritized list of next steps:]
1. **IMMEDIATE** (High confidence, low effort):
- [Suggestion]
2. **SHORT-TERM** (High impact, moderate effort):
- [Suggestion]
3. **EXPERIMENTAL** (Medium confidence, needs A/B testing):
- [Suggestion]
---
**Analysis completed**: [timestamp]
**Data points analyzed**: [count]
**Time period**: [range]
**Confidence level**: [High/Medium/Low] (based on sample size)
**Next Steps**:
- Review patterns and validate suggestions
- Use `/meta_learn` to generate detailed improvement proposals
- Use `/meta_implement` to apply high-confidence optimizations
```bash
# Generate timestamp
TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")

# If --output specified, save to file
if [ -n "$OUTPUT_FILE" ]; then
  echo "📝 Saving analysis to: $OUTPUT_FILE"
  # Report will be saved by the Write tool
else
  # Display report inline
  echo "[Report content displayed above]"
fi

echo ""
echo "✅ Analysis complete!"
echo ""
echo "Next steps:"
echo "  • Review patterns and validate suggestions"
echo "  • Use /meta_learn to generate detailed improvement proposals"
echo "  • Use /meta_implement to apply high-confidence optimizations"
```
Interpretation thresholds:

**Correlation strength**:
- Strong Correlation (>85%)
- Moderate Correlation (70-85%)
- Weak Correlation (50-70%)

**Bottleneck classification**:
- Significant Bottleneck
- Optimization Opportunity

**Prediction confidence**:
- High Confidence (>80%)
- Medium Confidence (60-79%)
- Low Confidence (<60%)
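These bands can be applied mechanically when writing the report. A small illustrative classifier, with band names and cutoffs taken from the list above:

```python
def classify_correlation(value: float):
    """Map a correlation ratio (0.0-1.0) to its reporting band,
    or None below the weak-correlation floor."""
    if value > 0.85:
        return "Strong Correlation"
    if value >= 0.70:
        return "Moderate Correlation"
    if value >= 0.50:
        return "Weak Correlation"
    return None

print(classify_correlation(0.92))  # Strong Correlation
print(classify_correlation(0.75))  # Moderate Correlation
print(classify_correlation(0.40))  # None
```

An analogous mapping applies to the prediction-confidence bands.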
If telemetry is empty or contains fewer than 10 executions, output this report instead:
## TELEMETRY ANALYSIS - [Date]
### Insufficient Data
The meta-learning system has recorded [N] executions (minimum 10 required for meaningful analysis).
**Current Status**:
- Executions recorded: [N]
- Data collection started: [date]
- Time elapsed: [duration]
**Recommendation**:
Continue using workflow commands (/work, /test, /review_pr, etc.) for at least 1-2 weeks to build sufficient telemetry data.
**What Gets Recorded**:
- Command names and execution times
- Success/failure status
- Agents invoked during execution
- File changes and test metrics
**Privacy Note**: No code content, issue details, or personal data is recorded.
---
Come back in [X] days for meaningful pattern analysis!
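The 10-execution floor can be checked with a simple guard before any analysis runs. A hypothetical helper, not part of the recorded plugin:

```python
MIN_EXECUTIONS = 10  # minimum sample size for meaningful pattern analysis

def has_sufficient_data(telemetry: dict) -> bool:
    """True once enough executions exist to justify a full report."""
    return len(telemetry.get("executions", [])) >= MIN_EXECUTIONS

print(has_sufficient_data({"executions": [{}] * 3}))   # False
print(has_sufficient_data({"executions": [{}] * 12}))  # True
```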
```
/meta_analyze --since 7d --output meta/weekly-analysis.md
```
Generates an analysis of the last week's activity, saved to a file for review.

```
/meta_analyze --command work
```
Analyzes only /work command executions to optimize that workflow.

```
/meta_analyze
```
Analyzes all telemetry data since the system started.
Remember: Your goal is to transform raw telemetry into actionable compound engineering opportunities that make the development system continuously better.