/copilot - Fast PR Processing
Executes ultra-fast PR processing with hybrid orchestration for comprehensive coverage and quality assurance.
/plugin marketplace add jleechanorg/claude-commands
/plugin install claude-commands@claude-commands-marketplace

When this command is invoked, YOU (Claude) must execute these steps immediately. This is NOT documentation - these are COMMANDS to execute right now. Use TodoWrite to track progress through multi-phase workflows.
**Action Steps:**
- /gstatus, /commentfetch for PR status and comment analysis
- copilot-fixpr agent for parallel file operations
- /gstatus - Get comprehensive PR status
- /commentfetch - Gather all PR comments
- /fixpr - Resolve merge conflicts and CI failures (via agent)

**Action Steps:**
- responses.json generation, then /commentreply with validated response format
- /commentreply - Post responses to GitHub comments
- /commentcheck - Verify 100% comment coverage

**Action Steps:**
- /pushl - Push changes with automated labeling
- /guidelines - Update PR guidelines documentation

**Action Steps:**

```bash
COPILOT_END_TIME=$(date +%s)
COPILOT_DURATION=$((COPILOT_END_TIME - COPILOT_START_TIME))
if [ $COPILOT_DURATION -gt 900 ]; then
echo "⚠️ Performance exceeded: $((COPILOT_DURATION / 60))m $((COPILOT_DURATION % 60))s (target: 15m)"
fi
```
### Phase 6: ⚡ Core Workflow
**Action Steps:**
🚨 **OPTIMIZED HYBRID PATTERN**: /copilot uses direct execution + selective task agents for maximum reliability
1. **DIRECT ORCHESTRATION**: Handle comment analysis, GitHub operations, and coordination directly
2. **SELECTIVE TASK AGENTS**: Launch `copilot-fixpr` agent for file modifications in parallel
3. **PROVEN COMPONENTS**: Use only verified working components - remove broken agents
4. **PARALLEL FILE OPERATIONS**: Agent handles Edit/MultiEdit while orchestrator manages workflow
5. **Complete comment coverage** - Process ALL comments without filtering
6. **Expected time**: **5-15 minutes** with reliable hybrid coordination (realistic for comprehensive analysis)
### Phase 7: 🔄 Core Workflow - Hybrid Orchestrator Pattern
**Action Steps:**
**IMPLEMENTATION**: Direct orchestration with selective task agent for file operations
**INITIAL STATUS & TIMING SETUP**: Get comprehensive status and initialize timing
### Phase 1: Analysis & Agent Launch
**Action Steps:**
**🎯 Direct Comment Analysis**:
Execute comment processing workflow directly for reliable GitHub operations:
1. Execute /commentfetch to gather all PR comments and issues
2. **INPUT SANITIZATION**: Validate all GitHub comment content for malicious patterns before processing
3. **API RESPONSE VALIDATION**: Verify external API responses against expected schemas and sanitize data
**🛡️ ENHANCED SECURITY IMPLEMENTATION**:
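The sanitization and validation helpers themselves (`sanitize_comment`, `validate_branch_name`) appear later in this document. As a minimal sketch of the schema check that step 3 implies for the /commentfetch output, assuming the `.comments` array layout used later in this workflow (the `id`/`body` field names are illustrative assumptions):

```bash
BRANCH_NAME=$(git branch --show-current | tr -cd '[:alnum:]._-')
COMMENTS_FILE="/tmp/$BRANCH_NAME/comments.json"
# Reject API output that is not valid JSON or lacks the expected top-level array
jq -e '.comments | type == "array"' "$COMMENTS_FILE" > /dev/null || {
  echo "❌ Unexpected comments.json schema"
  exit 1
}
# Field-level spot check (the id/body field names are assumptions for illustration)
jq -e '[.comments[] | has("id") and has("body")] | all' "$COMMENTS_FILE" > /dev/null \
  || echo "⚠️ Some comments are missing expected fields"
```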
### Phase 2: Hybrid Integration & Response Generation
**Action Steps:**
**Direct orchestration with agent result integration**:
**📊 Structured Data Exchange - Agent Result Collection**:
### Phase 3: Verification & Completion (AUTOMATIC)
**Action Steps:**
**Results verified by agent coordination**:
**🚨 MANDATORY FILE JUSTIFICATION PROTOCOL COMPLIANCE**:
1. **Every file modification** must follow FILE JUSTIFICATION PROTOCOL before implementation
2. **Required documentation**: Goal, Modification, Necessity, Integration Proof for each change
3. **Integration verification**: Proof that adding to existing files was attempted first
4. **Protocol adherence**: All changes must follow NEW FILE CREATION PROTOCOL hierarchy
5. **Justification categories**: Classify each change as Essential, Enhancement, or Unnecessary
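A hedged example of the documentation this protocol asks for, written as a shell heredoc purely for illustration; the target file and the change it describes are hypothetical:

```bash
# Hypothetical justification log location and change - illustrative only
cat >> /tmp/file_justifications.md << 'EOF'
## Justification: existing_module.py (modified, no new file created)
- Goal: resolve reviewer-flagged unhandled timeout
- Modification: add explicit timeout and retry to the existing request helper
- Necessity: Essential - fixes a runtime error reported on the PR
- Integration Proof: extended the existing module; no new file was needed
EOF
```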
**Implementation with Protocol Enforcement**:
6. **Priority Order**: Security → Runtime Errors → Test Failures → Style
7. **MANDATORY TOOLS**: Edit/MultiEdit for code changes, NOT GitHub review posting
8. **IMPLEMENTATION REQUIREMENT**: Must modify actual files to resolve issues WITH justification
9. **VERIFICATION**: Use git diff to confirm file changes made AND protocol compliance
10. **Protocol validation**: Each file change must be justified before Edit/MultiEdit usage
11. Resolve merge conflicts and dependency issues (with integration evidence)
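A minimal sketch of the verification gate in steps 9-10, assuming it runs after fixes are applied and before responses are generated (and that fixes are still uncommitted relative to HEAD at that point):

```bash
# Require implementation evidence before any comment can be answered DONE
# (assumption: fixes exist as uncommitted changes relative to HEAD)
if git diff --quiet HEAD; then
  echo "❌ No file changes detected - actionable comments cannot be marked DONE"
  exit 1
fi
git diff --stat HEAD  # evidence of what was actually modified in this run
```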
**Final Completion Steps with Implementation Evidence**:
### Phase 11: ⚡ **HYBRID EXECUTION OPTIMIZATION**
**Action Steps:**
1. Review the reference documentation below and execute the detailed steps.
## 📚 REFERENCE DOCUMENTATION
# /copilot - Fast PR Processing
## 📋 Table of Contents
- [Command Overview & Structure](#-command-overview--structure)
- [Purpose](#-purpose)
- [Architecture Pattern: Hybrid Orchestrator](#-architecture-pattern-hybrid-orchestrator)
- [Three-Phase Workflow](#-three-phase-workflow)
- [Key Composed Commands Integration](#-key-composed-commands-integration)
- [Critical Boundaries](#-critical-boundaries)
- [Performance Targets](#-performance-targets)
- [Mandatory Comment Coverage Tracking](#-mandatory-comment-coverage-tracking)
- [Automatic Timing Protocol](#-automatic-timing-protocol)
- [Core Workflow](#-core-workflow)
- [Core Workflow - Hybrid Orchestrator Pattern](#-core-workflow---hybrid-orchestrator-pattern)
- [Phase 1: Analysis & Agent Launch](#phase-1-analysis--agent-launch)
- [Phase 2: Hybrid Integration & Response Generation](#phase-2-hybrid-integration--response-generation)
- [Phase 3: Verification & Completion](#phase-3-verification--completion-automatic)
- [Agent Boundaries](#-agent-boundaries)
- [Success Criteria](#-success-criteria)
- [Hybrid Execution Optimization](#-hybrid-execution-optimization)
## 📋 COMMAND OVERVIEW & STRUCTURE
### 🎯 Purpose
Ultra-fast PR processing using hybrid orchestration (direct execution + selective task agents) for comprehensive coverage and quality assurance. Targets 2-15 minute completion depending on PR complexity, with 100% reliability.
### 🏗️ Architecture Pattern: Hybrid Orchestrator
**HYBRID DESIGN**: Direct orchestration + specialized agent for maximum reliability
- **Direct Orchestrator**: Handles comment analysis, GitHub operations, workflow coordination
- **copilot-fixpr Agent**: Specialized for file modifications, security fixes, merge conflicts
- **Proven Strategy**: Uses only verified working components, eliminates broken patterns
### 🎛️ Key Composed Commands Integration
- **Status Commands**: `/gstatus` (PR status), `/commentcheck` (coverage verification)
- **GitHub Commands**: `/commentfetch` (comment collection), `/commentreply` (response posting)
- **Agent Commands**: `/fixpr` (via copilot-fixpr agent for file operations)
- **Workflow Commands**: `/pushl` (automated push), `/guidelines` (documentation update)
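A condensed sketch of how these composed commands flow together; each step is specified in detail in the sections below:

```bash
/gstatus        # comprehensive PR status
/commentfetch   # collect comments into /tmp/$BRANCH_NAME/comments.json
# copilot-fixpr agent runs /fixpr and file fixes in parallel (see Phase 1)
/commentreply   # post DONE / NOT DONE responses from responses.json
/commentcheck   # verify 100% comment coverage
/pushl          # push with automated labeling
/guidelines     # update PR guidelines documentation
```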
### 🚨 Critical Boundaries
- **Orchestrator**: Comment processing, GitHub API, workflow coordination
- **Agent**: File modifications, security fixes, technical implementations
- **Never Mixed**: Agent NEVER handles comments, Orchestrator NEVER modifies files
### ⚡ Performance Targets
- **Execution Time**: **Adaptive based on PR complexity**
- **Simple PRs** (≤3 files, ≤50 lines): 2-5 minutes
- **Moderate PRs** (≤10 files, ≤500 lines): 5-10 minutes
- **Complex PRs** (>10 files, >500 lines): 10-15 minutes
- **Auto-scaling timeouts** with complexity detection and appropriate warnings
- **Success Rate**: 100% reliability through proven component usage
- **Coverage**: 100% comment response rate + all actionable issues implemented
## 🚨 Mandatory Comment Coverage Tracking
This command automatically tracks comment coverage and warns about missing responses:
```bash
# COVERAGE TRACKING: Monitor comment response completion (silent unless errors)
```
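The real verification is performed by /commentcheck; as a minimal sketch of what such tracking could look like, assuming the comments.json / responses.json layout used later in this document:

```bash
BRANCH_NAME=$(git branch --show-current | tr -cd '[:alnum:]._-')
TOTAL_COMMENTS=$(jq '.comments | length' "/tmp/$BRANCH_NAME/comments.json")
TOTAL_RESPONSES=$(jq '.responses | length' "/tmp/$BRANCH_NAME/responses.json")
if [ "$TOTAL_RESPONSES" -lt "$TOTAL_COMMENTS" ]; then
  echo "⚠️ Coverage gap: only $TOTAL_RESPONSES of $TOTAL_COMMENTS comments have responses"
fi
```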
## ⏱️ Automatic Timing Protocol

This command silently tracks execution time and only reports if exceeded:

```bash
# Silent timing - only output if >3 minutes
COPILOT_START_TIME=$(date +%s)
```
## 🎯 Purpose

Ultra-fast PR processing using hybrid orchestration with comprehensive coverage and quality assurance. Uses the hybrid orchestrator with the copilot-fixpr agent by default for maximum reliability.

```bash
# CLEANUP FUNCTION: Define error recovery and cleanup mechanisms
cleanup_temp_files() {
local branch_name=$(git branch --show-current | tr -cd '[:alnum:]._-')
local temp_dir="/tmp/$branch_name"
if [ -d "$temp_dir" ]; then
echo "🧹 CLEANUP: Removing temporary files from $temp_dir"
rm -rf "$temp_dir"/* 2>/dev/null || true
fi
# Reset any stuck GitHub operations
echo "🔄 CLEANUP: Resetting any stuck operations"
# Additional cleanup operations as needed
}
# ERROR HANDLER: Trap errors for graceful cleanup
trap 'cleanup_temp_files; echo "🚨 ERROR: Copilot workflow interrupted"; exit 1' ERR
# Get comprehensive PR status first
/gstatus
# Initialize timing for performance tracking (silent unless exceeded)
COPILOT_START_TIME=$(date +%s)
sanitize_comment() {
local input="$1"
local max_length="${2:-10000}" # Default 10KB limit
# Length validation
if [ ${#input} -gt $max_length ]; then
echo "❌ Input exceeds maximum length of $max_length characters" >&2
return 1
fi
# Remove null bytes and escape shell metacharacters
local sanitized=$(echo "$input" | tr -d '\0' | sed 's/[`$\\]/\\&/g' | sed 's/[;&|]/\\&/g')
# Check for suspicious patterns in original input (before escaping)
if echo "$input" | grep -qE '(\$\(|`|<script|javascript:|eval\(|exec\()'; then
echo "⚠️ Potentially malicious content detected and neutralized" >&2
# Continue with sanitized version rather than failing completely
fi
echo "$sanitized"
}
validate_branch_name() {
local branch="$1"
if [[ "$branch" =~ ^[a-zA-Z0-9._-]+$ ]] && [ ${#branch} -le 100 ]; then
return 0
else
echo "❌ Invalid branch name: contains illegal characters or too long" >&2
return 1
fi
}
```
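A usage sketch for the two helpers above, assuming comments.json comes from /commentfetch (iterating compact JSON objects per line is an illustrative choice, not the mandated implementation):

```bash
BRANCH_NAME=$(git branch --show-current | tr -cd '[:alnum:]._-')
validate_branch_name "$BRANCH_NAME" || exit 1
while IFS= read -r comment_json; do
  safe_comment=$(sanitize_comment "$comment_json" 10000) || continue
  # ...analyze "$safe_comment" for actionable issues...
done < <(jq -c '.comments[]' "/tmp/$BRANCH_NAME/comments.json")
```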
- Analyze actionable issues and categorize by type (security, runtime, tests, style)
- Process issue responses and plan implementation strategy
- Handle all GitHub API operations directly (proven to work)
**🚀 Parallel copilot-fixpr Agent Launch with Explicit Synchronization**:
Launch specialized agent for file modifications with structured coordination:
- **FIRST**: Execute `/fixpr` command to resolve merge conflicts and CI failures
- Analyze current GitHub PR status and identify potential improvements
- Review code changes for security vulnerabilities and quality issues
- Implement actual file fixes using Edit/MultiEdit tools with File Justification Protocol
- Focus on code quality, performance optimization, and technical accuracy
- **NEW**: Write completion status to structured result file for orchestrator
**🚨 EXPLICIT SYNCHRONIZATION PROTOCOL**: Eliminates race conditions
```bash
# Secure branch name and setup paths
BRANCH_NAME=$(git branch --show-current | tr -cd '[:alnum:]._-')
AGENT_STATUS="/tmp/$BRANCH_NAME/agent_status.json"
mkdir -p "/tmp/$BRANCH_NAME"
# Agent execution with status tracking
copilot-fixpr-agent > "$AGENT_STATUS" &
AGENT_PID=$!
# Detect PR complexity for appropriate timeout
FILES_CHANGED=$(git diff --name-only origin/main | wc -l)
LINES_CHANGED=$(git diff --numstat origin/main | awk '{added+=$1; deleted+=$2} END {print added+deleted+0}')  # accurate count via --numstat
if [ $FILES_CHANGED -le 3 ] && [ $LINES_CHANGED -le 50 ]; then
TIMEOUT=300 # 5 minutes for simple PRs
elif [ $FILES_CHANGED -le 10 ] && [ $LINES_CHANGED -le 500 ]; then
TIMEOUT=600 # 10 minutes for moderate PRs
else
TIMEOUT=900 # 15 minutes for complex PRs
fi
echo "📊 PR Complexity: $FILES_CHANGED files, $LINES_CHANGED lines (timeout: $((TIMEOUT/60))m)"
# Orchestrator waits for agent completion with adaptive timeout
START_TIME=$(date +%s)
while [ ! -f "$AGENT_STATUS" ]; do
# Check if agent is still running
if ! kill -0 $AGENT_PID 2>/dev/null; then
echo "⚠️ Agent process terminated unexpectedly"
break
fi
CURRENT_TIME=$(date +%s)
if [ $((CURRENT_TIME - START_TIME)) -gt $TIMEOUT ]; then
echo "⚠️ Agent timeout after $((TIMEOUT/60)) minutes"
kill $AGENT_PID 2>/dev/null
break
fi
sleep 10
done
# Verify agent completion before proceeding
if [ -f "$AGENT_STATUS" ]; then
echo "✅ Agent completed successfully, proceeding with response generation"
else
echo "❌ CRITICAL: Agent did not complete successfully"
exit 1
fi
```
**Coordination Protocol**: Explicit synchronization prevents race conditions between orchestrator and agent

```bash
BRANCH_NAME=$(git branch --show-current | tr -cd '[:alnum:]._-')
AGENT_STATUS="/tmp/$BRANCH_NAME/agent_status.json"
if [ -f "$AGENT_STATUS" ]; then
# Parse structured agent results with error handling
FILES_MODIFIED=$(jq -r '.files_modified[]?' "$AGENT_STATUS" 2>/dev/null | head -20 || echo "")
FIXES_APPLIED=$(jq -r '.fixes_applied[]?' "$AGENT_STATUS" 2>/dev/null | head -20 || echo "")
COMMIT_HASH=$(jq -r '.commit_hash?' "$AGENT_STATUS" 2>/dev/null || echo "")
EXECUTION_TIME=$(jq -r '.execution_time?' "$AGENT_STATUS" 2>/dev/null || echo "0")
echo "📊 Agent Results:"
[ -n "$FILES_MODIFIED" ] && echo " Files: $FILES_MODIFIED"
[ -n "$FIXES_APPLIED" ] && echo " Fixes: $FIXES_APPLIED"
[ -n "$COMMIT_HASH" ] && echo " Commit: $COMMIT_HASH"
echo " Time: ${EXECUTION_TIME}s"
else
echo "❌ No agent status file found - using fallback git diff"
FILES_MODIFIED=$(git diff --name-only | head -10)
fi
```
**Agent-Orchestrator Interface**:
- **Agent provides**: Structured JSON with files_modified, fixes_applied, commit_hash, execution_time
- **Orchestrator handles**: Comment processing, response generation, GitHub API operations, coverage tracking
- **Coordination ensures**: Explicit synchronization prevents race conditions and response inconsistencies
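A hedged example of what the agent's status file might contain, matching the fields the orchestrator parses above; the file name, fix descriptions, and values are illustrative:

```bash
# Written by the copilot-fixpr agent on completion (illustrative values only)
cat > "/tmp/$BRANCH_NAME/agent_status.json" << 'EOF'
{
  "status": "completed",
  "files_modified": ["example_module.py"],
  "fixes_applied": ["Resolved merge conflict", "Fixed failing unit test"],
  "commit_hash": "abc1234",
  "execution_time": 142
}
EOF
```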
**Response Generation with Binary Protocol** (MANDATORY ORCHESTRATOR RESPONSIBILITY):
```bash
echo "📝 Generating binary responses.json (DONE/NOT DONE only) with technical implementations"
# 🚨 MANDATORY: Implement CodeRabbit technical suggestions BEFORE response generation
echo "🔧 Implementing CodeRabbit technical suggestions:"
# 1. IMPLEMENTED: Accurate line counting with --numstat (CodeRabbit suggestion)
echo " • Adding git diff --numstat for accurate line counting"
PR_LINES=$(git diff --numstat origin/main | awk '{added+=$1; deleted+=$2} END {print "Added:" added " Deleted:" deleted}')
echo " Lines: $PR_LINES"
# 2. IMPLEMENTED: JSON validation with jq -e (CodeRabbit suggestion)
echo " • Adding jq -e validation for all JSON files"
for json_file in /tmp/$(git branch --show-current)/*.json; do
if [ -f "$json_file" ]; then
echo " Validating: $json_file"
jq -e . "$json_file" > /dev/null || {
echo "❌ CRITICAL: Invalid JSON in $json_file"
exit 1
}
echo " ✅ Valid JSON: $(basename "$json_file")"
fi
done
# 3. IMPLEMENTED: Fix agent status race condition (CodeRabbit concern)
echo " • Implementing proper agent status coordination (not immediate file creation)"
AGENT_STATUS="/tmp/$(git branch --show-current)/agent_status.json"
# Wait for agent completion instead of immediate file creation
while [ ! -f "$AGENT_STATUS" ] || [ "$(jq -r '.status' "$AGENT_STATUS" 2>/dev/null)" != "completed" ]; do
sleep 1
echo " Waiting for agent completion..."
done
echo " ✅ Agent coordination: Proper status synchronization"
# CRITICAL: Generate responses in commentreply.py expected format
# Orchestrator writes: /tmp/$(git branch --show-current)/responses.json
# 🚨 BINARY RESPONSE PROTOCOL: Every comment gets DONE or NOT DONE response only
echo "📋 BINARY PROTOCOL: Analyzing ALL comments for DONE/NOT DONE responses"
# INPUT SANITIZATION: Secure branch name validation to prevent path injection
BRANCH_NAME=$(git branch --show-current | tr -cd '[:alnum:]._-')
if [ -z "$BRANCH_NAME" ]; then
echo "❌ CRITICAL: Invalid or empty branch name"
cleanup_temp_files
return 1
fi
# SECURE PATH CONSTRUCTION: Use sanitized branch name
COMMENTS_FILE="/tmp/$BRANCH_NAME/comments.json"
export RESPONSES_FILE="/tmp/$BRANCH_NAME/responses.json"
# API RESPONSE VALIDATION: Verify comment data exists and is valid JSON (using jq -e)
if [ ! -f "$COMMENTS_FILE" ]; then
echo "❌ CRITICAL: No comment data from commentfetch at $COMMENTS_FILE"
cleanup_temp_files
return 1
fi
# VALIDATION: Verify comments.json is valid JSON before processing (CodeRabbit's jq -e suggestion)
if ! jq -e empty "$COMMENTS_FILE" 2>/dev/null; then
echo "❌ CRITICAL: Invalid JSON in comments file"
cleanup_temp_files
return 1
fi
TOTAL_COMMENTS=$(jq '.comments | length' "$COMMENTS_FILE")
echo "📊 Processing $TOTAL_COMMENTS comments for response generation"
# Generate responses for ALL unresponded comments
# This is ORCHESTRATOR responsibility, not agent responsibility
# 🚨 NEW: MANDATORY FORMAT VALIDATION
echo "🔧 VALIDATING: Response format compatibility with commentreply.py"
# Resolve repository root for downstream tooling
PROJECT_ROOT=$(git rev-parse --show-toplevel)
if [ -z "$PROJECT_ROOT" ]; then
echo "❌ CRITICAL: Unable to resolve project root"
exit 1
fi
# Use dedicated validation script for better maintainability
python3 "$PROJECT_ROOT/.claude/commands/validate_response_format.py" || {
echo "❌ CRITICAL: Invalid response format";
exit 1;
}
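# Illustrative only: a sketch of the rules validate_response_format.py is expected
# to enforce, expressed here as jq checks (the script's exact behavior is not shown
# in this document; the expressions below are assumptions about the required shape)
if [ -f "$RESPONSES_FILE" ]; then
  jq -e '.responses | type == "array"' "$RESPONSES_FILE" > /dev/null \
    || echo "⚠️ responses must be an array"
  jq -e '[.responses[].comment_id | type == "string"] | all' "$RESPONSES_FILE" > /dev/null \
    || echo "⚠️ every comment_id must be a string"
fi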
# Verify responses.json exists and is valid before proceeding
if [ ! -f "$RESPONSES_FILE" ]; then
echo "❌ CRITICAL: responses.json not found at $RESPONSES_FILE"
echo "Orchestrator must generate responses before posting"
exit 1
fi
# 🚨 BINARY RESPONSE TEMPLATE: Only DONE or NOT DONE allowed
echo "📝 Building binary response structure (DONE/NOT DONE with implementation evidence)"
cat > "$RESPONSES_FILE" << 'EOF'
{
"response_protocol": "BINARY_MANDATORY",
"allowed_responses": ["DONE", "NOT DONE"],
"template_done": "✅ DONE: [specific implementation] - File: [file:line]",
"template_not_done": "❌ NOT DONE: [specific reason why not feasible]",
"implementation_evidence_required": true,
"no_other_responses_acceptable": true,
"responses": []
}
EOF
# Validate responses.json with jq -e (CodeRabbit suggestion)
jq -e . "$RESPONSES_FILE" > /dev/null || {
echo "❌ CRITICAL: Invalid JSON in responses.json"
exit 1
}
echo "📤 Executing /commentreply with BINARY protocol (DONE/NOT DONE only)"
/commentreply || {
echo "🚨 CRITICAL: Comment response failed"
cleanup_temp_files
return 1
}
echo "🔍 Verifying coverage via /commentcheck"
/commentcheck || {
echo "🚨 CRITICAL: Comment coverage failed"
cleanup_temp_files
return 1
}
echo "✅ Binary comment responses posted successfully (every comment: DONE or NOT DONE)"
```
Direct execution of /commentreply with implementation details from agent file changes for guaranteed GitHub posting
```bash
echo "📋 COPILOT EXECUTION EVIDENCE:"
echo "🔧 FILES MODIFIED:"
git diff --name-only | sed 's/^/ - /'
echo "📊 CHANGE SUMMARY (using CodeRabbit's --numstat):"
git diff --numstat origin/main
echo "📈 TRADITIONAL STAT:"
git diff --stat
echo "📋 IMPLEMENTATION VERIFICATION:"
echo " • Line counting: git diff --numstat - IMPLEMENTED ✅"
echo " • JSON validation: jq -e validation - IMPLEMENTED ✅"
echo " • Agent status coordination: proper file handling - IMPLEMENTED ✅"
echo " • Binary response protocol: DONE/NOT DONE only - IMPLEMENTED ✅"
echo " • Every comment response: binary DONE/NOT DONE with explanation - IMPLEMENTED ✅"
/pushl || {
echo "🚨 PUSH FAILED: PR not updated"
echo "🔧 RECOVERY: Attempting git status check"
git status
cleanup_temp_files
return 1
}
```
**Coverage Tracking (MANDATORY GATE):**
```bash
# HARD VERIFICATION GATE with RECOVERY - Must pass before proceeding
echo "🔍 MANDATORY: Verifying 100% comment coverage"
if ! /commentcheck; then
echo "🚨 CRITICAL: Comment coverage failed - attempting recovery"
echo "🔧 RECOVERY: Re-running comment response workflow"
# Attempt recovery by re-running comment responses
/commentreply || {
echo "🚨 CRITICAL: Recovery failed - manual intervention required";
echo "🔍 DIAGNOSTIC: Check /tmp/$(git branch --show-current | tr -cd '[:alnum:]._-')/responses.json format";
echo "🔍 DIAGNOSTIC: Verify GitHub API permissions and rate limits";
exit 1;
}
# Re-verify after recovery attempt
/commentcheck || {
echo "🚨 CRITICAL: Comment coverage still failing after recovery"
cleanup_temp_files
return 1
}
fi
echo "✅ Comment coverage verification passed - proceeding with completion"
# 🎯 Adaptive Performance Tracking
# Detect PR complexity for realistic timing expectations (if not done earlier)
if [ -z "$FILES_CHANGED" ]; then
FILES_CHANGED=$(git diff --name-only origin/main | wc -l)
LINES_CHANGED=$(git diff --numstat origin/main | awk '{added+=$1; deleted+=$2} END {print added+deleted+0}')  # accurate count via --numstat
fi
# Set complexity-based performance targets
if [ $FILES_CHANGED -le 3 ] && [ $LINES_CHANGED -le 50 ]; then
COMPLEXITY="simple"
TARGET_TIME=300 # 5 minutes
elif [ $FILES_CHANGED -le 10 ] && [ $LINES_CHANGED -le 500 ]; then
COMPLEXITY="moderate"
TARGET_TIME=600 # 10 minutes
else
COMPLEXITY="complex"
TARGET_TIME=900 # 15 minutes
fi
echo "📊 PR Complexity: $COMPLEXITY ($FILES_CHANGED files, $LINES_CHANGED lines)"
echo "🎯 Target time: $((TARGET_TIME / 60)) minutes"
# Calculate and report timing with complexity-appropriate targets
COPILOT_END_TIME=$(date +%s)
COPILOT_DURATION=$((COPILOT_END_TIME - COPILOT_START_TIME))
if [ $COPILOT_DURATION -gt $TARGET_TIME ]; then
echo "⚠️ Performance exceeded: $((COPILOT_DURATION / 60))m $((COPILOT_DURATION % 60))s (target: $((TARGET_TIME / 60))m for $COMPLEXITY PR)"
else
echo "✅ Performance target met: $((COPILOT_DURATION / 60))m $((COPILOT_DURATION % 60))s (under $((TARGET_TIME / 60))m target)"
fi
# SUCCESS: Clean up and complete
echo "✅ COPILOT WORKFLOW COMPLETED SUCCESSFULLY"
cleanup_temp_files
/guidelines
```
## 🚨 Agent Boundaries

**copilot-fixpr Agent (EXCLUSIVE RESPONSIBILITIES)**:
- Execute the /fixpr command to resolve merge conflicts and CI failures
- Implement file modifications, security fixes, and technical improvements

🚨 CRITICAL AGENT BOUNDARY: The copilot-fixpr agent must NEVER attempt to:
- Process comments, post GitHub responses, or perform GitHub API operations (these remain orchestrator responsibilities)

**Direct Orchestrator (EXCLUSIVE RESPONSIBILITIES)**:
- Comment analysis, response generation, and GitHub API operations
- Workflow coordination and comment coverage tracking

**FAILURE CONDITIONS**:
- responses.json missing or in an invalid format
- Comment coverage below 100% after recovery attempts
- /pushl fails to update the PR
The orchestrator MUST generate responses.json in this exact format:

```json
{
  "responses": [
    {
      "comment_id": "2357534669",  // STRING format required
      "reply_text": "[AI responder] ✅ **Issue Fixed**...",
      "in_reply_to": "optional_parent_id"
    }
  ]
}
```
**Format requirements**:
- comment_id MUST be a STRING (not an integer)
- reply_text MUST contain a substantial technical response
- The responses array MUST contain an entry for each actionable comment
- File path: /tmp/{branch_name}/responses.json
- Each entry in the responses array carries comment_id and reply_text
- commentreply.py matches entries with str(response_item.get("comment_id")) == comment_id
- Replies use the [AI responder] ✅ **Issue Fixed** or ❌ **Not Done** prefixes
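A hedged sketch of appending one entry in this format with jq, forcing comment_id to a string as required; the comment ID, reply text, and file reference below are illustrative placeholders:

```bash
RESPONSES_FILE="/tmp/$(git branch --show-current | tr -cd '[:alnum:]._-')/responses.json"
jq --arg id "2357534669" \
   --arg text "[AI responder] ✅ **Issue Fixed**: null check added - File: example.py:42" \
   '.responses += [{"comment_id": $id, "reply_text": $text}]' \
   "$RESPONSES_FILE" > "$RESPONSES_FILE.tmp" && mv "$RESPONSES_FILE.tmp" "$RESPONSES_FILE"
```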