Run the complete regression test suite, analyze all results (successes and failures), and manage GitHub issues automatically.

Install via the plugin marketplace:

/plugin marketplace add laird/agents
/plugin install autofix@plugin-marketplace
# Run full regression test suite
/full-regression-test
This command reads configuration from CLAUDE.md:

## Automated Testing & Issue Management

### Regression Test Suite
```bash
npm test
```

### Build Verification
```bash
npm run build
```

### Test Reports
- Unit Tests:
- E2E Tests:
- Location: `docs/test/regression-reports/`

If no configuration is found, defaults are used.
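The script also reads an optional `### E2E Tests Only` section from the same CLAUDE.md block; if it is absent, E2E tests are skipped. A minimal sketch of that section (the `npx playwright test` command is an assumption; substitute your project's E2E runner):

### E2E Tests Only
```bash
npx playwright test
```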
## Instructions
```bash
# Timestamp for this test run
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
# Load configuration from CLAUDE.md
if [ -f "CLAUDE.md" ]; then
echo "๐ Loading configuration from CLAUDE.md"
# Check if autofix configuration exists
if ! grep -q "## Automated Testing & Issue Management" CLAUDE.md; then
echo "โ ๏ธ No autofix configuration found in CLAUDE.md"
echo "๐ Adding autofix configuration section to CLAUDE.md..."
# Append autofix configuration to CLAUDE.md
cat >> CLAUDE.md << 'AUTOFIX_CONFIG'

## Automated Testing & Issue Management

This section configures the `/full-regression-test` command.

### Regression Test Suite
```bash
npm test
```

### Build Verification
```bash
npm run build
```

### Test Reports
- Unit Tests:
- E2E Tests:
- Location: `docs/test/regression-reports/`
AUTOFIX_CONFIG

echo "✅ Added autofix configuration to CLAUDE.md - please update with project-specific details"
fi
if grep -q "### Regression Test Suite" CLAUDE.md; then TEST_COMMAND=$(sed -n "/### Regression Test Suite/,/^###/{/^```bash$/n;p;}" CLAUDE.md | grep -v "^#" | grep -v "^```" | grep -v "^$" | head -1) echo "โ Regression test command: $TEST_COMMAND" else TEST_COMMAND="npm test" echo "โ ๏ธ No regression test command found, using default: $TEST_COMMAND" fi
if grep -q "### Build Verification" CLAUDE.md; then BUILD_COMMAND=$(sed -n "/### Build Verification/,/^###/{/^```bash$/n;p;}" CLAUDE.md | grep -v "^#" | grep -v "^```" | grep -v "^$" | head -1) echo "โ Build command: $BUILD_COMMAND" else BUILD_COMMAND="npm run build" echo "โ ๏ธ No build command found, using default: $BUILD_COMMAND" fi
if grep -q "Location:" CLAUDE.md && grep "Location:" CLAUDE.md | grep -q "regression-reports"; then
REPORT_DIR=$(grep "Location:" CLAUDE.md | sed 's/.*\([^]regression-reports[^]*\)./\1/' | head -1)
echo "โ
Report directory: $REPORT_DIR"
else
REPORT_DIR="docs/test/regression-reports"
echo "โ ๏ธ No report directory found, using default: $REPORT_DIR"
fi
else
echo "โ ๏ธ No CLAUDE.md found in project, using defaults"
TEST_COMMAND="npm test"
BUILD_COMMAND="npm run build"
REPORT_DIR="docs/test/regression-reports"
fi
mkdir -p "$REPORT_DIR"
REPORT_FILE="$REPORT_DIR/regression-${TIMESTAMP}.md"
echo "" echo "โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ" echo " Full Regression Test Suite" echo " Started: $(date)" echo "โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ" echo ""
cat > "$REPORT_FILE" << EOF
Date: $(date) Timestamp: ${TIMESTAMP}
EOF
## Step 1: Run Build Verification
```bash
echo "๐ง [1/3] Running Build Verification..."
BUILD_RESULTS="/tmp/build-results-${TIMESTAMP}.log"
if $BUILD_COMMAND 2>&1 | tee "$BUILD_RESULTS"; then
BUILD_STATUS="โ
PASSED"
BUILD_EXIT=0
echo "โ
Build successful"
else
BUILD_STATUS="โ FAILED"
BUILD_EXIT=1
echo "โ Build failed"
fi
cat >> "$REPORT_FILE" << EOF
### Build Verification
- **Status**: $BUILD_STATUS
- **Command**: \`$BUILD_COMMAND\`
EOF
echo ""
echo "๐งช [2/3] Running Unit Tests..."
UNIT_RESULTS="/tmp/unit-test-results-${TIMESTAMP}.log"
if $TEST_COMMAND 2>&1 | tee "$UNIT_RESULTS"; then
UNIT_STATUS="โ
PASSED"
UNIT_EXIT=0
else
UNIT_STATUS="โ FAILED"
UNIT_EXIT=1
fi
# Extract unit test stats (Jest format)
UNIT_PASSED=$(grep -oP 'Tests:.*?(\d+)\s+passed' "$UNIT_RESULTS" | grep -oP '\d+' | tail -1 || echo "0")
UNIT_FAILED=$(grep -oP 'Tests:.*?(\d+)\s+failed' "$UNIT_RESULTS" | grep -oP '\d+' | head -1 || echo "0")
UNIT_TOTAL=$(grep -oP 'Tests:.*?(\d+)\s+total' "$UNIT_RESULTS" | grep -oP '\d+' | tail -1 || echo "0")
echo "Unit Tests: ${UNIT_PASSED}/${UNIT_TOTAL} passed"
cat >> "$REPORT_FILE" << EOF
### Unit Tests
- **Status**: $UNIT_STATUS
- **Total**: $UNIT_TOTAL
- **Passed**: $UNIT_PASSED
- **Failed**: $UNIT_FAILED
EOF
echo ""
echo "๐ [3/3] Running E2E Tests..."
E2E_RESULTS="/tmp/e2e-test-results-${TIMESTAMP}.log"
# Check if E2E tests are configured
if grep -q "### E2E Tests Only" CLAUDE.md 2>/dev/null; then
E2E_COMMAND=$(sed -n "/### E2E Tests Only/,/^###/{/^\`\`\`bash$/n;p;}" CLAUDE.md | grep -v "^#" | grep -v "^\`\`\`" | grep -v "^$" | head -1)
echo "โ
E2E test command: $E2E_COMMAND"
if $E2E_COMMAND 2>&1 | tee "$E2E_RESULTS"; then
E2E_STATUS="โ
PASSED"
E2E_EXIT=0
else
E2E_STATUS="โ FAILED"
E2E_EXIT=1
fi
# Parse E2E results
E2E_PASSED=$(grep -oP '\d+\s+passed' "$E2E_RESULTS" | grep -oP '^\d+' || echo "0")
E2E_FAILED=$(grep -oP '\d+\s+failed' "$E2E_RESULTS" | grep -oP '^\d+' || echo "0")
E2E_TOTAL=$((E2E_PASSED + E2E_FAILED))
echo "E2E Tests: ${E2E_PASSED}/${E2E_TOTAL} passed"
cat >> "$REPORT_FILE" << EOF
### E2E Tests
- **Status**: $E2E_STATUS
- **Total**: $E2E_TOTAL
- **Passed**: $E2E_PASSED
- **Failed**: $E2E_FAILED
EOF
else
echo "โญ๏ธ E2E tests not configured, skipping"
E2E_EXIT=0
E2E_FAILED=0
fi
```

After running tests, analyze ALL results (successes and failures) and manage GitHub issues accordingly. Failures are assigned priority based on impact:

- **P0**: authentication, login, security, crashes
- **P1**: create, delete, update, save operations
- **P2**: filtering, sorting, search, display
- **P3**: everything else

```bash
echo ""
echo "📊 Analyzing test results and managing GitHub issues..."
# Fetch ALL test-failure issues (open and closed for analysis)
gh issue list --state open --label "test-failure" --json number,title,labels,body --limit 200 > /tmp/gh-open-issues.json 2>/dev/null || echo "[]" > /tmp/gh-open-issues.json
gh issue list --state all --json number,title,labels,body --limit 200 > /tmp/gh-all-issues.json 2>/dev/null || echo "[]" > /tmp/gh-all-issues.json
# Count results
TOTAL_FAILURES=$((UNIT_FAILED + E2E_FAILED))
TOTAL_PASSED=$((UNIT_PASSED + E2E_PASSED))
ISSUES_CREATED=0
ISSUES_UPDATED=0
ISSUES_CLOSED=0
cat >> "$REPORT_FILE" << EOF
---
## Test Analysis
- **Total Passed**: $TOTAL_PASSED
- **Total Failed**: $TOTAL_FAILURES
EOF
```

Check if any previously-failing tests now pass and close their issues:

```bash
echo ""
echo "🔄 Checking for resolved issues (tests that now pass)..."
# Extract passing test names
grep -E "(PASS|โ|โ)" "$UNIT_RESULTS" 2>/dev/null | head -50 > /tmp/passing-tests.txt || true
if [ -f "$E2E_RESULTS" ]; then
grep -E "(passed|โ|โ)" "$E2E_RESULTS" 2>/dev/null >> /tmp/passing-tests.txt || true
fi
# Check each open test-failure issue to see if its test now passes
cat /tmp/gh-open-issues.json | python3 -c "
import json, sys

issues = json.load(sys.stdin)
try:
    passing_tests = open('/tmp/passing-tests.txt').read().lower()
except OSError:
    passing_tests = ''
for issue in issues:
    title = issue.get('title', '').lower()
    number = issue.get('number')
    # Extract test name from title (remove 'Test failure:' or 'E2E failure:' prefix)
    test_name = title.replace('test failure:', '').replace('e2e failure:', '').strip()[:40]
    if test_name and test_name in passing_tests:
        print(f'{number}|{title}')
" 2>/dev/null > /tmp/resolved-issues.txt || true
# Close resolved issues
if [ -s /tmp/resolved-issues.txt ]; then
echo "Found issues to close:"
while IFS='|' read -r issue_num issue_title; do
if [ -n "$issue_num" ]; then
echo " โ
Closing #$issue_num: $issue_title"
gh issue comment "$issue_num" --body "## โ
Test Now Passing
**Date**: $(date)
**Report**: \`$REPORT_FILE\`
This test is now passing in the regression test suite.
### Test Results Summary
- Unit Tests: ${UNIT_PASSED}/${UNIT_TOTAL} passed
- E2E Tests: ${E2E_PASSED:-0}/${E2E_TOTAL:-0} passed
Closing this issue as resolved.
🤖 Auto-closed by /full-regression-test" 2>/dev/null || true
gh issue close "$issue_num" --comment "Verified fixed - test now passes" 2>/dev/null || true
ISSUES_CLOSED=$((ISSUES_CLOSED + 1))
fi
done < /tmp/resolved-issues.txt
cat >> "$REPORT_FILE" << EOF
### Resolved Issues (Closed)
EOF
while IFS='|' read -r issue_num issue_title; do
echo "- #$issue_num: $issue_title" >> "$REPORT_FILE"
done < /tmp/resolved-issues.txt
echo "" >> "$REPORT_FILE"
else
echo " No resolved issues found"
fi
echo ""
echo "๐ Processing test failures..."
if [ "$TOTAL_FAILURES" -gt 0 ]; then
echo "Found $TOTAL_FAILURES test failure(s)"
cat >> "$REPORT_FILE" << EOF
### Test Failures
Total failures: $TOTAL_FAILURES
EOF
# Process unit test failures
if [ "$UNIT_FAILED" -gt 0 ]; then
echo "Processing $UNIT_FAILED unit test failure(s)..."
# Extract failing test names and create/update issues
grep -E "(FAIL|โ|โ)" "$UNIT_RESULTS" 2>/dev/null | head -20 | while IFS= read -r line; do
TEST_NAME=$(echo "$line" | sed -E 's/.*?(FAIL|โ|โ)\s+//' | head -c 80)
if [ -n "$TEST_NAME" ]; then
# Determine priority
if echo "$TEST_NAME" | grep -qiE "auth|login|security|crash"; then
PRIORITY="P0"
elif echo "$TEST_NAME" | grep -qiE "create|delete|update|save"; then
PRIORITY="P1"
elif echo "$TEST_NAME" | grep -qiE "filter|sort|search|display"; then
PRIORITY="P2"
else
PRIORITY="P3"
fi
# Check if issue exists (search in all issues, not just open)
EXISTING=$(cat /tmp/gh-all-issues.json | python3 -c "
import json, sys

issues = json.load(sys.stdin)
test = '$TEST_NAME'.lower()[:30]
for i in issues:
    if test in i.get('title', '').lower():
        print(i['number'])
        break
" 2>/dev/null || echo "")
if [ -n "$EXISTING" ]; then
echo " ๐ Updating issue #$EXISTING: $TEST_NAME"
gh issue comment "$EXISTING" --body "## ๐ด Regression Test Failure
**Date**: $(date)
**Test**: \`$TEST_NAME\`
**Priority**: $PRIORITY
**Report**: \`$REPORT_FILE\`
### Test Results Context
- Unit Tests: ${UNIT_PASSED}/${UNIT_TOTAL} passed (${UNIT_FAILED} failed)
- Build Status: $BUILD_STATUS
This test failed during automated regression testing.
🤖 Updated by /full-regression-test" 2>/dev/null || true
# Reopen if closed
gh issue reopen "$EXISTING" 2>/dev/null || true
ISSUES_UPDATED=$((ISSUES_UPDATED + 1))
else
echo " โ Creating issue: $TEST_NAME"
gh issue create \
--title "Test failure: ${TEST_NAME:0:80}" \
--body "## Test Failure
**Test**: \`$TEST_NAME\`
**Priority**: $PRIORITY
**Detected**: $(date)
**Report**: \`$REPORT_FILE\`
### Test Results Context
- Unit Tests: ${UNIT_PASSED}/${UNIT_TOTAL} passed (${UNIT_FAILED} failed)
- Build Status: $BUILD_STATUS
This failure was detected during automated regression testing.
**Next Steps**:
1. Reproduce the failure locally
2. Identify root cause
3. Implement fix
4. Verify with \`/full-regression-test\`
🤖 Created by /full-regression-test" \
--label "bug,test-failure,$PRIORITY" 2>/dev/null || true
ISSUES_CREATED=$((ISSUES_CREATED + 1))
fi
fi
done
fi
# Process E2E test failures
if [ "$E2E_FAILED" -gt 0 ]; then
echo "Processing $E2E_FAILED E2E test failure(s)..."
grep -E "(โ|โ|FAILED)" "$E2E_RESULTS" 2>/dev/null | head -20 | while IFS= read -r line; do
TEST_NAME=$(echo "$line" | sed -E 's/.*?(โ|โ|FAILED)\s*//' | head -c 80)
if [ -n "$TEST_NAME" ]; then
# Determine priority based on test content
if echo "$TEST_NAME" | grep -qiE "auth|login|security|checkout"; then
PRIORITY="P0"
elif echo "$TEST_NAME" | grep -qiE "create|delete|submit|save"; then
PRIORITY="P1"
else
PRIORITY="P2"
fi
# Check if issue exists
EXISTING=$(cat /tmp/gh-all-issues.json | python3 -c "
import json, sys

issues = json.load(sys.stdin)
test = '$TEST_NAME'.lower()[:30]
for i in issues:
    if test in i.get('title', '').lower():
        print(i['number'])
        break
" 2>/dev/null || echo "")
if [ -n "$EXISTING" ]; then
echo " ๐ Updating issue #$EXISTING: $TEST_NAME"
gh issue comment "$EXISTING" --body "## ๐ด E2E Regression Test Failure
**Date**: $(date)
**Test**: \`$TEST_NAME\`
**Priority**: $PRIORITY
**Report**: \`$REPORT_FILE\`
### Test Results Context
- E2E Tests: ${E2E_PASSED}/${E2E_TOTAL} passed (${E2E_FAILED} failed)
- Build Status: $BUILD_STATUS
This E2E test failed during automated regression testing.
🤖 Updated by /full-regression-test" 2>/dev/null || true
# Reopen if closed
gh issue reopen "$EXISTING" 2>/dev/null || true
ISSUES_UPDATED=$((ISSUES_UPDATED + 1))
else
echo " โ Creating issue: $TEST_NAME"
gh issue create \
--title "E2E failure: ${TEST_NAME:0:80}" \
--body "## E2E Test Failure
**Test**: \`$TEST_NAME\`
**Priority**: $PRIORITY
**Detected**: $(date)
**Report**: \`$REPORT_FILE\`
### Test Results Context
- E2E Tests: ${E2E_PASSED}/${E2E_TOTAL} passed (${E2E_FAILED} failed)
- Build Status: $BUILD_STATUS
This E2E test failure was detected during automated regression testing.
**Next Steps**:
1. Reproduce the failure locally
2. Check for environment/timing issues
3. Implement fix
4. Verify with \`/full-regression-test\`
🤖 Created by /full-regression-test" \
--label "bug,test-failure,e2e,$PRIORITY" 2>/dev/null || true
ISSUES_CREATED=$((ISSUES_CREATED + 1))
fi
fi
done
fi
else
echo "โ
All tests passed!"
cat >> "$REPORT_FILE" << EOF
### Result
โ
**All tests passed!**
No test failures detected during this regression test run.
EOF
fi
echo ""
echo "๐ Issue Management Summary:"
echo " - Issues created: $ISSUES_CREATED"
echo " - Issues updated: $ISSUES_UPDATED"
echo " - Issues closed: $ISSUES_CLOSED"
echo ""
echo "โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ"
echo " Regression Test Complete"
echo "โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ"
echo ""
echo " Build: $BUILD_STATUS"
echo " Unit Tests: ${UNIT_PASSED}/${UNIT_TOTAL} passed"
if [ -n "$E2E_TOTAL" ] && [ "$E2E_TOTAL" -gt 0 ]; then
echo " E2E Tests: ${E2E_PASSED}/${E2E_TOTAL} passed"
fi
echo ""
echo " Report: $REPORT_FILE"
echo ""
# Finalize report
cat >> "$REPORT_FILE" << EOF
---
## Summary
| Category | Status |
|----------|--------|
| Build | $BUILD_STATUS |
| Unit Tests | ${UNIT_PASSED}/${UNIT_TOTAL} passed |
| E2E Tests | ${E2E_PASSED:-N/A}/${E2E_TOTAL:-N/A} passed |
### GitHub Issue Management
| Action | Count |
|--------|-------|
| Issues Created | $ISSUES_CREATED |
| Issues Updated | $ISSUES_UPDATED |
| Issues Closed (Resolved) | $ISSUES_CLOSED |
---
🤖 Generated by /full-regression-test at $(date)
EOF
# Set exit code
if [ "$BUILD_EXIT" -ne 0 ] || [ "$UNIT_EXIT" -ne 0 ] || [ "$E2E_EXIT" -ne 0 ]; then
echo "โ ๏ธ Some tests failed. Check GitHub issues for details."
exit 1
else
echo "โ
All tests passed!"
exit 0
fi
```

After running, this command produces:
- A timestamped markdown report in the configured report directory
- GitHub issues created, updated, or closed to match the current test results

This command is automatically invoked by /fix-github when no priority bug issues exist. After running:
- If failures were found, /fix-github picks them up
- If all tests pass, /fix-github moves to the enhancement phase

You can also run this command standalone to check project health.

🤖 Ready to run full regression test suite.