Optimizes bash scripts for performance, reducing execution time and resource usage.
Install:
```
/plugin marketplace add JosiahSiegel/claude-plugin-marketplace
/plugin install bash-master@claude-plugin-marketplace
```

Usage: `<script.sh> [target: speed|memory|both]`

MANDATORY: Always Use Backslashes on Windows for File Paths
When using Edit or Write tools on Windows, you MUST use backslashes (\) in file paths, NOT forward slashes (/).
Analyze and optimize bash scripts for maximum performance, reducing execution time and resource usage while maintaining correctness and readability.
Before:
```bash
# Spawns a subshell for each command
result=$(echo "$var" | tr 'a-z' 'A-Z')
count=$(cat file.txt | wc -l)
basename=$(basename "$path")
```
After:
```bash
# Pure bash - no subshells
result="${var^^}"
count=0; while IFS= read -r _; do ((count++)); done < file.txt
basename="${path##*/}"
```
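A quick sanity check that the pure-bash forms produce the same results as the external commands they replace (the variable values here are illustrative):

```shell
#!/usr/bin/env bash
var="hello world"
path="/usr/local/bin/tool"

# ${var^^} (bash 4+) matches the tr pipeline's output
[ "${var^^}" = "$(echo "$var" | tr 'a-z' 'A-Z')" ] && echo "uppercase: match"

# ${path##*/} matches the external basename command
[ "${path##*/}" = "$(basename "$path")" ] && echo "basename: match"
```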
| External | Bash Equivalent |
|---|---|
| `basename "$p"` | `"${p##*/}"` |
| `dirname "$p"` | `"${p%/*}"` |
| `echo "$s" \| tr a-z A-Z` | `"${s^^}"` |
| `expr $a + $b` | `$((a + b))` |
| `cat file` | `< file` or `mapfile` |
| `cut -d: -f1` | `"${var%%:*}"` |
| `sed 's/a/b/g'` | `"${var//a/b}"` |
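Each row of the table can be exercised directly in the shell; a small demo with illustrative values:

```shell
#!/usr/bin/env bash
p="/etc/nginx/nginx.conf"
s="hello"
a=2 b=3
line="root:x:0:0"

echo "${p##*/}"       # basename           -> nginx.conf
echo "${p%/*}"        # dirname            -> /etc/nginx
echo "${s^^}"         # tr a-z A-Z (bash 4+) -> HELLO
echo "$((a + b))"     # expr               -> 5
echo "${line%%:*}"    # cut -d: -f1        -> root
echo "${line//0/9}"   # sed 's/0/9/g'      -> root:x:9:9
```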
Before:
```bash
# Pipeline runs the loop in a subshell; variable changes are lost
count=0
cat file.txt | while read -r line; do
  ((count++))
done
echo "$count"  # Always 0!
```
After:
```bash
# Redirection instead of a pipe - no subshell, variables preserved
count=0
while IFS= read -r line; do
  ((count++))
done < file.txt
echo "$count"  # Correct value
```
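When a pipeline cannot be avoided, bash 4.2+ offers `shopt -s lastpipe`, which runs the final pipeline stage in the current shell (it takes effect only when job control is off, which is the norm in non-interactive scripts), so variables survive:

```shell
#!/usr/bin/env bash
shopt -s lastpipe   # bash 4.2+; effective when job control is off (normal for scripts)

count=0
printf '%s\n' alpha beta gamma | while IFS= read -r _; do
  ((count++))
done
echo "$count"   # 3 - the loop ran in the current shell, not a subshell
```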
Before:
```bash
# Slow and fragile: relies on word splitting
files="file1.txt file2.txt file3.txt"
for f in $files; do
  process "$f"
done
```
After:
```bash
# Fast and safe: array iteration
files=("file1.txt" "file2.txt" "file3.txt")
for f in "${files[@]}"; do
  process "$f"
done
```
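Arrays are not only faster to iterate; they also keep filenames with spaces intact, which the word-splitting version silently breaks. A minimal illustration (filenames are made up):

```shell
#!/usr/bin/env bash
files=("monthly report.txt" "annual report.txt")

count=0
for f in "${files[@]}"; do
  ((count++))              # iterates exactly once per filename
done
echo "array items: $count"   # 2

str="monthly report.txt annual report.txt"
broken=0
for f in $str; do
  ((broken++))             # unquoted expansion splits on every space
done
echo "split tokens: $broken" # 4
```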
Before:
```bash
# Three full reads of the same file
grep "pattern1" file.txt > result1.txt
grep "pattern2" file.txt > result2.txt
grep "pattern3" file.txt > result3.txt
```
After:
```bash
# Single pass with multiple outputs
while IFS= read -r line; do
  [[ "$line" == *pattern1* ]] && echo "$line" >> result1.txt
  [[ "$line" == *pattern2* ]] && echo "$line" >> result2.txt
  [[ "$line" == *pattern3* ]] && echo "$line" >> result3.txt
done < file.txt
```
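For large files, the same single-pass idea is usually faster in awk than in a bash read loop, since awk avoids per-line bash overhead (file and pattern names here are illustrative, and the sample input is generated to keep the example self-contained):

```shell
#!/usr/bin/env bash
# Build a small sample input
printf '%s\n' "x pattern1" "y pattern2" "z pattern1" > sample.txt

# One pass, one process, multiple outputs
awk '/pattern1/ { print > "out1.txt" }
     /pattern2/ { print > "out2.txt" }' sample.txt

wc -l < out1.txt   # 2 lines matched pattern1
```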
Before:
```bash
# Sequential processing
for file in *.txt; do
  process_file "$file"
done
```
After:
```bash
# Parallel with GNU Parallel
parallel -j "$(nproc)" process_file ::: *.txt

# Or with xargs (process_file must be an executable, not a shell function)
printf '%s\0' *.txt | xargs -0 -P "$(nproc)" -I {} process_file {}

# Or with bash job control (wait -n requires bash 4.3+)
max_jobs=4 running=0
for file in *.txt; do
  process_file "$file" &
  if ((++running >= max_jobs)); then
    wait -n
    ((running--))
  fi
done
wait
```
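A runnable sketch of the job-control variant, with short sleeps standing in for real work (`wait -n` needs bash 4.3+):

```shell
#!/usr/bin/env bash
max_jobs=2 running=0
for i in 1 2 3 4; do
  { sleep 0.1; } &        # stand-in for process_file "$file"
  if (( ++running >= max_jobs )); then
    wait -n               # block until any one background job exits
    (( running-- ))
  fi
done
wait                      # drain the remaining jobs
echo "done: 4 jobs, at most $max_jobs concurrent"
```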
Before:
```bash
# Temporary files to create and clean up
sort file1.txt > /tmp/sorted1
sort file2.txt > /tmp/sorted2
diff /tmp/sorted1 /tmp/sorted2
rm /tmp/sorted1 /tmp/sorted2
```
After:
```bash
# Process substitution - no temp files
diff <(sort file1.txt) <(sort file2.txt)
```
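Process substitution composes with any command that expects file arguments; for example, `comm` (which requires sorted input) can compare two unsorted sources in one line:

```shell
#!/usr/bin/env bash
# Lines common to both inputs, no intermediate files
common=$(comm -12 <(printf 'c\na\nb\n' | sort) <(printf 'b\nd\nc\n' | sort))
echo "$common"
# b
# c
```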
Before:
```bash
# Pipe spawns a subshell just to feed stdin
echo "$var" | command
```
After:
```bash
# Here-string - no pipe, no extra process
command <<< "$var"
```
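Here-strings work anywhere a command reads stdin; a small demo (variable contents are illustrative):

```shell
#!/usr/bin/env bash
var="one two three"

# Parse fields without a pipe or subshell
read -r first rest <<< "$var"
echo "$first"              # one

# Feed any stdin-reading command the same way
words=$(wc -w <<< "$var")
echo "$words"              # 3 (whitespace-padded on some systems)
```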
When optimizing a script:
```bash
# Time execution
time ./script.sh

# Detailed timing with bash
TIMEFORMAT='real: %R, user: %U, sys: %S'
time ./script.sh

# Line-by-line profiling (PS4 must be exported to reach the child shell)
export PS4='+ $(date +%s.%N) ${BASH_SOURCE}:${LINENO}: '
bash -x ./script.sh

# Trace with timestamps (ts is from the moreutils package)
bash -x script.sh 2>&1 | ts -s '%.s'
```
General optimization:
```
/bash-optimize deploy.sh
```
Focus on speed:
```
/bash-optimize process.sh target: speed
```
Focus on memory:
```
/bash-optimize large-file-handler.sh target: memory
```
Both speed and memory:
```
/bash-optimize etl-script.sh target: both
```
## Optimization Report: <script>
### Performance Profile
- Original execution time: X.XXs
- Bottlenecks identified: N
### Optimizations Applied
1. [IMPACT: HIGH] Replace `cat | grep` with direct grep
- Before: 150ms
- After: 20ms
- Improvement: 87%
2. [IMPACT: MEDIUM] Use bash string manipulation
- Replaced: external `basename`
- With: parameter expansion
### Summary
- Total optimizations: N
- Estimated speedup: X.Xx faster
- Subshells eliminated: N
- External commands replaced: N
### Optimized Script
[Full optimized script or diff]
I will transform slow bash scripts into high-performance automation using proven optimization techniques.