The single most expensive operation in a shell script is forking. Any operation that involves a fork, such as command substitution, will be 1-3 orders of magnitude slower than one that doesn't.
For example, here's a naive approach to a loop that reads a bunch of generated filenames of the form file-1234 and strips the file- prefix with sed, costing three forks per iteration (the command substitution plus a two-stage pipeline):
$ time printf "file-%s\n" {1..10000} | while read line; do n=$(echo "$line" | sed -e "s/.*-//"); done

real    0m46.847s
Here's a loop that does the same using parameter expansion instead, requiring no forks:
$ time printf "file-%s\n" {1..10000} | while read line; do n=${line##*-}; done
The forked version takes roughly 300 times as long.
So the answer to your question is yes: if efficiency matters, you have a solid justification for refactoring or replacing fork-heavy code.
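As an aside (my illustration, not part of the original answer), many common fork-per-call patterns have built-in bash equivalents:

# Instead of:  base=$(basename "$path")
base=${path##*/}
# Instead of:  dir=$(dirname "$path")
dir=${path%/*}     # assumes $path contains a slash
# Instead of:  upper=$(tr a-z A-Z <<< "$s")
upper=${s^^}       # bash 4.0+

These expansions differ from the external tools in edge cases (basename, for instance, also handles trailing slashes), so treat them as sketches rather than drop-in replacements.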
When the number of forks is already constant relative to the input size (or the code is too messy to make it constant) and it's still too slow, that's when you rewrite it in a faster language.
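To make the fork count constant in the example above, for instance, you could push all the work into a single sed process over the whole stream, so the cost is one fork no matter how many lines flow through it (a sketch; how you consume the output depends on what the loop body actually needs):

$ printf "file-%s\n" {1..10000} | sed -e "s/.*-//"

The per-iteration forks disappear because sed handles the entire stream in one invocation.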