I was tasked with writing a shell script to grep through hundreds of log files spread across many directories on Linux and Solaris servers. Some of the logs are compressed in various formats, and some are several GB in size. I'm worried that grep will consume a lot of resources on the server and possibly take down the web servers running on the same machine by exhausting memory (if that can even happen).
Should I unzip the files, grep them, and then compress them again, or use zgrep (or the equivalent) to search them while they're still compressed? Is there a reason to prefer one approach over the other?
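For concreteness, here is a rough sketch of the two approaches as I understand them. The pattern and directory are placeholders I made up, and I'm assuming a POSIX-ish shell and gzip-compressed files (the real logs use several formats):

```sh
#!/bin/sh
# Placeholder pattern and directory, just to illustrate the two options.
PATTERN='ERROR 1234'
LOG_DIR='/var/log/myapp'

# Option A: decompress, grep, recompress.
# This writes the full uncompressed file back to disk for every log.
for f in "$LOG_DIR"/*.gz; do
    gunzip "$f"
    grep -H "$PATTERN" "${f%.gz}"
    gzip "${f%.gz}"
done

# Option B: let zgrep stream the decompressed data straight into grep,
# without ever writing an uncompressed copy to disk.
zgrep -H "$PATTERN" "$LOG_DIR"/*.gz
```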
Also, is there an easy way to limit a command's memory usage to a percentage of what is currently available?
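The closest thing I've found so far is ulimit, but it takes an absolute value rather than a percentage, and reading /proc/meminfo only works on Linux, not Solaris. This is just a sketch of what I mean (reusing the placeholders from above, with an arbitrary 50% split), not something I've tested:

```sh
# Derive a cap from MemAvailable in /proc/meminfo (Linux only, value in kB).
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
limit_kb=$(( avail_kb / 2 ))

# Run in a subshell so the ulimit -v (virtual memory) cap only applies
# to this one command, not to the rest of the script or my login shell.
( ulimit -v "$limit_kb"; zgrep -H "$PATTERN" "$LOG_DIR"/*.gz )
```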
If someone could explain how memory usage works during the execution of these commands, that would help a lot.