You don't need to use awk if all you want to do is a simple search-and-replace. :) Also, writing to a file while you are still reading from it, the way you did, will result in data loss or corruption, so try not to do that.
for file in *.php ; do
    # or, to do this to all .php files recursively:
    #   find . -name '*.php' | while read -r file ; do

    # make a backup copy; do not overwrite the backup if it already exists
    test -f "$file.orig" || cp -p "$file" "$file.orig"

    # equivalent awk version: awk '{... print > NEWFILE}' NEWFILE="$file" "$file.orig"
    sed -e "s:include('\./:include(':g" "$file.orig" > "$file"
done
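For completeness, here is a sketch of the recursive variant mentioned in the comment above; it assumes your filenames contain no newlines:

find . -name '*.php' | while read -r file ; do
    # same backup-then-rewrite pattern as the loop above
    test -f "$file.orig" || cp -p "$file" "$file.orig"
    sed -e "s:include('\./:include(':g" "$file.orig" > "$file"
done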
Just to clarify the data-loss aspect: when awk (or sed) starts processing the file and you ask it to read the first line, it actually performs buffered reads; that is, it reads from the file system (let's simplify and say "from disk") a block of data the size of its internal read buffer (for example, 4-65 KB) to improve performance (by reducing disk I/O). Suppose the file you are working with is larger than that buffer. Subsequent reads are served from the buffer until it is exhausted, at which point the second block of data is loaded from disk into the buffer, and so on.
However, immediately after reading the first line, i.e. right after the first block of data has been read from disk into the buffer, your awk script opens FILENAME, the input file itself, for writing with truncation, i.e. the file's size on disk is reset to 0. At this point, all that remains of your original file is the first few kilobytes of data in awk's memory. awk will continue reading line after line from the in-memory buffer and producing output until the buffer is exhausted, at which point awk will probably stop and leave you with a 4-65 KB file.
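To make the failure mode concrete, here is a throwaway demonstration you can run in an empty scratch directory. big.txt is a hypothetical test file invented for this example, and the exact surviving size depends on your awk implementation's buffer:

# create a test file comfortably larger than a typical read buffer (~1.9 MB)
yes 'some line of text' | head -n 100000 > big.txt
wc -c big.txt    # original size

# the destructive pattern: writing to the very file being read
awk '{ gsub(/text/, "txt"); print > FILENAME }' big.txt
wc -c big.txt    # only the portion that was already buffered survives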
As a side note, if you had actually been using awk to lengthen the data (for example, print "PREFIX: " $0) rather than shorten it ( gsub(/.../, "") ), you would almost certainly have ended up with a never-terminating awk and an ever-growing file. :)
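Again a hypothetical sketch, for illustration only; do not run this outside a throwaway directory, since the file grows until you interrupt awk or the disk fills (big.txt is the same invented test file as above):

# each output line is longer than the input line it replaces, so awk's read
# position never catches up with the write position: it keeps finding "new"
# data (its own output) to read, indefinitely
awk '{ print "PREFIX: " $0 > FILENAME }' big.txt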