Force flushing output to file while bash script is still running

I have a small script that is called daily by crontab using the following command:

/homedir/MyScript &> some_log.log 

The problem with this method is that some_log.log is only created after MyScript completes. I would like to flush the program output to the file while the script is running, so that I can do something like

 tail -f some_log.log 

and track progress, etc.

+69
Tags: file, bash, flush
Sep 15 '09 at 22:26
13 answers

bash itself will never write any output to your log file. Instead, the commands it invokes as part of the script each write their output separately, and flush it whenever they want. So your question is really how to make the commands inside the bash script flush their output, and that depends on what those commands are.
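A hedged sketch of that difference, using python3 merely as an example of a stdio-buffered command (the file names are made up); run tail -f on each log from another terminal to watch it:

 # redirected stdout is typically block-buffered; "hello" reaches the file
 # only when the interpreter exits, about 5 seconds later:
 python3 -c 'import time; print("hello"); time.sleep(5)' > buffered.log
 # with python's -u switch (unbuffered stdout), the line lands immediately:
 python3 -u -c 'import time; print("hello"); time.sleep(5)' > unbuffered.log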

+27
Sep 15 '09 at 22:33

I found a solution to this here. Using the OP's example, you basically run

stdbuf -oL /homedir/MyScript &> some_log.log

and then the buffer is flushed after each line of output. I often combine this with nohup to run long jobs on a remote machine.

stdbuf -oL nohup /homedir/MyScript &> some_log.log

This way, your process doesn't get killed when you log out.
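Since the question's command comes from crontab, the same fix can go on the cron line itself. A sketch with an assumed schedule; note the portable 2>&1 form, because cron usually runs commands with /bin/sh, where &> is not a redirection:

 # m h dom mon dow command -- the schedule here is an assumption
 0 3 * * * stdbuf -oL /homedir/MyScript > /homedir/some_log.log 2>&1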

+66
Jun 15 '15 at 12:27

script -c <PROGRAM> -f OUTPUT.txt

The key is -f. Quote from man script:

  -f, --flush
      Flush output after each write. This is nice for telecooperation: one person does `mkfifo foo; script -f foo', and another can supervise real-time what is being done using `cat foo'.

Running in the background:

 nohup script -c <PROGRAM> -f OUTPUT.txt 
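Applied to the question's command, this might look like the following sketch (the script path and log name are taken from the question):

 nohup script -c /homedir/MyScript -f some_log.log &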
+22
Jan 31 '14 at 18:59

You can use tee to write to the file without the need for flushing.

 /homedir/MyScript 2>&1 | tee some_log.log > /dev/null 
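If you also want to watch the output live in the same terminal rather than discarding tee's copy, a variant of the same idea is simply to drop the /dev/null redirection:

 /homedir/MyScript 2>&1 | tee some_log.log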
+9
Apr 13 '13 at

This isn't something bash can do by itself: all the shell does is open the file in question and then pass the file descriptor as the script's standard output. What you need to do is make sure output is flushed from your script more frequently than it currently is.

In Perl, for example, this can be done by setting:

 $| = 1; 

See perlvar for more details.
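For the question's setup, the same Perl trick can also be applied as a pass-through filter between the script and the log file. A sketch (paths come from the question): perl -p prints every line it reads, and the BEGIN block turns autoflush on.

 /homedir/MyScript 2>&1 | perl -pe 'BEGIN { $| = 1 }' > some_log.log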

+3
Sep 15 '09 at 22:32

Would that help?

 tail -f access.log | stdbuf -oL cut -d ' ' -f1 | uniq 

This will immediately display unique entries from access.log
http://www.pixelbeat.org/programming/stdio_buffering/stdbuf-man.html

+2
Dec 23 '10 at 17:18

Output buffering depends on how your /homedir/MyScript program is implemented. If you find that output is being buffered, you have to force the flush in your implementation. For example, use sys.stdout.flush() if it is a Python program, or fflush(stdout) if it is a C program.
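If changing the source is inconvenient, some runtimes let you force flushing from the outside. A sketch for the Python case (MyScript.py is a hypothetical name; -u and PYTHONUNBUFFERED are standard Python switches):

 # hypothetical: MyScript implemented as a Python script
 python3 -u /homedir/MyScript.py > some_log.log 2>&1
 # or, equivalently, via the environment:
 PYTHONUNBUFFERED=1 python3 /homedir/MyScript.py > some_log.log 2>&1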

+2
Apr 09 '17 at 13:21

As noted here, the problem is that you have to wait for the programs you run from your script to finish their jobs.
If in your script you run a program in the background, you can try something more.

In general, a call to sync before exiting flushes the filesystem buffers and may help a little.

If in your script you run some programs in the background (&), you can wait for them to finish before exiting the script. To get an idea of how this can work, see the sketch below.

 #!/bin/bash
 # ... some stuff ...
 program_1 &          # here you start program 1 in the background
 PID_PROGRAM_1=${!}   # here you remember its PID
 # ... some other stuff ...
 program_2 &          # here you start program 2 in the background
 wait ${!}            # you wait for it to finish (not really useful here)
 # ... some other stuff ...
 daemon_1 &           # we will not wait for it to finish
 program_3 &          # here you start program 3 in the background
 PID_PROGRAM_3=${!}   # here you remember its PID
 # ... last other stuff ...
 sync
 wait $PID_PROGRAM_1
 wait $PID_PROGRAM_3
 # program 2 has already ended
 # ...

Since wait works with jobs as well as with PID numbers, a lazy solution would be to put at the end of the script

 for job in `jobs -p`
 do
     wait $job
 done

A more complicated situation is when you run something that in turn starts something else in the background, because then you have to find and wait (or not) for the end of the whole chain of child processes: for example, if you run a daemon, you probably should not wait for it to finish :-).

Note:

  • wait ${!} means "wait until the last background process has finished", where $! is the PID of the last background process. So putting wait ${!} immediately after program_2 & is equivalent to running program_2 directly, without sending it to the background with &

  • Using wait :

     Syntax
         wait [n ...]
     Key
         n    A process ID or a job specification
+1
Jun 26 '14 at 6:37

Thanks @user3258569, script is probably the only thing that works in BusyBox!

However, the shell froze for me afterwards. Looking for the reason, I found this big red warning about use in non-interactive shells in the script manual page:

script primarily is designed for interactive terminal sessions. When stdin is not a terminal (for example: echo foo | script), then the session can hang, because the interactive shell within the script session misses the EOF and script has no clue when to close the session. See the NOTES section for more information.

True enough: script -c "make_hay" -f /dev/null | grep "needle" froze for me.

Unlike the warning's example, though, I thought that echo "make_hay" | script would pass an EOF, so I tried

 echo "make_hay; exit" | script -f /dev/null | grep 'needle' 

and it worked!

Pay attention to the warnings on the manual page. This may not work for you.

+1
Jul 06 '18 at 23:51

An alternative to stdbuf is awk '{print} END {fflush()}'. I wish there were a bash builtin for this. Normally it shouldn't be necessary, but older bash versions might have synchronization bugs on file descriptors.
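Used as a filter on the question's command, that might look like the sketch below. Note that the answer's one-liner only flushes at end of input; the per-line variant on the second command line is my assumption (fflush() is widely supported, e.g. in gawk and mawk):

 /homedir/MyScript 2>&1 | awk '{print} END {fflush()}' > some_log.log
 # per-line variant (assumption, not the answer's exact program):
 /homedir/MyScript 2>&1 | awk '{print; fflush()}' > some_log.log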

0
Jan 25 '19 at 2:42

I don't know if this will work, but what about calling sync?

-2
Sep 15 '09 at 22:41

I had this issue with a background process on Mac OS X using StartupItems. This is how I solved it:

If I run sudo ps aux, I can see that mytool has started.

I found that (due to buffering) when Mac OS X shuts down, mytool never flushes its output to the sed command. However, if I run sudo killall mytool, then mytool does flush its output to sed. Hence, I added a stop case to StartupItems that runs when Mac OS X shuts down:

 start)
     if [ -x /sw/sbin/mytool ]; then
         # run the daemon
         ConsoleMessage "Starting mytool"
         (mytool | sed .... >> myfile.txt) &
     fi
     ;;
 stop)
     ConsoleMessage "Killing mytool"
     killall mytool
     ;;
-2
May 29 '10 at 20:41

Like it or not, this is how redirection works.

In your case, the output of your script (meaning your script has finished) is redirected to that file.

What you want to do is add those redirections inside the script itself.

-3
Sep 15 '09 at 22:48


