Streaming the Output of an Executed Command

I am writing a service that should stream the output of an executed command both to the parent and to a log. The problem is that when the process takes a long time, cmd.StdoutPipe gives me only the final result (a string).

Is it possible to output partial results of what is happening while it runs, as in the shell?

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sh", "-c", "some long running task")
        stdout, _ := cmd.StdoutPipe()
        cmd.Start()
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            m := scanner.Text()
            fmt.Println(m)
            log.Print(m) // log.Print instead of log.Printf: m is data, not a format string
        }
        cmd.Wait()
    }

P.S. For output alone, it would be:

    cmd.Stdout = os.Stdout

But in my case this is not enough.

1 answer

The code you posted works (with a reasonable command).

Here is a simple "some long running task" written in Go, so you can run it and test your code:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        fmt.Println("Child started.")
        time.Sleep(time.Second * 2)
        fmt.Println("Tick...")
        time.Sleep(time.Second * 2)
        fmt.Println("Child ended.")
    }

Compile it and run it as your command. You will see that the different lines appear immediately as they are written by the child process, i.e. "streamed".
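
For example, assuming the test program above is compiled to a binary named child in the current directory (the name is just an assumption), the only change needed in the code from the question is the command itself:

    // "./child" is an assumed name for the compiled test program above;
    // the rest of the code from the question stays the same.
    cmd := exec.Command("./child")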

Reasons why this may not work for you

The Scanner returned by bufio.NewScanner() reads whole lines and only returns something when a newline character is encountered (as determined by the bufio.ScanLines split function).

If the command you execute does not print newlines, its output will not be delivered immediately (only when a newline is printed, when the internal buffer fills up, or when the process exits).
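
For illustration, here is a hypothetical child (not from the original post) that reports progress with dots and prints a newline only at the very end; a parent using the default line splitting will see nothing until that final newline:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        for i := 0; i < 5; i++ {
            fmt.Print(".") // no newline, so a line-based Scanner in the parent returns nothing yet
            time.Sleep(time.Second)
        }
        fmt.Println(" done") // the newline finally lets ScanLines return the whole line
    }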

Possible workarounds

If you have no guarantee that the child process prints newlines but you still want to stream the output, you cannot read whole lines. One solution is to read by words, or even by characters (runes). You can achieve this by setting a different split function using Scanner.Split():

    scanner := bufio.NewScanner(stdout)
    scanner.Split(bufio.ScanRunes)

The bufio.ScanRunes split function reads the input rune by rune, so Scanner.Scan() will return whenever a new rune is available.
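
Combined with the loop from the question, reading rune by rune could look like this (a sketch, reusing the stdout pipe from the code above):

    scanner := bufio.NewScanner(stdout)
    scanner.Split(bufio.ScanRunes)
    for scanner.Scan() {
        fmt.Print(scanner.Text()) // prints each rune as soon as it arrives
    }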

Or read manually without a Scanner (in this example, byte by byte):

    oneByte := make([]byte, 1)
    for {
        _, err := stdout.Read(oneByte)
        if err != nil {
            break
        }
        fmt.Printf("%c", oneByte[0])
    }

Note that the above code would read runes that are encoded as multiple bytes in UTF-8 incorrectly. To read multi-byte UTF-8 runes, we need a bigger buffer:

    oneRune := make([]byte, utf8.UTFMax) // utf8.UTFMax comes from the "unicode/utf8" package
    for {
        count, err := stdout.Read(oneRune)
        if err != nil {
            break
        }
        fmt.Printf("%s", oneRune[:count])
    }

What you need to keep in mind

Processes have default buffers for standard output and standard error (usually a few KB in size). If a process writes to its standard output or standard error, it goes into the corresponding buffer. If that buffer gets full, further writes block (in the child process). If you do not read the standard output and standard error of your child process, the child may hang once the buffer fills up.

Therefore, it is recommended to always read both the standard output and the standard error of the child process. Even if you know that the command normally does not write to its standard error, if some error occurs, it will probably start dumping error messages there.

Edit: As Dave C mentions, by default the standard output and error streams of the child process are discarded and will not cause a block / hang if not read. But still, without reading the error stream you may miss something from the process.
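
A minimal sketch of reading both streams with the scanner approach, draining standard error on a separate goroutine (error handling omitted for brevity; the command string is the placeholder from the question):

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sh", "-c", "some long running task")
        stdout, _ := cmd.StdoutPipe()
        stderr, _ := cmd.StderrPipe()
        cmd.Start()

        // Drain standard error concurrently so its messages are not lost
        // and the child cannot block on a full stderr pipe.
        done := make(chan struct{})
        go func() {
            defer close(done)
            errScanner := bufio.NewScanner(stderr)
            for errScanner.Scan() {
                log.Printf("stderr: %s", errScanner.Text())
            }
        }()

        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            m := scanner.Text()
            fmt.Println(m)
            log.Print(m)
        }

        <-done     // make sure stderr is fully drained...
        cmd.Wait() // ...before Wait closes the pipes
    }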


Source: https://habr.com/ru/post/988765/

