What should be done with the unused streams of an external process?

When I execute a command in a separate process, for example using Runtime.getRuntime().exec(...), whose Javadoc states:

 Executes the specified command and arguments in a separate process. 

What do I need to do with the streams of this process, given that the process should keep running until the Java program exits? (As a detail: the Java program will take care of killing this process, and the process itself has built-in protection, killing itself if it notices that the Java program that spawned it is no longer running.)

If we assume that this process produces no output at all (for example, because stdout and all error messages are redirected to /dev/null, and all communication happens via files/sockets/something else), what should I do with its input stream?

Should I keep one (or two?) Java threads running in vain, trying to read stdout/stderr?

What is the right way to handle a long-lived external process, spawned from a Java program, that produces no stdout/stderr output at all?
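As a concrete sketch of the setup being described (the command here is a placeholder; since Java 9, ProcessBuilder.Redirect.DISCARD lets the OS drop the output directly, so no Java-side reader threads are needed at all):

```java
import java.io.IOException;

public class DiscardDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // "sleep 1" is a hypothetical stand-in for the real long-lived process.
        ProcessBuilder pb = new ProcessBuilder("sleep", "1");
        // Send the child's stdout and stderr straight to the null device
        // (/dev/null on Un*x, NUL on Windows); nothing to read on the Java side.
        pb.redirectOutput(ProcessBuilder.Redirect.DISCARD);
        pb.redirectError(ProcessBuilder.Redirect.DISCARD);
        Process p = pb.start();
        p.waitFor();
        System.out.println("exit=" + p.exitValue());
    }
}
```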

EDIT

I basically wrap the shell script in another shell script, which makes sure to redirect everything to /dev/null. I'm fairly certain my Un*x would be broken if my "outer" shell script (the one redirecting everything to /dev/null) still produced anything on stdout or stderr. Still, I find it unreasonable to have threads running "for nothing" during the whole life-cycle of the application. It really bothers me.

2 answers

If everything is as you say, you can probably just ignore them.

However, things are rarely that clean. It may be worth creating a single thread to drain stdout/stderr anyway, just in case. The one day it fails and actually produces output is the day you will need to know what happened. One or two threads (I think it can be done with just one) is not much overhead, especially if you are right and nothing ever comes out of these streams.
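A minimal sketch of the single-thread approach: merging stderr into stdout with redirectErrorStream(true) leaves one stream, so one daemon thread can drain both (the command and class names are illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Gobbler {
    // Starts the command and drains its merged stdout/stderr on one
    // daemon thread, logging anything that unexpectedly appears.
    public static Process startAndDrain(String... command) throws IOException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true);      // merge stderr into stdout: one stream, one thread
        Process p = pb.start();
        Thread drainer = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    // Normally silent; if the child ever speaks, we learn why.
                    System.err.println("[child] " + line);
                }
            } catch (IOException ignored) {
                // Stream closed because the process died; nothing to do.
            }
        }, "child-output-drainer");
        drainer.setDaemon(true);           // never keeps the JVM alive
        drainer.start();
        return p;
    }

    public static void main(String[] args) throws Exception {
        Process p = startAndDrain("sh", "-c", "echo oops 1>&2");
        p.waitFor();
    }
}
```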


I believe the correct way to handle the input and output streams of a process, if you are not interested in them, is to close them promptly. If the child process subsequently tries to read from stdin or write to stdout, it will get an error (in the case of a Java child, an IOException). It is the child process's responsibility to deal with the fact that it cannot read or write.

Most processes ignore the fact that they cannot write and silently discard the output. This is true in Java, where System.out is a PrintStream, so any IOException raised while writing to stdout is swallowed. That is pretty much what happens when you redirect output to /dev/null: all output is silently discarded.
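The close-them-promptly approach sketched above might look like this (the "true" command is a harmless placeholder; a real child that later writes would see a broken pipe):

```java
import java.io.IOException;

public class CloseStreams {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = Runtime.getRuntime().exec(new String[] {"true"});
        // Close all three pipes immediately: no Java threads are tied up
        // reading, and any later write by the child fails with a broken pipe.
        p.getOutputStream().close();  // child's stdin
        p.getInputStream().close();   // child's stdout
        p.getErrorStream().close();   // child's stderr
        p.waitFor();
        System.out.println("exit=" + p.exitValue());
    }
}
```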

It sounds like you have read the Process API documentation, including why it is important to read from / write to the process if it expects to write or read itself. But to reiterate: the problem is that some operating systems allocate very limited buffers for (in particular) stdout, so it is important to keep those buffers from filling up. That means either promptly reading any output the child process produces, or telling the OS that you do not require the process's output, so that it can free the associated resources and reject any further attempt to write to stdout or read from stdin (rather than just blocking those calls until resources become available).
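One way to tell the OS you do not want the output, available since Java 7 and working even before Redirect.DISCARD existed, is to redirect the child straight to the null device, so its buffers can never fill up (the echo command here is illustrative; /dev/null assumes a Un*x host, NUL on Windows):

```java
import java.io.File;
import java.io.IOException;

public class DevNullRedirect {
    public static void main(String[] args) throws IOException, InterruptedException {
        File devNull = new File("/dev/null");  // use "NUL" on Windows
        ProcessBuilder pb = new ProcessBuilder("sh", "-c", "echo ignored");
        // The OS discards everything; no pipe buffers, no reader threads.
        pb.redirectOutput(devNull);
        pb.redirectError(devNull);
        Process p = pb.start();
        p.waitFor();
        System.out.println("exit=" + p.exitValue());
    }
}
```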


Source: https://habr.com/ru/post/885475/

