Log output of multiprocessing.Process

Is there a way to log the stdout output from a given process when using the multiprocessing.Process class in Python?

+49
python concurrency logging multiprocessing
01 Oct '09 at 2:44
3 answers

The easiest way is to simply override sys.stdout. Slightly modifying the example from the multiprocessing manual:

 from multiprocessing import Process
 import os
 import sys

 def info(title):
     print title
     print 'module name:', __name__
     print 'parent process:', os.getppid()
     print 'process id:', os.getpid()

 def f(name):
     # Redirect this child's stdout to a file named after its PID
     sys.stdout = open(str(os.getpid()) + ".out", "w")
     info('function f')
     print 'hello', name

 if __name__ == '__main__':
     p = Process(target=f, args=('bob',))
     p.start()
     q = Process(target=f, args=('fred',))
     q.start()
     p.join()
     q.join()

And running it:

 $ ls
 m.py
 $ python m.py
 $ ls
 27493.out 27494.out m.py
 $ cat 27493.out 
 function f
 module name: __main__
 parent process: 27492
 process id: 27493
 hello bob
 $ cat 27494.out 
 function f
 module name: __main__
 parent process: 27492
 process id: 27494
 hello fred

+40
01 Oct '09 at 3:30

You can set sys.stdout = Logger(), where Logger is a class whose write method (either immediately, or accumulating until a \n is seen) calls logging.info (or whatever other way you want to log). An example of this in action is sketched below.
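A minimal sketch of such a Logger class, assuming line-by-line buffering and a process.log filename (both illustrative choices, not from the original answer):

 import logging
 import sys

 class Logger(object):
     """File-like object that forwards write() calls to the logging module."""
     def __init__(self, level=logging.INFO):
         self.level = level
         self.buf = ''

     def write(self, text):
         # Accumulate text and emit one log record per complete line
         self.buf += text
         while '\n' in self.buf:
             line, self.buf = self.buf.split('\n', 1)
             if line:
                 logging.log(self.level, line)

     def flush(self):
         # Emit any leftover partial line
         if self.buf:
             logging.log(self.level, self.buf)
             self.buf = ''

 # Log to a file so the handler does not itself write to sys.stdout
 logging.basicConfig(level=logging.INFO, filename='process.log')
 sys.stdout = Logger()
 print 'this line ends up in process.log'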

I'm not sure what you mean by "a given" process (given by whom, what distinguishes it from all the others...?), but if you mean that you know which process you want to single out in this way at the time you create it, then you can wrap its target function (and that function only), or the run method that you override in a Process subclass, into a wrapper that performs this sys.stdout "redirection", and leave the other processes alone.

Maybe, if you nail down the specs a bit, I can help in more detail...?
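A rough sketch of that wrapping idea; the redirect_stdout helper name is hypothetical, not from the answer:

 import os
 import sys
 from multiprocessing import Process

 def redirect_stdout(target):
     # Return a wrapper that redirects stdout inside the child, then calls target.
     # NB: the wrapper is a closure, so this assumes fork-based start
     # (it would not be picklable on Windows).
     def wrapper(*args, **kwargs):
         sys.stdout = open(str(os.getpid()) + ".out", "w")
         return target(*args, **kwargs)
     return wrapper

 def f(name):
     print 'hello', name

 if __name__ == '__main__':
     p = Process(target=redirect_stdout(f), args=('bob',))  # redirected
     q = Process(target=f, args=('fred',))                  # left alone
     p.start()
     q.start()
     p.join()
     q.join()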

+10
01 Oct '09 at 3:29

There are two things I'd add to @Mark Rushakoff's answer. When debugging, I found it very useful to change the buffering parameter of my open() calls to 0.

 sys.stdout = open(str(os.getpid()) + ".out", "a", buffering=0) 

Otherwise, madness, because when you tail -f the output file the results can be very intermittent. buffering=0 makes tail -f great.

And for completeness, do yourself a favor and redirect sys.stderr as well.

 sys.stderr = open(str(os.getpid()) + "_error.out", "a", buffering=0) 

Also, for convenience, you can pull this into a separate Process subclass if you want:

 from multiprocessing import Process
 import os
 import sys

 class MyProc(Process):
     def run(self):
         # Set up the logging in run(), MyProc's entry point once it is .start()-ed:
         #   p = MyProc()
         #   p.start()
         self.initialize_logging()
         print 'Now output is captured.'
         # Now do stuff...

     def initialize_logging(self):
         sys.stdout = open(str(os.getpid()) + ".out", "a", buffering=0)
         sys.stderr = open(str(os.getpid()) + "_error.out", "a", buffering=0)
         print 'stdout initialized'
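For instance, a hypothetical driver matching the comment in run():

 if __name__ == '__main__':
     p = MyProc()
     p.start()   # run() redirects stdout/stderr before doing its work
     p.join()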


+9
May 29 '14 at 15:51


