Ensuring that subprocesses are dead when exiting a Python program

Is there a way to ensure that all created subprocesses are dead when a Python program exits? By subprocess I mean those created with subprocess.Popen().

If not, should I iterate over all of them and kill -9? Anything cleaner?

+42
python subprocess kill zombie-process
Nov 26 '08 at 10:21
13 answers

You can use atexit for this, and register any clean-up tasks to be run when your program exits.

atexit.register(func[, *args[, **kwargs]])

In your cleanup task you can also implement your own wait, and kill the process when your desired timeout occurs.

>>> import atexit
>>> import sys
>>> import time
>>>
>>> def cleanup():
...     timeout_sec = 5
...     for p in all_processes:  # list of your processes
...         p_sec = 0
...         for second in range(timeout_sec):
...             if p.poll() == None:
...                 time.sleep(1)
...                 p_sec += 1
...         if p_sec >= timeout_sec:
...             p.kill()  # supported from python 2.6
...     print 'cleaned up!'
...
>>> atexit.register(cleanup)
>>>
>>> sys.exit()
cleaned up!

Note: registered functions will not be run if this process (the parent process) is killed.
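One way to soften that limitation (my own sketch, not part of this answer; all_processes is again your own list of Popen objects) is to translate SIGTERM into a normal interpreter exit so the atexit handlers still run; nothing helps against SIGKILL:

import atexit
import signal
import sys

all_processes = []  # keep your Popen objects in here

def cleanup():
    for p in all_processes:
        if p.poll() is None:  # still running
            p.kill()

atexit.register(cleanup)
# sys.exit() raises SystemExit, which unwinds normally, so atexit handlers run
signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))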

The following Windows method is no longer needed for python >= 2.6

Here is how you can kill a process on Windows. Your Popen object has a pid attribute, so you can just call success = win_kill(p.pid) (pywin32 required):

def win_kill(pid):
    '''kill a process by specified PID in windows'''
    import win32api
    import win32con

    hProc = None
    try:
        hProc = win32api.OpenProcess(win32con.PROCESS_TERMINATE, 0, pid)
        win32api.TerminateProcess(hProc, 0)
    except Exception:
        return False
    finally:
        if hProc != None:
            hProc.Close()
    return True
+35
Nov 26 '08 at 13:36

On *nix, maybe using process groups can help you out: that way you can also catch subprocesses spawned by your subprocesses.

import os
import signal

if __name__ == "__main__":
    os.setpgrp()  # create new process group, become its leader
    try:
        pass  # some code
    finally:
        os.killpg(0, signal.SIGKILL)  # kill all processes in my group

Another consideration is escalating the signals: from SIGTERM (the default signal for kill) up to SIGKILL (aka kill -9). Wait a short while between the signals to give the process a chance to exit cleanly before you kill -9 it.
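A minimal sketch of such an escalation (my own illustration; procs is assumed to be your list of Popen objects, and Popen.terminate()/Popen.kill() are available from Python 2.6):

import time

def shut_down(procs, grace_sec=5):
    """Send SIGTERM first, then escalate to SIGKILL for anything still alive."""
    for p in procs:
        p.terminate()  # SIGTERM on POSIX
    deadline = time.time() + grace_sec
    for p in procs:
        while p.poll() is None and time.time() < deadline:
            time.sleep(0.1)
        if p.poll() is None:  # still alive after the grace period
            p.kill()  # SIGKILL, i.e. kill -9
        p.wait()  # reap it so no zombie is left behind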

+27
Nov 26 '08 at 22:02

subprocess.Popen.wait() is the only way to be sure they are dead. Indeed, POSIX requires you to wait on your children. Many *nix systems will create a zombie process: a dead child whose parent never waited on it.

If the child is reasonably well written, it terminates. Often, children read from a PIPE. Closing the input is a big hint to the child that it should close up shop and exit (a small sketch follows below).

If the child has a bug and doesn't terminate, you may have to kill it. You should fix that bug.

If the child is a "serve forever" loop and is not designed to terminate, you should either kill it or provide some input or message that will cause it to stop.
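For the pipe case mentioned above, a small sketch (sort is only used as an illustrative child that exits on end of input):

import subprocess

p = subprocess.Popen(["sort"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write(b"banana\napple\n")
p.stdin.close()  # EOF: the hint for a well-behaved child to finish up and exit
output = p.stdout.read()  # b"apple\nbanana\n"
p.wait()  # wait on the child so it does not linger as a zombie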




Edit

In standard OSes you have os.kill(PID, 9). kill -9 is harsh, by the way. If you can kill them with SIGABRT (6?) or SIGTERM (15), that's more polite.

On Windows, you do not have an os.kill that works. Take a look at this ActiveState Recipe for terminating a process on Windows.

We have child processes that are WSGI servers. To terminate them we do a GET on a special URL; this causes the child to clean up and exit.
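That pattern could look roughly like this (a hypothetical sketch using wsgiref purely for illustration; the real servers and URL are not shown in the answer):

import threading
from wsgiref.simple_server import make_server

def app(environ, start_response):
    if environ["PATH_INFO"] == "/shutdown":
        # shutdown() must be called from another thread or serve_forever() would deadlock
        threading.Thread(target=httpd.shutdown).start()
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"bye\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

httpd = make_server("127.0.0.1", 8000, app)
httpd.serve_forever()  # returns once /shutdown has been requested; then clean up and exit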

+14
Nov 26 '08 at 10:56

poll()

Check if the child process has terminated. Returns the returncode attribute.
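A quick illustrative use (my own sketch, not part of the original answer):

import subprocess
import time

p = subprocess.Popen(["sleep", "10"])
time.sleep(1)
if p.poll() is None:  # None means the child is still running
    p.terminate()  # ask it to exit
p.wait()  # reap it so no zombie is left behind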

+4
Nov 26 '08 at 10:32

Warning: Linux only! You can make your child receive a signal when its parent dies.

First install python-prctl==1.5.0, then change your parent code to launch child processes as follows:

import signal
import subprocess

import prctl

subprocess.Popen(["sleep", "100"],
                 preexec_fn=lambda: prctl.set_pdeathsig(signal.SIGKILL))

What this says is:

  • launch the subprocess: sleep 100
  • after forking and before the subprocess is executed, the child registers for "send me SIGKILL when my parent ends."
+4
Dec 04 '14 at 1:59

orip's answer is helpful, but it has the downside that it kills your own process and returns an error code to your parent. I avoided that like this:

import os
import signal

class CleanChildProcesses:
    def __enter__(self):
        os.setpgrp()  # create new process group, become its leader
    def __exit__(self, type, value, traceback):
        try:
            os.killpg(0, signal.SIGINT)  # kill all processes in my group
        except KeyboardInterrupt:
            # SIGINT is delivered to this process as well as the child processes.
            # Ignore it so that the existing exception, if any, is returned. This
            # leaves us with a clean exit code if there was no exception.
            pass

And then:

with CleanChildProcesses():
    pass  # Do your work here

Of course you can do this with try/except/finally, but then you have to handle the exceptional and non-exceptional cases separately.

+3
Jan 08 '15 at 2:17

Is there a way to ensure that all created subprocesses are dead when a Python program exits? By subprocess I mean those created with subprocess.Popen().

You can break encapsulation and test that all Popen processes have terminated by doing

subprocess._cleanup()
print subprocess._active == []

If not, should I iterate over all of them and kill -9? Anything cleaner?

You can't ensure that all subprocesses are dead without going out and killing every survivor. But if you have this problem, it is probably because you have a deeper design problem.

+2
Nov 26 '08 at 10:54

I needed a small variation of this problem (cleaning up subprocesses, but without exiting the Python program itself), and since it is not mentioned here among the other answers:

p = subprocess.Popen(your_command, preexec_fn=os.setsid)
os.killpg(os.getpgid(p.pid), 15)

setsid will run the program in a new session, thereby assigning a new process group to it and its children. Calling os.killpg on it this way will therefore not bring down your own Python process as well.

+2
Mar 22 '14 at 19:46

I needed to do this, but it involved running remote commands. We wanted to be able to stop the processes by closing the connection to the server. Also, if, for example, you are running in the python repl, you can choose to run as foreground if you want to be able to use Ctrl-C to exit.

import os, signal, time

class CleanChildProcesses:
    """
    with CleanChildProcesses():
        Do work here
    """
    def __init__(self, time_to_die=5, foreground=False):
        self.time_to_die = time_to_die  # how long to give children to die before SIGKILL
        self.foreground = foreground  # If user wants to receive Ctrl-C
        self.is_foreground = False
        self.SIGNALS = (signal.SIGHUP, signal.SIGTERM, signal.SIGABRT, signal.SIGALRM, signal.SIGPIPE)
        self.is_stopped = True  # only call stop once (catch signal xor exiting 'with')

    def _run_as_foreground(self):
        if not self.foreground:
            return False
        try:
            fd = os.open(os.ctermid(), os.O_RDWR)
        except OSError:
            # Happens if process not run from terminal (tty, pty)
            return False
        os.close(fd)
        return True

    def _signal_hdlr(self, sig, framte):
        self.__exit__(None, None, None)

    def start(self):
        self.is_stopped = False
        """
        When running out of remote shell, SIGHUP is only sent to the session
        leader normally, the remote shell, so we need to make sure we are sent
        SIGHUP. This also allows us not to kill ourselves with SIGKILL.
        - A process group is called orphaned when the parent of every member is
          either in the process group or outside the session. In particular, the
          process group of the session leader is always orphaned.
        - If termination of a process causes a process group to become orphaned,
          and some member is stopped, then all are sent first SIGHUP and then SIGCONT.
        consider: prctl.set_pdeathsig(signal.SIGTERM)
        """
        self.childpid = os.fork()  # return 0 in the child branch, and the childpid in the parent branch
        if self.childpid == 0:
            try:
                os.setpgrp()  # create new process group, become its leader
                os.kill(os.getpid(), signal.SIGSTOP)  # child fork stops itself
            finally:
                os._exit(0)  # shut down without going to __exit__

        os.waitpid(self.childpid, os.WUNTRACED)  # wait until child stopped after it created the process group
        os.setpgid(0, self.childpid)  # join child group

        if self._run_as_foreground():
            hdlr = signal.signal(signal.SIGTTOU, signal.SIG_IGN)  # ignore since would cause this process to stop
            self.controlling_terminal = os.open(os.ctermid(), os.O_RDWR)
            self.orig_fore_pg = os.tcgetpgrp(self.controlling_terminal)  # sends SIGTTOU to this process
            os.tcsetpgrp(self.controlling_terminal, self.childpid)
            signal.signal(signal.SIGTTOU, hdlr)
            self.is_foreground = True

        self.exit_signals = dict((s, signal.signal(s, self._signal_hdlr))
                                 for s in self.SIGNALS)

    def stop(self):
        try:
            for s in self.SIGNALS:  # don't get interrupted while cleaning everything up
                signal.signal(s, signal.SIG_IGN)

            self.is_stopped = True

            if self.is_foreground:
                os.tcsetpgrp(self.controlling_terminal, self.orig_fore_pg)
                os.close(self.controlling_terminal)
                self.is_foreground = False

            try:
                os.kill(self.childpid, signal.SIGCONT)
            except OSError:
                """
                can occur if process finished and one of:
                - was reaped by another process
                - if parent explicitly ignored SIGCHLD
                  signal.signal(signal.SIGCHLD, signal.SIG_IGN)
                - parent has the SA_NOCLDWAIT flag set
                """
                pass

            os.setpgrp()  # leave the child process group so I won't get signals

            try:
                os.killpg(self.childpid, signal.SIGINT)
                time.sleep(self.time_to_die)  # let processes end gracefully
                os.killpg(self.childpid, signal.SIGKILL)  # in case process gets stuck while dying
                os.waitpid(self.childpid, 0)  # reap Zombie child process
            except OSError as e:
                pass
        finally:
            for s, hdlr in self.exit_signals.iteritems():
                signal.signal(s, hdlr)  # reset default handlers

    def __enter__(self):
        if self.is_stopped:
            self.start()

    def __exit__(self, exit_type, value, traceback):
        if not self.is_stopped:
            self.stop()

Thanks to Malcolm Handley for the initial design. Made with python2.7 on linux.

+2
Dec 29 '16 at 2:39

This is what I did for my posix application:

When your application exits, call the kill() method of this class: http://www.pixelbeat.org/libs/subProcess.py

Usage example here: http://code.google.com/p/fslint/source/browse/trunk/fslint-gui#608

+1
Nov 26 '08 at 11:39

A solution for Windows might be to use the win32 Job API, see for example: How to automatically kill child processes in Windows?

There is an existing python implementation here

https://gist.github.com/ubershmekel/119697afba2eaecc6330
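A rough sketch of the idea with pywin32 (my own hedged outline, not from the linked gist; notepad.exe is only a stand-in child). The job is configured so that every process assigned to it is killed when the job handle is closed, which happens automatically when the parent exits:

import subprocess
import win32api
import win32con
import win32job

hjob = win32job.CreateJobObject(None, "")
info = win32job.QueryInformationJobObject(hjob, win32job.JobObjectExtendedLimitInformation)
info["BasicLimitInformation"]["LimitFlags"] |= win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
win32job.SetInformationJobObject(hjob, win32job.JobObjectExtendedLimitInformation, info)

p = subprocess.Popen(["notepad.exe"])
hproc = win32api.OpenProcess(win32con.PROCESS_SET_QUOTA | win32con.PROCESS_TERMINATE, False, p.pid)
win32job.AssignProcessToJobObject(hjob, hproc)
# when this Python process exits, the job handle is closed and the child is killed with it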

+1
Feb 22 '16 at 22:17

A solution for linux (without needing to install prctl):

import ctypes
import signal
import subprocess

def _set_pdeathsig(sig=signal.SIGTERM):
    """helper function to ensure that once the parent process exits, its child
    processes will automatically die
    """
    def callable():
        libc = ctypes.CDLL("libc.so.6")
        return libc.prctl(1, sig)  # PR_SET_PDEATHSIG == 1
    return callable

subprocess.Popen(your_command, preexec_fn=_set_pdeathsig(signal.SIGTERM))
+1
Apr 01 '17 at 3:23


