I have a Python application that spawns subprocesses (basically bash scripts) in parallel, and some of those scripts may call other scripts. I am trying to work out the best way to handle two extreme cases for the application and its subprocesses.
If the application must exit or receives SIGTERM, it must terminate (SIGTERM, wait, SIGKILL) all subprocesses and any processes they created. One approach would be for the application to start itself as the leader of a new process group, so that every subprocess inherits the group, and to kill the whole group as part of termination (killpg).
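Something like this is what I have in mind for the group teardown (a minimal sketch; it assumes the application called os.setpgrp() at startup so that every child inherits its group, and note that killpg signals the sender too):

```python
import os
import signal
import time

def shutdown_group(grace=5.0):
    """SIGTERM the whole process group, wait, then SIGKILL stragglers.

    Assumes the application called os.setpgrp() at startup, so every
    subprocess (and their children) inherited its process group.
    """
    signal.signal(signal.SIGTERM, signal.SIG_IGN)  # killpg signals us too
    os.killpg(os.getpgrp(), signal.SIGTERM)        # ask everyone to exit
    time.sleep(grace)                              # grace period
    # SIGKILL cannot be ignored, so this also takes the application down
    # with the rest of the group: it must be the very last step of shutdown
    os.killpg(os.getpgrp(), signal.SIGKILL)
```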
If any subprocess takes longer than a certain time, I would like to kill it together with the child processes it created. The approach here is to make each subprocess the leader of its own process group, so that I can just kill that group and rely on this to take down anything the subprocess spawned.
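For the timeout case, something along these lines (again a sketch; start_new_session=True makes the child call setsid(), so it leads a fresh process group that its own children inherit):

```python
import os
import signal
import subprocess

def run_script(cmd, timeout):
    # setsid() in the child: it leads its own session and process group,
    # and any grandchildren it spawns inherit that group
    proc = subprocess.Popen(cmd, start_new_session=True)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        pgid = os.getpgid(proc.pid)
        os.killpg(pgid, signal.SIGTERM)      # polite request first
        try:
            proc.wait(timeout=5)
        except subprocess.TimeoutExpired:
            os.killpg(pgid, signal.SIGKILL)  # then force it
            proc.wait()
        raise
```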
The hard bit is that these two solutions conflict with each other: a process can belong to only one group, so if each subprocess leads its own group (for the timeout case), killing the application's group no longer reaches it, and vice versa. I therefore seem to be able to satisfy only one requirement at a time.
So my last thought is to use tcsetpgrp, although I'm not too familiar with it: essentially, simulate an interactive terminal. Killing the application would then (I think) send SIGHUP to all the processes on that terminal, and I could still use per-subprocess process groups to deal with subprocesses that are too slow.
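To make the terminal idea concrete, this is roughly what I am picturing (a sketch only: spawn_with_ctty is a name I made up, os.login_tty requires Python 3.11+, and preexec_fn is documented as unsafe in the presence of threads, which matters since I spawn in parallel):

```python
import os
import pty
import subprocess

def spawn_with_ctty(cmd):
    """Hypothetical helper: run cmd in a new session whose controlling
    terminal is a fresh pty. When the last fd on the master side closes
    (e.g. because this process was SIGKILLed), the kernel sends SIGHUP
    to the terminal's foreground process group."""
    master, slave = pty.openpty()
    proc = subprocess.Popen(
        cmd,
        # os.login_tty: setsid(), acquire `slave` as the controlling
        # terminal, and dup it onto stdin/stdout/stderr
        preexec_fn=lambda: os.login_tty(slave),
        pass_fds=(slave,),  # make sure the slave fd survives into the child
    )
    os.close(slave)         # the child holds its own copy now
    return proc, master     # keep `master` open for as long as we live
```

If this works the way I expect, it would also cover the bonus case below: the kernel closes the master fd when this process dies, however it dies, so the SIGHUP does not depend on the application running any cleanup code.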
Is this a good idea, or are there any other suggestions that I am missing?
Bonus section: if the application is killed with SIGKILL (it is necessary from time to time in this application; yes, I know SIGKILL should be avoided, etc.), it would be great if the subprocesses were killed too, in the same way that bash sends SIGHUP to its child processes when it exits.