Termination, multiple nested subprocesses

I have a Python application that spawns subprocesses in parallel (bash scripts, basically). Some of the scripts may call other scripts. I am trying to work out the best way to handle termination edge cases for both the application and its subprocesses.

If the application must exit or receives SIGTERM, it must terminate (SIGTERM, wait, SIGKILL) all subprocesses and any processes they created. One approach would be to run the application as the leader of a new process group and kill the whole group (killpg) as part of termination.
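A minimal sketch of this approach (function names are my own): the subprocess is started in a new session so it leads its own process group, and the whole group gets SIGTERM, a grace period, then SIGKILL.

```python
import os
import signal
import subprocess


def spawn_in_new_group(cmd):
    # start_new_session=True calls setsid() in the child, making it
    # the leader of a new process group; anything it spawns inherits
    # that group unless it changes groups itself.
    return subprocess.Popen(cmd, start_new_session=True)


def terminate_group(proc, grace=2.0):
    """SIGTERM the whole group, wait up to `grace` seconds, SIGKILL."""
    try:
        pgid = os.getpgid(proc.pid)
        os.killpg(pgid, signal.SIGTERM)
        try:
            proc.wait(timeout=grace)
        except subprocess.TimeoutExpired:
            os.killpg(pgid, signal.SIGKILL)
            proc.wait()
    except ProcessLookupError:
        pass  # the group is already gone
```

Note this kills by group, so it misses any descendant that has moved itself into yet another process group.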

If any subprocess takes longer than a certain time, I would like to kill it along with the child processes it created. One approach here is to make each subprocess the leader of its own process group, so that I can just kill that group and rely on it to take down anything the script spawned.
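A sketch of the per-subprocess variant (names are my own): each script gets its own group, and on timeout the whole group is killed.

```python
import os
import signal
import subprocess


def run_with_timeout(cmd, timeout):
    # Each script leads its own process group, so a timeout can kill
    # the script together with anything it spawned.
    proc = subprocess.Popen(cmd, start_new_session=True)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
        return proc.wait()
```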

The hard part is that these two solutions conflict with each other: if each subprocess leads its own process group, then killing the application's group no longer reaches it. So I seem to be able to satisfy only one requirement at a time.

My last thought is to use tcsetpgrp, though I'm not very familiar with it. The idea would be to simulate an interactive terminal: killing the application would then (I think) send SIGHUP to all processes, while I could still use process groups to control subprocesses that run too slowly.

Is this a good idea, or are there other options I am missing?

Bonus question: if the application is killed with SIGKILL (occasionally necessary in this application; yes, I know SIGKILL should be avoided, etc.), it would be nice if the subprocesses were also killed, the way bash sends SIGHUP to its child processes when it exits.

1 answer

One possibility is to make your scripts impose time limits on themselves.

Perl has a construct with which you can set an alarm.

A good example from elsewhere on this site is here:

 /questions/214276/perl-make-script-timeout-after-x-number-of-seconds 

Perl: make a script time out after X seconds

There are similar hits if you search for a Python subprocess alarm timeout.

 /questions/1668/using-module-subprocess-with-timeout 

Using the subprocess module with a timeout

This has the side effect (feature or bug...) that as long as each child process has a shorter timeout than its parent, the parent can recover gracefully.
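In Python, the parent-side safety net can be sketched like this (the function name and the division of responsibility are my own assumptions): the parent's timeout is set slightly longer than whatever limit the child enforces on itself, so a well-behaved child times out first and the parent only steps in if the child's own alarm failed.

```python
import subprocess


def run_with_safety_net(cmd, timeout):
    # The parent's timeout should be a bit longer than the child's
    # own limit; subprocess.run() kills the child before raising
    # TimeoutExpired, so the parent recovers gracefully.
    try:
        return subprocess.run(cmd, timeout=timeout).returncode
    except subprocess.TimeoutExpired:
        return None  # child overran its budget; parent carries on
```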

It is better, however, to limit the process by CPU time rather than wall-clock time. That way a distant descendant cannot eat up the parent's budget, and if the system as a whole is slow because of many competing processes, you don't get spurious timeouts through no fault of your subprocesses.

You can do this in bash scripts by writing

 ulimit -t X 

where X is the number of CPU seconds you want. Note, however, that on most systems this is a one-way street: the process cannot raise its own limit again.
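The same limit can be installed from the Python side without touching the scripts, assuming a POSIX system: a preexec_fn runs in the child just before exec, so the limit applies only to that subprocess (and its descendants, which inherit it). Hitting the soft limit delivers SIGXCPU; the hard limit delivers SIGKILL.

```python
import resource
import signal
import subprocess


def cpu_limited(soft, hard):
    # Python-side equivalent of `ulimit -t`, installed in the child
    # via preexec_fn just before exec.
    def preexec():
        resource.setrlimit(resource.RLIMIT_CPU, (soft, hard))
    return preexec


# A busy loop that would otherwise spin forever, capped at ~1 CPU second
proc = subprocess.Popen(["bash", "-c", "while :; do :; done"],
                        preexec_fn=cpu_limited(1, 2))
proc.wait()
```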


Source: https://habr.com/ru/post/1445684/
