Signal implementation under Linux and Windows?

I am not new to using signals in programming; I mainly work in C/C++ and Python. But I am interested to know how signals are actually implemented in Linux (or Windows).

Does the OS check the signal descriptor table after each CPU instruction to see whether there are any registered signals left to process? Or is the process manager / scheduler responsible for this?

Since signals are asynchronous, is it true that a CPU instruction is aborted before it completes?

+4
3 answers

The OS definitely does not check after every instruction. Never. That would be far too slow.

When the CPU encounters a problem (for example, division by zero, access to a privileged resource, or a memory location that is not backed by physical memory), it generates a special kind of interrupt called an exception (not to be confused with C++/Java/etc. exceptions).

The OS handles these exceptions. If it so desires, and if possible, it may reflect the exception back into the process from which it arose. The so-called Structured Exception Handling (SEH) on Windows is such a reflection; C signals must be implemented using the same mechanism.
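
On Linux this reflection is visible directly from user space: a CPU exception such as an invalid memory access arrives in the process as a signal (here SIGSEGV). A minimal sketch, assuming a POSIX system and omitting error checking:

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    /* Async-signal-safe handler: the kernel turned the CPU's page-fault
       exception into SIGSEGV and delivered it to this process. */
    static void on_segv(int sig) {
        (void)sig;
        static const char msg[] = "caught SIGSEGV (CPU exception reflected as a signal)\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(1);  /* returning from a SIGSEGV handler would just fault again */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_segv;
        sigaction(SIGSEGV, &sa, NULL);

        volatile int *p = NULL;
        *p = 42;               /* hardware exception -> kernel -> SIGSEGV */
        return 0;              /* never reached */
    }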

+6

On the systems I'm familiar with (though I don't see why it should be much different elsewhere), the signal is delivered when the process returns from kernel mode to user mode.

First, consider the single-CPU case. There are three sources of a signal:

  • the process sends a signal to itself
  • another process sends a signal
  • an interrupt handler (network, disk, USB, etc.) causes a signal to be sent

In all of these cases, the target process is not executing in userland at that moment; it is in kernel mode: either because it made a system call, or because it was context-switched out (on a single CPU, another process could not be running and sending a signal unless our target process had been switched out), or because an interrupt handler is running. So signal delivery is simply a matter of checking for pending signals immediately before returning from kernel mode to user space.

In the case of multiple processors, if the target process is currently running on another CPU, it is simply a matter of sending an inter-processor interrupt to the CPU it is running on. The interrupt does nothing except force that CPU into kernel mode and back, so that signal processing can be performed on the way back out.
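
As a rough illustration of the second source above ("another process sends a signal"), here is a sketch assuming a POSIX system: the parent sends SIGUSR1 to a forked child with kill(), and the child only acts on it when it is next in the kernel, here while blocked inside pause().

    #include <signal.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void on_usr1(int sig) {
        (void)sig;
        static const char msg[] = "child: got SIGUSR1 on the way back to user mode\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
    }

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {                    /* child */
            struct sigaction sa;
            memset(&sa, 0, sizeof sa);
            sa.sa_handler = on_usr1;
            sigaction(SIGUSR1, &sa, NULL);
            pause();                       /* sleep in the kernel until a signal arrives */
            _exit(0);
        }
        sleep(1);                          /* crude synchronization: let the child install its handler */
        kill(pid, SIGUSR1);                /* another process (the parent) sends the signal */
        waitpid(pid, NULL, 0);
        return 0;
    }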

+4

A process may send a signal to another process, and a process can register its own signal handler to handle a signal. SIGKILL and SIGSTOP are two signals that cannot be caught.
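
A short sketch of both points, assuming a POSIX system: registering a handler with sigaction(), and the kernel refusing an attempt to catch SIGKILL (sigaction() fails with EINVAL):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    static void on_usr1(int sig) { (void)sig; /* an ordinary, catchable signal */ }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_usr1;

        /* SIGUSR1 can be caught. */
        if (sigaction(SIGUSR1, &sa, NULL) == 0)
            puts("handler installed for SIGUSR1");

        /* SIGKILL (and SIGSTOP) cannot: the kernel rejects the request. */
        if (sigaction(SIGKILL, &sa, NULL) == -1)
            printf("cannot catch SIGKILL: %s\n", strerror(errno));
        return 0;
    }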

While a process is executing a signal handler, that same signal is blocked. This means that if another instance of the same signal arrives while the handler is running, it does not invoke the handler again [this is signal blocking]; the kernel only makes a note that the signal has arrived [i.e. it becomes a pending signal]. After the handler that is already running finishes, the pending signal is serviced. If you do not want a pending signal to be delivered, you can IGNORE the signal.
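
This blocking can be observed directly: if the handler raises the same signal again, the second delivery only happens after the first handler invocation has returned. A minimal sketch, assuming a POSIX system:

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t calls = 0;

    static void on_usr1(int sig) {
        (void)sig;
        if (++calls == 1) {
            static const char m1[] = "first invocation: raised SIGUSR1 again, it stays pending\n";
            raise(SIGUSR1);          /* same signal is blocked here, so it only becomes pending */
            write(STDOUT_FILENO, m1, sizeof m1 - 1);
        } else {
            static const char m2[] = "pending SIGUSR1 delivered after the first invocation returned\n";
            write(STDOUT_FILENO, m2, sizeof m2 - 1);
        }
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_usr1;     /* with an empty sa_mask, only SIGUSR1 itself is blocked in the handler */
        sigaction(SIGUSR1, &sa, NULL);
        raise(SIGUSR1);
        return 0;
    }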

The problem with the above concept is this:

Suppose the following: process A has a registered signal handler for SIGUSR1.

1) process A gets SIGUSR1 and starts executing signalhandler()
2) process A gets SIGUSR1 again while the handler is still running
3) process A gets SIGUSR1 again
4) process A gets SIGUSR1 again

When step (2) happens, the signal is recorded as a pending signal, i.e. it still needs to be serviced. But when step (3) (and step (4)) happens, the signal is simply lost, since there is only one bit available per signal to mark it as pending.

To avoid this problem, i.e. if we do not want to lose signals, we can use real-time signals, which are queued rather than collapsed into a single pending bit.
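
A hedged sketch of the difference, assuming Linux or another system with POSIX real-time signals: while delivery is blocked, three instances of SIGUSR1 collapse into a single pending delivery, whereas three instances of SIGRTMIN sent with sigqueue() are queued and delivered three times.

    #define _POSIX_C_SOURCE 199309L
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t usr1_count = 0;
    static volatile sig_atomic_t rt_count = 0;

    static void on_usr1(int sig) { (void)sig; usr1_count++; }
    static void on_rt(int sig)   { (void)sig; rt_count++; }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_usr1;
        sigaction(SIGUSR1, &sa, NULL);
        sa.sa_handler = on_rt;
        sigaction(SIGRTMIN, &sa, NULL);

        /* Block both signals so deliveries pile up while we send. */
        sigset_t block, old;
        sigemptyset(&block);
        sigaddset(&block, SIGUSR1);
        sigaddset(&block, SIGRTMIN);
        sigprocmask(SIG_BLOCK, &block, &old);

        union sigval v = { .sival_int = 0 };
        for (int i = 0; i < 3; i++) {
            kill(getpid(), SIGUSR1);          /* standard signal: one pending bit */
            sigqueue(getpid(), SIGRTMIN, v);  /* real-time signal: queued */
        }

        sigprocmask(SIG_SETMASK, &old, NULL); /* unblock: pending signals are delivered now */

        printf("SIGUSR1 handler ran %d time(s)\n", (int)usr1_count);  /* typically 1 */
        printf("SIGRTMIN handler ran %d time(s)\n", (int)rt_count);   /* 3 */
        return 0;
    }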

2) A signal handler can itself be interrupted by a different signal.

E.g.:

  1) The process is in the middle of the signal handler for SIGUSR1.
  2) Now it gets another signal, SIGUSR2.
  3) It suspends the SIGUSR1 handler, runs the SIGUSR2 handler, and once it is done with SIGUSR2, it resumes the SIGUSR1 handler.
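
A rough sketch of this nesting, assuming a POSIX system: the SIGUSR1 handler raises SIGUSR2 in the middle of its work; since SIGUSR2 is not blocked while the SIGUSR1 handler runs (the sa_mask is empty), the SIGUSR2 handler executes immediately, and only then does the SIGUSR1 handler continue.

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void on_usr2(int sig) {
        (void)sig;
        static const char m[] = "  SIGUSR2 handler runs inside the SIGUSR1 handler\n";
        write(STDOUT_FILENO, m, sizeof m - 1);
    }

    static void on_usr1(int sig) {
        (void)sig;
        static const char m1[] = "SIGUSR1 handler: start\n";
        static const char m2[] = "SIGUSR1 handler: resumed and finishing\n";
        write(STDOUT_FILENO, m1, sizeof m1 - 1);
        raise(SIGUSR2);              /* a different signal: not blocked, so its handler nests */
        write(STDOUT_FILENO, m2, sizeof m2 - 1);
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_usr2;
        sigaction(SIGUSR2, &sa, NULL);
        sa.sa_handler = on_usr1;
        sigaction(SIGUSR1, &sa, NULL);
        raise(SIGUSR1);
        return 0;
    }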

3) IMHO, from what I remember, a process is checked for pending signals:

 1) when a context switch happens.

Hope this helps to some extent.

+2

Source: https://habr.com/ru/post/1437154/

