Consider the simplest case of preemptive multitasking. We have two user tasks, both running flat out with no I/O and no kernel calls. Those two tasks do not have to do anything special to run under a multitasking operating system: the kernel, typically driven by a timer interrupt, simply decides that it is time for one task to pause so that another can run. The task in question is entirely unaware that anything has happened.
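As a minimal user-space sketch (not taken from this article), here are two CPU-bound processes that never sleep, never do I/O, and contain no yielding code of their own; run it and watch top(1) show both sharing the processor purely because the kernel's timer-driven scheduler preempts them:

/* Two CPU-bound tasks; neither cooperates with the scheduler in any way. */
#include <unistd.h>

static void spin(void)
{
    volatile unsigned long n = 0;
    for (;;)
        n++;            /* pure computation: no syscalls, no sleeping */
}

int main(void)
{
    if (fork() == 0)    /* child becomes the first CPU-bound task  */
        spin();
    spin();             /* parent is the second CPU-bound task     */
    return 0;
}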
Most tasks, however, make occasional requests of the kernel through system calls. When that happens, the same user context still exists, but the CPU is executing kernel code on behalf of that task.
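For illustration, here is a hedged sketch (the file and loop are arbitrary choices of mine) of a single task that alternates between user-mode work and a system call; while each read() is in progress, the CPU is running kernel code, but still in this task's context:

#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    char buf[4096];
    int fd = open("/dev/zero", O_RDONLY);      /* syscall: enter the kernel      */
    if (fd < 0)
        return 1;
    for (int i = 0; i < 1000; i++) {
        read(fd, buf, sizeof buf);             /* kernel code runs on our behalf */
        /* ...ordinary user-mode work on buf would go here... */
    }
    close(fd);
    return 0;
}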
Older Linux kernels would never preempt a task while it was busy running kernel code. (Note that I/O always reschedules voluntarily; I am talking about the case where the kernel code itself has CPU-intensive work to do, for example sorting a list.)
If the system allows a task to be preempted while it is running kernel code, we have what is called a "preemptible kernel". Such a system is much less prone to the unpredictable delays that can occur during system calls, so it may be better suited to embedded or real-time work.
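In mainline kernels this behaviour is a build-time choice; as a rough sketch, the relevant .config lines look like the following (symbol names as in the kernel's Kconfig.preempt; exactly one of the three is set):

# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y

Selecting CONFIG_PREEMPT gives the fully preemptible kernel described here; the voluntary option is the compromise discussed below.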
For example, suppose a particular processor has two runnable tasks: one is in the middle of a system call that takes 5 ms, and the other is an MP3 player application that must feed the audio queue every 2 ms. On a non-preemptible kernel the player cannot run until those 5 ms of kernel work complete, so it misses at least one 2 ms deadline and you may hear the audio stutter.
The argument against preemption is that all kernel code that can run in task context must be able to survive being preempted; for instance, there is plenty of poorly written device-driver code that may well depend on always running to completion before another task is allowed to execute on that processor. (On multiprocessor systems, now the rule rather than the exception, all kernel code has to be reentrant anyway, so this argument carries less weight today.) Furthermore, if the same goal can be achieved by fixing the individual system calls with poor latency, perhaps preemption is not needed at all.
A compromise is CONFIG_PREEMPT_VOLUNTARY, which allows a task switch at specific points inside the kernel, but not everywhere. If there are only a small number of places where kernel code can become bogged down, this is a cheap way to reduce latency while keeping the complexity manageable.
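Those explicit switch points are typically expressed with cond_resched(). The following is a hedged kernel-side sketch of my own (the function and buffer are hypothetical, not from any real driver) showing a CPU-intensive loop in process context that offers the scheduler a chance to run something else at regular intervals:

#include <linux/kernel.h>
#include <linux/sched.h>

/* Sum a large buffer without hogging the CPU for the whole duration. */
static void sum_large_buffer(const u32 *buf, size_t len, u64 *out)
{
    u64 sum = 0;
    size_t i;

    for (i = 0; i < len; i++) {
        sum += buf[i];              /* CPU-intensive work, no sleeping */
        if ((i & 0xFFFF) == 0)
            cond_resched();         /* voluntary rescheduling point    */
    }
    *out = sum;
}

With CONFIG_PREEMPT_VOLUNTARY (or on a non-preemptible kernel), the cond_resched() call is where another task may be allowed onto this processor, instead of having to wait for the entire loop to finish.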