Why should I use multiple threads for one processing task, if I can increase the priority of the program?

I previously asked about processing a data stream, and someone suggested putting the data in a queue and processing it on another thread. If that turned out to be too slow, I should use multiple threads.
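For context, here is a minimal sketch of that earlier suggestion — one bounded queue with a single worker thread draining it. The class name, queue size, and the `"packet-"` items are made up for illustration, in Java:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueWorkerSketch {
    // A bounded queue decouples the thread that receives the data
    // from the thread that processes it.
    private static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String item = queue.take();   // blocks until data is available
                    process(item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // exit cleanly when asked to stop
            }
        }, "worker");
        worker.start();

        // Stand-in for the incoming data stream.
        for (int i = 0; i < 10; i++) {
            queue.put("packet-" + i);             // blocks if the queue is full
        }

        Thread.sleep(500);                        // give the worker time to drain the queue
        worker.interrupt();
    }

    private static void process(String item) {
        System.out.println("processed " + item);
    }
}
```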

However, I am using a single-core system. So my question is: why not just raise the priority of my application so it gets more CPU time from the OS? I am writing a server application, and it will be the only thing doing a lot of work on the machine.

What would be the pros and cons of raising the priority? :)

+4
source share
5 answers

If you have only one core, then the only way multithreading can help you is if chunks of the work depend on something other than the CPU, so that one thread can do some work while another is waiting for data from the disk or the network.
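As a sketch of that overlap on a single core: one thread blocks on a pretend read while the main thread keeps computing, then picks up the result. The class name and the sleep standing in for I/O are assumptions for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OverlapSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // "I/O" task: pretend to wait on the disk or network.
        Future<String> data = pool.submit(() -> {
            Thread.sleep(300);          // stands in for a blocking read
            return "payload";
        });

        // Meanwhile the main thread keeps the (single) core busy with real work.
        long acc = 0;
        while (!data.isDone()) {
            acc += 1;                   // stands in for useful CPU work
        }

        System.out.println("got " + data.get() + " after " + acc + " iterations of work");
        pool.shutdown();
    }
}
```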

If your application has a graphical interface, it can benefit from multithreading in that, although the processing would not finish any faster (in fact slightly slower, though probably not noticeably so), the application can still respond to user input in the meantime.

If you have two or more cores, you can also parallelise CPU-bound operations, although how easy that is varies from trivial to impossible depending on the kind of operation. This is not relevant to your case, but it is worth considering if the code you write might later run on a multi-core system.
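For the multi-core case, a minimal sketch of parallelising a CPU-bound computation; the class name, the workload, and the 50-million range are arbitrary choices for illustration:

```java
import java.util.stream.LongStream;

public class ParallelSumSketch {
    public static void main(String[] args) {
        long n = 50_000_000L;

        long t0 = System.nanoTime();
        long serial = LongStream.rangeClosed(1, n).map(x -> x * x % 7).sum();
        long t1 = System.nanoTime();

        // parallel() splits the range across the common fork-join pool,
        // which by default has roughly one worker per core.
        long parallel = LongStream.rangeClosed(1, n).parallel().map(x -> x * x % 7).sum();
        long t2 = System.nanoTime();

        System.out.printf("serial:   %d in %d ms%n", serial, (t1 - t0) / 1_000_000);
        System.out.printf("parallel: %d in %d ms%n", parallel, (t2 - t1) / 1_000_000);
    }
}
```

On a single-core machine the two timings should be about the same; with more cores the parallel run should be noticeably faster.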

Raising priority is probably a bad idea though, especially when you have only one core (one advantage of multi-core systems is that people who fiddle with priorities can't do quite so much damage).

All threads have priorities, which are a function of their process's priority and their own priority within that process. A low-priority thread in a high-priority process beats a high-priority thread in a low-priority process.
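In Java, the per-thread part of this is exposed through `Thread.setPriority` (values 1–10); how those values map onto the OS scheduler, and how much difference they make, is platform-dependent. A small sketch (class name and the one-second spin are made up) that runs two busy loops at the lowest and highest Java priorities so you can compare how much work each gets done:

```java
public class PrioritySketch {
    public static void main(String[] args) {
        Runnable spin = () -> {
            long count = 0;
            long end = System.currentTimeMillis() + 1000;
            while (System.currentTimeMillis() < end) {
                count++;                         // pure CPU spin for one second
            }
            System.out.println(Thread.currentThread().getName() + ": " + count);
        };

        Thread low  = new Thread(spin, "low");
        Thread high = new Thread(spin, "high");
        low.setPriority(Thread.MIN_PRIORITY);    // 1
        high.setPriority(Thread.MAX_PRIORITY);   // 10

        low.start();
        high.start();
    }
}
```

The process-level priority is set from outside the JVM (Task Manager, `nice`, etc.), not from this API.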

The scheduler hands out CPU time slices round-robin to the highest-priority threads that are ready to run. Only if there is CPU left over (which in your single-core case means there are zero runnable threads at that priority) does it hand slices to the next priority down, and so on.

Most of the time, most threads are not doing much, as you can see from the fact that overall CPU usage on most systems is below 100% (hyperthreading muddies this a bit: the internal scheduling within the cores means a hyperthreaded system can be completely saturated while apparently running at only 70%). Either way, things generally get done, and a thread that suddenly has a lot of work to do will get through it at normal priority in much the same time as it would at a higher one.

However, while the benefit to that busy thread of running at a higher priority is usually little or nothing, the downside is large. Because it is the only thread getting any CPU time, every other thread is starved, so every other process freezes for a while. Eventually the scheduler notices that they have all been waiting for around 3 seconds and corrects this by boosting them all to the highest priority and giving them larger slices than usual. Now we get a burst of activity, because the threads that received no time are suddenly the highest-priority threads demanding CPU. Then the boosts wear off for every thread except the high-priority one, and the system settles down again, though probably with a good few more applications showing "Not Responding" in their title bars. It is far from ideal, but it is an effective way of coping with a thread at higher-than-usual priority hogging the core for so long.

The boosted threads gradually drop back down in priority, and we end up right back in the situation where the one higher-priority thread is the only one that can run.

For added fun, if our high-priority thread depends in any way on services provided by the lower-priority threads, it can end up waiting on them (the classic priority-inversion problem). Hopefully it blocks in a way that stops it doing any further damage, but quite possibly not.

In all, thread priorities, and even more so process priorities, should be applied with great care. Raised priorities are really only appropriate for work that finishes quickly and is either important to the progress of other threads (for example, some OS processes run at higher priority, and the finalizer thread in .NET runs at a higher priority than the rest of its process), or where delays of a few milliseconds would be a real problem (which mostly means real-time media work).
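To make the "finishes quickly" case concrete, here is a sketch of the kind of thread where a raised priority is arguably reasonable: it wakes often, does a tiny, bounded amount of time-sensitive work, and goes straight back to sleep. The class name, the 10 ms period, and the `checkDeadlines()` placeholder are all hypothetical:

```java
public class WatchdogSketch {
    public static void main(String[] args) {
        Thread watchdog = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                checkDeadlines();        // tiny, bounded, time-sensitive work
                try {
                    Thread.sleep(10);    // then immediately yield the core again
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "watchdog");
        watchdog.setDaemon(true);
        watchdog.setPriority(Thread.MAX_PRIORITY); // acceptable only because it barely runs
        watchdog.start();

        // ... the rest of the application runs at normal priority ...
    }

    private static void checkDeadlines() { /* hypothetical: cheap, bounded check */ }
}
```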

+2
source

If you have several cores/processors in your system, raising the priority of a single-threaded program will not improve its performance, because the other cores will simply not be used.

The only way to use multiple processors is to write your program using multiple threads / processes.

That said, setting an application to a very high priority can lead to some performance improvement, but I have never seen it make a significant difference, at least in my own tests.

Edit: I see now that you are using only one core. In that case your program will simply get scheduled onto the CPU more often than other processes with lower priority. That may bring a slight improvement, but nothing dramatic. Since we cannot know which other applications are running on your system at the same time, the golden rule here is to try different priority levels yourself and see what happens. That is the only reliable way to find out whether things get faster or not.
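One way to run that experiment is to time a fixed workload and launch the same program under different OS priorities while your normal workload is running. The class name and workload below are made up; the launch commands in the comment are standard `nice` (Linux) and `start` (Windows cmd) usage:

```java
public class PriorityBench {
    public static void main(String[] args) {
        // Run this several times under different OS priorities and compare the times, e.g.
        //   Linux:   nice -n 19 java PriorityBench      vs   sudo nice -n -10 java PriorityBench
        //   Windows: start /low java PriorityBench      vs   start /high java PriorityBench
        long t0 = System.nanoTime();
        long acc = 0;
        for (long i = 0; i < 200_000_000L; i++) {
            acc += i % 13;                        // fixed CPU-bound workload
        }
        long ms = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("result " + acc + " in " + ms + " ms");
    }
}
```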

+1
source

It all depends on why data processing is slow.

If the processing is slow because it is a CPU-intensive operation, then splitting it across several threads on a single-core system will not do you any good. In that case, raising the task's priority will give some benefit, assuming other processes are competing for CPU (user) time.

However, if the processing is slow because of some non-CPU constraint (for example, it is I/O-bound or depends on another process), then:

  • Raising the task's priority will have little effect. Priority does not speed up I/O, and if there is a dependency on another process in the system, you may actually hurt performance.

  • Splitting the processing across several threads can let the CPU-intensive parts keep working while other threads wait for the non-CPU parts (e.g. I/O) to complete; see the sketch after this list.
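A sketch of that second point: several blocking waits overlapped with a small thread pool, so the total wall-clock time is roughly one wait rather than four, even on one core. The class name, the 200 ms sleeps standing in for I/O, and the `"chunk-"` labels are assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class IoSplitSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        long t0 = System.currentTimeMillis();

        // Four "reads" that each block for ~200 ms (stand-ins for disk or network I/O).
        List<Future<String>> results = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final int id = i;
            results.add(pool.submit(() -> {
                Thread.sleep(200);               // the thread waits; the core is free
                return "chunk-" + id;
            }));
        }

        for (Future<String> f : results) {
            System.out.println("got " + f.get());
        }
        // Done serially this would take ~800 ms; overlapped, roughly ~200 ms even on one core.
        System.out.println("elapsed " + (System.currentTimeMillis() - t0) + " ms");
        pool.shutdown();
    }
}
```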

+1
source

Raising the priority of a single-threaded process simply gives it more (or longer) time slices on the same core the process is running on. That core can still only do one thing at a time.

If you spin off a thread for the data processing, it can run on a different CPU core (assuming a multi-core system), so it and your main thread actually run at the same time. Much more effective.

0
source

If you use only one thread, the server application can serve only one request at a time, regardless of its priority. With multiple threads, it can serve many requests at the same time.
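A minimal sketch of that shape of server: one accept loop, with each connection handled on a pool thread so slow clients do not block the others. The class name, port 9000, pool size, and echo handling are placeholders for illustration:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadedEchoServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(8); // up to 8 requests in flight
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();        // one accept loop...
                pool.submit(() -> handle(client));      // ...each request gets its own thread
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line = in.readLine();
            out.println("echo: " + line);               // stand-in for real request handling
        } catch (IOException e) {
            // a failed client should not take the server down
        }
    }
}
```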

0
source

Source: https://habr.com/ru/post/1391547/
