Optimal CPU utilization thresholds

I have software that I deploy to a Windows 2003 server. It runs continuously as a service, and it is the only application on that Windows box that matters to me. Part of its work is pulling data from the Internet and part is performing calculations on that data. It is multithreaded - I use thread pools of roughly 4-20 threads.

I won't bore you with all the details; suffice it to say that when I put more threads in the pool, more work happens in parallel and CPU usage goes up (as does the demand for other resources such as bandwidth, though that doesn't concern me - I have plenty).

My question is: should I just try to max out the CPU to get the best bang for my buck? Intuitively, running at 100% CPU doesn't seem right; even 95% seems high, as if I'm not leaving the OS much room to do what it needs to do. I don't know how to find the right balance. I suppose I could measure and measure and probably find that the best throughput comes at 90% or 91% CPU, etc., but...

I'm just wondering if there is a good rule of thumb for this. I don't want to assume my testing will account for every kind of workload variation. I'd rather play it a bit safe, but not too safe (otherwise I'm underusing my hardware).

What do you recommend? What is a sensible, performance-minded CPU utilization target for a multithreaded, mixed workload (some I/O, some CPU-bound) on Windows?

+4
5 answers

Yes, I'd suggest 100% is thrashing, so I wouldn't want to see processes running like that all the time. I've always aimed for 80% to get a balance between utilization and headroom for spikes / ad-hoc processes.

An approach I have used in the past is to slowly increase the pool size and measure the impact (both on CPU and on other constraints such as IO); you never know, you may find that IO suddenly becomes the bottleneck.
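
A minimal sketch of that stepping-and-measuring loop, in C# since .NET comes up later in the thread. RunWorkload is a hypothetical stand-in for submitting a batch of jobs at a given pool size; the performance counter name is the standard Windows one for overall CPU:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class PoolSizeProbe
    {
        // Placeholder hook: run the workload with the given number of worker
        // threads and return items completed per second.
        static double RunWorkload(int threads)
        {
            Thread.Sleep(1000);    // stand-in for real work
            return threads * 10.0; // dummy figure; return your real measurement
        }

        static void Main()
        {
            // "% Processor Time" on the "_Total" instance is the standard
            // Windows counter for overall CPU utilization.
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            cpu.NextValue(); // the first reading is always 0, so prime the counter

            for (int threads = 4; threads <= 20; threads += 2)
            {
                double throughput = RunWorkload(threads);
                double cpuPct = cpu.NextValue();
                Console.WriteLine("{0} threads: {1:F1} items/s at {2:F0}% CPU",
                                  threads, throughput, cpuPct);
                // Stop growing the pool once extra threads stop buying throughput,
                // or once CPU (or IO) becomes the bottleneck.
            }
        }
    }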

+2

CPU utilization shouldn't matter much for this I/O-intensive workload; what you care about is throughput. So try a hill-climbing approach: programmatically add and remove worker threads and track completion progress...

If adding a thread helps, add another one. If adding a thread hurts, remove it.

It eventually stabilizes.
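
A rough sketch of that loop, with SetWorkerCount and MeasureThroughput as hypothetical hooks into your own pool and metrics:

    using System;
    using System.Threading;

    class HillClimber
    {
        // Hypothetical hooks into your own thread pool and metrics.
        static void SetWorkerCount(int n) { /* resize your pool here */ }

        static double MeasureThroughput()
        {
            Thread.Sleep(5000); // let the new pool size run for a while
            return 0;           // return items completed per second over that interval
        }

        static void Main()
        {
            int workers = 4;
            int direction = +1; // +1 = add a thread next, -1 = remove one
            double previous = 0;

            while (true)
            {
                SetWorkerCount(workers);
                double current = MeasureThroughput();

                // If the last change helped, keep moving in the same direction;
                // if it hurt, reverse course.
                if (current < previous)
                    direction = -direction;

                previous = current;
                workers = Math.Max(1, Math.Min(64, workers + direction));
            }
        }
    }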

If this is a .NET-based application, hill climbing was added to the .NET 4 thread pool.

UPDATE:

Hill climbing is a control-theory-based approach to maximizing throughput; you can call it trial and error if you want, but it is a sound approach. In general there is no good "rule of thumb" here, because overheads and latencies vary so much that it is impossible to generalize. The focus should be on throughput and task/thread completion, not on CPU utilization. For example, it is quite easy to peg the cores with coarse- or fine-grained synchronization without actually improving throughput.

Also, regarding .NET 4: if you can reframe your problem as a Parallel.For or Parallel.ForEach, the thread pool will tune the number of threads for maximum throughput, so you don't have to worry about this yourself.
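
For instance (assuming the per-item work is independent), the simplest form looks roughly like this; the URL list and Process method are placeholders:

    using System.Collections.Generic;
    using System.Threading.Tasks;

    class Example
    {
        // Placeholder for the real per-item work (download plus calculation).
        static void Process(string url) { }

        static void Main()
        {
            var urls = new List<string> { "http://example.com/a", "http://example.com/b" };

            // The .NET 4 thread pool's hill-climbing heuristic chooses the degree of
            // parallelism; ParallelOptions.MaxDegreeOfParallelism can cap it if needed.
            Parallel.ForEach(urls, Process);
        }
    }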

-Rick

+4

Assuming nothing of importance other than the OS runs on the machine:

If your load is constant, you should aim for 100% CPU utilization; anything else is wasted CPU. Remember that the OS schedules the threads, so it is still able to run - it is hard to starve the OS with a well-behaved program.

However, if your load is variable and you expect peaks you should account for, I'd say around 80% CPU is a good threshold to use, unless you know exactly how your load will vary and how much CPU it will demand, in which case you can aim for the exact number.

+3

If you just give your threads a low priority, the OS will do the rest and take cycles as it needs them for its own work. Server 2003 (and most server operating systems) is very good at this; there is no need to try to manage it yourself.
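
A minimal illustration of that, assuming a .NET stack: lower the priority of either the whole service process or the individual worker threads, and let the scheduler handle the rest:

    using System.Diagnostics;
    using System.Threading;

    class LowPriorityWorkers
    {
        static void Main()
        {
            // Option 1: lower the priority of the whole service process.
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;

            // Option 2: lower the priority of individual worker threads you create.
            var worker = new Thread(() => { /* do work */ });
            worker.Priority = ThreadPriority.BelowNormal;
            worker.Start();
            worker.Join();
        }
    }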

+1

I have also used 80% as a rule of thumb for target CPU utilization. As several others have noted, this leaves some headroom for sporadic bursts of activity and helps avoid thrashing the CPU.

Here is some older, but still relevant, advice from the WebLogic team on the subject: http://docs.oracle.com/cd/E13222_01/wls/docs92/perform/basics.html#wp1132942

If you feel your workload is very even and predictable, you could push that target a little higher; but unless your user base is exceptionally tolerant of periodic slow responses and your project budget is incredibly tight, I'd recommend adding more resources to your system (adding a CPU, using a CPU with more cores, etc.) rather than making the risky move of trying to squeeze another 10% of CPU utilization out of your existing platform.

0

Source: https://habr.com/ru/post/1301900/

