I have software that I deploy to a Windows 2003 server. It runs continuously as a service, and it is the only application on that box that matters to me. Part of its work is pulling data from the Internet, and part is performing calculations on that data. It is multithreaded; I use a thread pool with anywhere from about 4 to 20 threads.
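Schematically, the setup looks something like the sketch below (Java is just for illustration; the class names, pool size, and placeholder methods are not my real code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Worker {
    public static void main(String[] args) {
        // Pool size is the knob I'm tuning: somewhere between 4 and 20 threads.
        int poolSize = Integer.parseInt(args.length > 0 ? args[0] : "8");
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);

        for (int i = 0; i < 100; i++) {
            pool.submit(() -> {
                byte[] data = fetchFromInternet(); // I/O-bound part
                compute(data);                     // CPU-bound part
            });
        }
        pool.shutdown();
    }

    // Placeholders for the real download and calculation steps.
    static byte[] fetchFromInternet() { return new byte[0]; }

    static void compute(byte[] data) { /* number crunching */ }
}
```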
I won't bore you with all the details, but suffice it to say that when I add more threads to the pool, more work happens in parallel and CPU usage goes up (as does the demand for other resources, such as bandwidth, but that doesn't concern me; I have plenty).
My question is: should I just try to max out the CPU to get the best bang for my buck? Intuitively, running at 100% CPU doesn't seem right; even 95% seems high, as if I'm not leaving the OS much headroom to do what it needs to do. I don't know how to find the right balance. I suppose I could measure and measure and probably find that throughput peaks at, say, 90% or 91% CPU, but...
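If I did go the measurement route, I imagine it would be something like sampling system-wide CPU load and nudging the pool size up or down. A minimal sketch of the sampling half, using the `com.sun.management` extension of `OperatingSystemMXBean` (the thresholds in the comment are made up, not a recommendation):

```java
import com.sun.management.OperatingSystemMXBean;
import java.lang.management.ManagementFactory;

public class CpuProbe {
    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();

        // Sample system-wide CPU usage once a second; a real version would
        // feed this into whatever logic grows or shrinks the thread pool.
        while (true) {
            double load = os.getSystemCpuLoad(); // 0.0-1.0, or negative if unavailable
            if (load >= 0) {
                System.out.printf("CPU: %.0f%%%n", load * 100);
                // Hypothetical policy: shrink the pool above ~90%, grow it below ~80%.
            }
            Thread.sleep(1000);
        }
    }
}
```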
I'm just wondering whether there is a good rule of thumb for this. I don't want to assume that my testing will cover every kind of workload variation. I'd rather play it a bit safe, but not too safe (otherwise I'm underutilizing my hardware).
What do you recommend? What is a reasonable, performance-minded rule for CPU utilization with a multithreaded, mixed-load application (some I/O-bound work, some CPU-bound work) on Windows?
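For what it's worth, the closest thing to a rule I've come across is Goetz's pool-sizing heuristic from Java Concurrency in Practice, sketched below with made-up example numbers. Note that it sizes the pool *for* a target utilization rather than telling you what that target should be, which is really what I'm asking:

```java
public class PoolSizing {
    // Goetz's heuristic from "Java Concurrency in Practice":
    // threads = cores * targetUtilization * (1 + waitTime / computeTime)
    static int suggestedPoolSize(int cores, double targetUtilization,
                                 double waitTime, double computeTime) {
        return (int) Math.ceil(cores * targetUtilization * (1 + waitTime / computeTime));
    }

    public static void main(String[] args) {
        // Example numbers only: a 2-core box, aiming for 90% CPU, where each
        // task waits on the network 3x as long as it computes.
        System.out.println(suggestedPoolSize(2, 0.90, 3.0, 1.0)); // prints 8
    }
}
```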