How do I use multi-threaded rendering optimizations (Graphics Jobs)?
I have a project (more an application than a game) that uses dozens of cameras in the scene, all rendering at the same time. (Don't ask why it has to be this way!)
Needless to say, when you run the application it nearly maxes out one CPU core, since the rendering pipeline is single-threaded, and as a result the frame rate drops. (GPU, memory, etc. are all well below 100% load; the CPU is the bottleneck here.)
I was very pleased to see that Unity has the "Graphics Jobs (Experimental)" option, which is specifically designed to split rendering across multiple threads and, as such, across multiple cores.
However, with this option enabled and DirectX12 moved to the top of the list of graphics APIs, I would expect the CPU to now be fully used, i.e. all cores actively involved in rendering. Yet the application still uses only about 40% of my CPU's capacity, and the frame rate stays low. Surely the frame rate should only drop once the CPU is actually maxed out? It looks like the individual cores are not being used evenly. Why can't I maximize the frame rate by driving close to 100% of all my CPU cores (in total, including any other running programs)?
In short, I'd like to know how I can get Unity to fully use all my CPU cores for the highest possible frame rate with this many cameras in the scene at once. I thought Graphics Jobs would solve this... Am I using it incorrectly, or missing the right combination of settings?
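In case it helps with diagnosis: here is a minimal sketch I use in the built player to confirm which graphics API was actually selected (DX12 can silently fall back to DX11) and whether Unity reports multithreaded rendering as active. The class name `GraphicsJobsCheck` is just illustrative; the `SystemInfo` properties are standard Unity API.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Attach to any GameObject in the scene; logs once at startup.
public class GraphicsJobsCheck : MonoBehaviour
{
    void Start()
    {
        // Confirms which API the player actually initialized,
        // e.g. GraphicsDeviceType.Direct3D12 vs Direct3D11.
        Debug.Log("Graphics API: " + SystemInfo.graphicsDeviceType);

        // True when rendering runs on threads separate from the main thread.
        Debug.Log("Multithreaded rendering: " + SystemInfo.graphicsMultiThreaded);
    }
}
```

In my case both log lines look correct, which is why I suspect a settings combination rather than the API falling back.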
As an aside: my CPU is an i7-4790 @ 3.6 GHz, my GPU is a 980 Ti running DX12, and I have 32 GB of RAM.