The DLLs you download from Microsoft are built for generic x86 for the simple reason that they must run on a huge variety of machines.
At least up to Visual Studio 6.0 (I don't know whether this has changed since), Microsoft optimized its DLLs for size, not speed, because shrinking the DLL's overall footprint gave better performance than any other optimization the compiler could apply. The speedup from micro-optimization is clearly small compared with the speedup from not stalling the processor while it waits on memory. Real speed improvements come from reducing I/O or improving the underlying algorithm.
Only the few critical loops at the core of a program benefit from micro-optimization, simply because of the sheer number of times they execute. Perhaps 5-10% of your code falls into this category. You can be fairly sure that such critical loops have already been hand-tuned in assembly by Microsoft's software engineers, to a level the compiler alone would not reach. (I know that is expecting a lot, but I hope they do.)
As you can see, a larger DLL containing extra versions of the code tuned for different architectures would bring only drawbacks, since most of that extra code is rarely used and is never part of the critical loops that consume the bulk of your CPU cycles.