Benchmarking is a fine art. What you are describing is physically impossible; the <= and < operators simply generate different processor instructions that execute at the same speed. I modified your program a bit, calling DoIt() ten times and dropping two zeros from the for() loop so I would not have to wait forever:
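The shape of the modified benchmark is roughly this (a Python sketch of my own; the function names and loop count are illustrative, not taken from the original program, which was C#):

```python
import time

def less_than(n):
    # Loop guarded by the < operator
    total = 0
    i = 0
    while i < n:
        total += i
        i += 1
    return total

def less_than_or_equal(n):
    # Same work, loop guarded by the <= operator
    total = 0
    i = 0
    while i <= n - 1:
        total += i
        i += 1
    return total

def do_it(n=100_000):
    # Time both variants with a high-resolution clock,
    # analogous to Stopwatch in the original C# program
    results = []
    for name, fn in (("Less Than Equal To", less_than_or_equal),
                     ("Less Than", less_than)):
        start = time.perf_counter()
        fn(n)
        results.append((name, time.perf_counter() - start))
    return results

if __name__ == "__main__":
    # Run DoIt ten times so the slow first (cold) run stands out
    for _ in range(10):
        for name, elapsed in do_it():
            print(f"{name} Method Time Elapsed: {elapsed:.2f}")
```

Running DoIt() repeatedly is what makes the warm-up cost visible: only the first iteration pays it, and every later pair of timings is a fresh sample of the noise.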
x86 jitter:
Less Than Equal To Method Time Elapsed: 0.5
Less Than Method Time Elapsed: 0.42
Less Than Equal To Method Time Elapsed: 0.36
Less Than Method Time Elapsed: 0.46
Less Than Equal To Method Time Elapsed: 0.4
Less Than Method Time Elapsed: 0.34
Less Than Equal To Method Time Elapsed: 0.33
Less Than Method Time Elapsed: 0.35
Less Than Equal To Method Time Elapsed: 0.35
Less Than Method Time Elapsed: 0.32
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.32
Less Than Equal To Method Time Elapsed: 0.34
Less Than Method Time Elapsed: 0.32
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.31
Less Than Equal To Method Time Elapsed: 0.34
Less Than Method Time Elapsed: 0.32
Less Than Equal To Method Time Elapsed: 0.31
Less Than Method Time Elapsed: 0.32
x64 jitter:
Less Than Equal To Method Time Elapsed: 0.44
Less Than Method Time Elapsed: 0.4
Less Than Equal To Method Time Elapsed: 0.44
Less Than Method Time Elapsed: 0.45
Less Than Equal To Method Time Elapsed: 0.36
Less Than Method Time Elapsed: 0.35
Less Than Equal To Method Time Elapsed: 0.38
Less Than Method Time Elapsed: 0.34
Less Than Equal To Method Time Elapsed: 0.33
Less Than Method Time Elapsed: 0.34
Less Than Equal To Method Time Elapsed: 0.34
Less Than Method Time Elapsed: 0.32
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.35
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.42
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.31
Less Than Equal To Method Time Elapsed: 0.32
Less Than Method Time Elapsed: 0.35
The only real signal you get out of this is the slow execution of the first DoIt(), also visible in your test results; that is the cost of just-in-time compilation. And the most important signal: it is noisy. The averages of the two loops are approximately equal, and the standard deviation is quite large.
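You can check the "approximately equal averages, large spread" claim directly against the numbers above. A quick computation over the ten warmed-up-and-cold pairs from the x86 run:

```python
from statistics import mean, stdev

# Timings copied from the x86 output above, in seconds
lte = [0.5, 0.36, 0.4, 0.33, 0.35, 0.32, 0.34, 0.32, 0.34, 0.31]
lt = [0.42, 0.46, 0.34, 0.35, 0.32, 0.32, 0.32, 0.31, 0.32, 0.32]

print(f"<=  mean {mean(lte):.3f}, stdev {stdev(lte):.3f}")
print(f"<   mean {mean(lt):.3f}, stdev {stdev(lt):.3f}")
```

The means differ by less than one standard deviation, so the gap between the two operators is indistinguishable from the run-to-run noise.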
Otherwise you see the kind of signal you always get with micro-optimization: code execution is not very deterministic. Beyond the .NET overhead, which is usually easy to eliminate, your program is not the only one running on your machine. It has to share the processor; even the WriteLine() call already has an effect. It is serviced by the conhost.exe process, which runs concurrently with your test while your test code has already entered the next for() loop. And everything else that happens on your machine, kernel code and interrupt handlers, gets a turn as well.
Codegen can play a role too; one thing to try, for example, is simply swapping the two calls. The processor itself executes code in a very non-deterministic way in general. The state of the processor caches and how much historical data the branch prediction logic has gathered matter a great deal.
When I benchmark, I consider a difference of 15% or less not statistically significant. Hunting down differences smaller than that is quite difficult; you have to study the machine code carefully. Silly things like a mis-aligned branch target, or a variable not getting stored in a processor register, can cause big effects on execution time. Not something you can ever fix; the jitter does not have enough knobs to tweak.
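That 15% rule of thumb is easy to encode as a small helper (my own illustration, not part of the original post), and applying it to the numbers above shows why only the cold first iteration stands out:

```python
def is_significant(a, b, threshold=0.15):
    """Treat a timing difference as meaningful only if it exceeds
    the threshold relative to the faster of the two measurements."""
    faster = min(a, b)
    return abs(a - b) / faster > threshold

# The 0.5 vs 0.42 pair from the first (cold) iteration clears the bar,
print(is_significant(0.5, 0.42))
# but a typical warmed-up pair like 0.34 vs 0.32 does not.
print(is_significant(0.34, 0.32))
```

Anything below the threshold should be treated as noise rather than evidence that one operator is faster than the other.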