What is faster and what is not is becoming harder to predict every day. The reason is that processors are no longer "simple", and with all the complex machinery behind them, the final speed may follow rules that completely contradict intuition.
The only way out is to measure and decide. Note also that what is faster depends on the details: even among compatible processors, an optimization for one may be a pessimization for another. For really critical parts, some software simply tries different approaches at runtime, during program initialization, and checks their timings.
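For illustration, here is a minimal sketch of that idea (not from the original answer; `sum_forward` and `sum_backward` are hypothetical stand-ins for two competing implementations of the same task):

```c
#include <stdio.h>
#include <stddef.h>
#include <time.h>

#define N 1000000
static int data[N];

/* Two interchangeable implementations of the same task. */
static long sum_forward(void) {
    long s = 0;
    for (size_t i = 0; i < N; i++) s += data[i];
    return s;
}

static long sum_backward(void) {
    long s = 0;
    for (size_t i = N; i-- > 0; ) s += data[i];
    return s;
}

/* Time one candidate over a few repetitions; return elapsed seconds. */
static double time_it(long (*fn)(void)) {
    clock_t start = clock();
    volatile long sink = 0;            /* keep the calls from being optimized away */
    for (int rep = 0; rep < 10; rep++) sink += fn();
    (void)sink;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

/* Chosen once during initialization; the rest of the program calls sum(). */
static long (*sum)(void);

static void init_fastest_sum(void) {
    sum = (time_it(sum_forward) <= time_it(sum_backward))
              ? sum_forward : sum_backward;
}

int main(void) {
    for (size_t i = 0; i < N; i++) data[i] = (int)(i & 0xFF);
    init_fastest_sum();
    printf("sum = %ld\n", sum());
    return 0;
}
```

The rest of the program then calls through the chosen function pointer, so the measurement cost is paid once at startup.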
However, as a rule of thumb, the fastest integer you can get is `int`. You should use other integer types only if you specifically need them (for example, `long` if it is longer and you need the extra range, or `short` if it is smaller but sufficient and you need to save memory).
Better yet, if you need a specific size, use a fixed-width standard type, or add a `typedef`, instead of just sprinkling `long` wherever you need it. That way it will be easier to support different compilers and architectures, and it will also be clearer to whoever reads the code in the future.