Should I use the smallest type?

Once upon a time I was told that you should always use the smallest possible type to store your data, but almost every piece of code I read does not do this; it usually uses 32-bit integers.

I've heard the rationale that a 32-bit value is fetched as fast as an 8-bit value, but also that processors have some way of fetching several smaller values at once. Is that correct?

So, if I use 4 bytes instead of 4 integers, shouldn't the compiler be able to optimize this so that the 4 bytes are loaded and stored through a single 32-bit register?

Or is all this just premature optimization, and the potential performance gain negligible?

+3
3 answers

Premature optimization, really! That said, once you do optimize, it also depends on your architecture. On ARM, for example, memory accesses are done in 32-bit words (some instructions can operate on bytes, but they still perform a 32-bit access and then mask/shift behind the scenes). If you use a byte variable, the compiler will often give it four actual bytes of RAM so that it can be accessed faster (not to mention that the CPU will fault if you try an unaligned access without special code to handle it).

There is an argument for using "int" for everything, since it matches the processor's natural word size, but in general just use the type of the size you need and let the compiler worry about optimization :D

+4

Use the natural word size unless you have a reason not to. On modern processors an 8-bit access is no faster than a full-word access, so the smaller type usually buys you nothing in speed. Measure before micro-optimizing.

+3

Fetching a 32-bit value is as fast as fetching an 8-bit one on a 32-bit machine, so there is no speed penalty for using int. On the other hand, smaller types save memory and cache: four times as many values fit in the same space (32-bit vs. 8-bit), which can matter for large arrays.

Assuming we're talking about C or C++, an optimizing compiler tends to make the right decisions for you, but you can control this behavior explicitly if you need to do your own packing into structures, etc.

However, there are better reasons to use a type that matches the domain of your data: clarity, maintainability, and so on. I think those trump this optimization in 99% of cases.

+1

Source: https://habr.com/ru/post/1794449/
