What is the use of minimum-width integer types?

From ISO/IEC 9899:

7.18.1.2 Minimum-width integer types

1 The typedef name int_leastN_t designates a signed integer type with a width of at least N, such that no signed integer type with lesser size has at least the specified width. Thus, int_least32_t denotes a signed integer type with a width of at least 32 bits.

Why should I use these types?

When I decide which type to use for a variable, I ask myself: "What is the largest value it will ever need to hold?"

Then I find that answer, look for the smallest power of two that is larger than it, and pick the corresponding exact-width integer type.

In that situation I could just as well use a minimum-width integer type. But why? I already know the value will never exceed that bound, so why pick a type that may cover even more than I need?
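To make the comparison concrete, here is a minimal sketch (the upper bound of 100 000 and the variable names are invented for illustration):

```c
#include <stdint.h>

/* Suppose the value never exceeds 100000; the smallest power of two
   above that fits in 32 bits. */
int32_t       counter_exact = 0;  /* exact-width: exactly 32 bits, and optional */
int_least32_t counter_least = 0;  /* minimum-width: at least 32 bits, always present */
```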

In every other case I can think of, these types seem pointless, for example:

"I want a type that holds at least ..." But the compiler cannot know what the largest value (for example, the largest user input) will ever be, so fixing the width at compile time does not help.

"I have a variable whose range of values I cannot determine until run time."

So how can the compiler know it at compile time? It cannot determine the required size either.

So what is the use of these types?

+6
2 answers

Because your compiler knows better than you what is good for you. On some processor architectures, for example, calculations on 8- or 16-bit types can be much slower than calculations on 32-bit types, because additional instructions are needed to mask operands and results to their width.

The C implementation on Cray UNICOS, for example, has only an 8-bit char type; everything else (short, int, long, long long) is 64 bits. If you force int16_t or int32_t, performance can suffer badly because the narrow stores require masking, and-ing, and or-ing. Using int_least32_t lets the compiler use its native 64-bit type instead.
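A rough sketch of how that plays out in code (the function and data are invented for illustration): declaring the values and the accumulator as int_least32_t only promises "at least 32 bits", so on an implementation like the Cray example, where no exact 32-bit type exists, the compiler can simply use its native 64-bit integer and avoid the masking entirely.

```c
#include <stddef.h>
#include <stdint.h>

/* Sum an array of "at least 32-bit" integers.  On a conventional machine
   int_least32_t is typically 32 bits; on the Cray example above it would
   be the native 64-bit type, with no narrow loads or stores to emulate. */
int_least32_t sum_values(const int_least32_t *values, size_t count)
{
    int_least32_t sum = 0;
    for (size_t i = 0; i < count; i++)
        sum += values[i];
    return sum;
}
```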

+5

So why pick a type that may cover even more than I need?

Because the exact-width type may not exist at all. For example, on a system where CHAR_BIT > 8, int8_t is not available, but int_least8_t is.

The idea is not that the compiler will guess how many bits you need. The idea is that the compiler always has an available type that satisfies your size requirement, even if it cannot offer the exact size.
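A small sketch of that difference, assuming a hosted implementation (per <stdint.h>, the macro INT8_MAX is defined exactly when int8_t exists):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
#if defined(INT8_MAX)
    /* int8_t is optional: it exists only where an 8-bit two's-complement
       type with no padding bits is available. */
    int8_t exact = 100;
    printf("int8_t max: %d\n", (int)INT8_MAX);
    (void)exact;
#endif
    /* int_least8_t is required by the standard on every implementation. */
    int_least8_t portable = 100;
    printf("int_least8_t max: %d\n", (int)INT_LEAST8_MAX);
    (void)portable;
    return 0;
}
```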

+5

Source: https://habr.com/ru/post/981604/

