From ISO/IEC 9899:
7.18.1.2 Minimum-width integer types
1 The typedef name int_leastN_t designates a signed integer type with a width of at least N, such that no signed integer type with lesser size has at least the specified width. Thus, int_least32_t denotes a signed integer type with a width of at least 32 bits.
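For reference, here is a minimal C99 sketch (assuming a hosted implementation; the variable name x and the test value are invented for illustration) that declares and inspects such a type:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* int_least32_t is the narrowest signed type the implementation
           offers with at least 32 bits; it may be exactly 32 bits wide
           or wider, depending on the platform. */
        int_least32_t x = 2000000000;   /* fits: the type has >= 32 bits */

        printf("sizeof(int_least32_t) = %zu bytes\n", sizeof(int_least32_t));
        printf("INT_LEAST32_MAX       = %jd\n", (intmax_t)INT_LEAST32_MAX);
        printf("x                     = %jd\n", (intmax_t)x);
        return 0;
    }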
Why should I use these types?
When I decide which type to use for a variable, I ask myself: "What is the largest value it will ever have to hold?"

Once I have the answer, I look for the smallest 2^N that is larger than that value and take the corresponding exact-width integer type.

In such a case I could also use a minimum-width integer type instead. But why would I? I already know the value will never be larger, so why take something that may be even wider than I need? A sketch of this selection process follows below.
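To make that concrete, a hypothetical example (the name item_count and the limit 100000 are invented for illustration):

    #include <stdint.h>

    /* Hypothetical: item_count is known never to exceed 100000.
       The smallest power of two above that is 2^17, which needs more
       than 16 bits, so the 32-bit exact-width type is the pick. */
    static int32_t item_count;

    /* The minimum-width type would also work, but it is only
       guaranteed to be AT LEAST 32 bits wide, so it may be wider. */
    static int_least32_t item_count_least;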
In all the other cases I can imagine, using them even seems invalid, for example:

"I want a type that holds at least ..." - The compiler cannot know at compile time what the largest user input will ever be, so fixing the type at compile time will not help.

"I have a variable where I cannot determine at compile time how large its values will get at run time." - Then how is the compiler supposed to know at compile time? It cannot pick a byte size either. A sketch of this second case follows below.
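As a hypothetical illustration of that second case (the fallback to intmax_t below is my own sketch, not something the standard prescribes):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* The magnitude of the input is only known at run time, so no
           "at least N bits" choice can be derived from it at compile
           time; this sketch simply falls back to the widest signed
           type, intmax_t. */
        intmax_t value;
        if (scanf("%jd", &value) == 1)
            printf("read %jd\n", value);
        return 0;
    }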
So what is the use of these types?
dhein