I wrote this code:
#include <stdio.h>

int main(void)
{
    /* sizeof yields a size_t, so %zu is the correct format specifier rather than %d */
    printf("Size of short int: %zu \n", sizeof(short));
    printf("Size of int: %zu \n", sizeof(int));
    printf("Size of long int: %zu \n", sizeof(long));
    printf("Size of float: %zu \n", sizeof(float));
    printf("Size of double: %zu \n", sizeof(double));
    printf("Size of long double: %zu \n", sizeof(long double));
    return 0;
}
The output was:
Size of short int: 2
Size of int: 4
Size of long int: 4
Size of float: 4
Size of double: 8
Size of long double: 12
Naturally, integer and floating-point types differ from each other, but why would a compiler allocate the same amount of memory to long as it does to int? long was designed to handle large values, yet it is useless if it behaves exactly like int, as it does above. The floating-point types, by contrast, do each get progressively more storage.
My question, really, is: why does long exist at all if there are machines (or compilers) that don't take advantage of it?
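For context, sizeof alone does not show the representable range; <limits.h> does. Here is a minimal sketch (my own illustration, not part of the original program) that compares the ranges of int and long on the machine in question:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Compare the maximum values each type can hold on this platform. */
    printf("INT_MAX:  %d\n", INT_MAX);
    printf("LONG_MAX: %ld\n", LONG_MAX);
    /* If sizeof(int) == sizeof(long), these two numbers come out identical,
       which is exactly the situation shown in the output above. */
    return 0;
}

On the system that produced the output above, both presumably print 2147483647.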
From K&R (The C Programming Language):
The intent is that short and long should provide different lengths of integers where practical; int will normally be the natural size for a particular machine. short is often 16 bits long, and int either 16 or 32 bits. Each compiler is free to choose appropriate sizes for its own hardware, subject only to the restriction that shorts and ints are at least 16 bits, longs are at least 32 bits, and short is no longer than int, which is no longer than long.
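In other words, the standard only pins down those minimum widths and the ordering, never exact sizes. As a sketch (assuming a C11 compiler, since it uses _Static_assert), the quoted constraints can be written as compile-time checks that any conforming implementation must satisfy:

#include <limits.h>

/* The guarantees quoted from K&R, expressed as compile-time checks:
   short and int must cover at least a 16-bit range, long at least a
   32-bit range, and the types must be ordered by size. */
_Static_assert(SHRT_MAX >= 32767, "short must cover at least 16 bits");
_Static_assert(INT_MAX >= 32767, "int must cover at least 16 bits");
_Static_assert(LONG_MAX >= 2147483647L, "long must cover at least 32 bits");
_Static_assert(sizeof(short) <= sizeof(int) && sizeof(int) <= sizeof(long),
               "short is no longer than int, which is no longer than long");

int main(void) { return 0; }

Everything beyond these guarantees, including whether long is actually wider than int, is left to the implementation.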
Is there a "thumb rule" if you want when the machine compiler wants to allocate more memory for longer than int? And vice versa? What are the criteria?