It depends on the language and compiler. These days unsigned int is usually 32 bits, but that is not a hard rule; it depends on the language, compiler, and target. On an early 8086 or a 16-bit microcontroller, int can be 16 bits. double is a bit more standardized, in that an IEEE 754 single-precision float is 32 bits and a double is 64 bits. But again, this is language-, compiler-, and target-dependent.
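If you want to check on your own toolchain rather than assume, a minimal C sketch (the typical sizes in the comments assume a common 32/64-bit desktop target):

```c
#include <stdio.h>

int main(void) {
    /* All of these sizes are implementation-defined in C. */
    printf("sizeof(int)    = %zu\n", sizeof(int));    /* often 4, but 2 on 16-bit targets */
    printf("sizeof(float)  = %zu\n", sizeof(float));  /* usually 4: IEEE 754 single */
    printf("sizeof(double) = %zu\n", sizeof(double)); /* usually 8: IEEE 754 double */
    return 0;
}
```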
As for any padding between them, if they are declared back to back like that, it also depends on the language, compiler, and target. Assuming they are 64 and 32 bits (the big assumption here, based on the two lines you provided), the compiler may not bother to insert anything, since everything already lines up nicely on 32-bit boundaries. But it may instead choose to insert 32 bits of padding so that both fields are 64-bit aligned.
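You can observe the layout your compiler actually chose with offsetof. A sketch (the struct name pair is hypothetical, and the offsets in the comments assume a typical 64-bit desktop ABI):

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical struct mirroring two back-to-back declarations:
   a 64-bit double followed by a 32-bit int. */
struct pair {
    double d; /* offset 0 on most targets */
    int    i; /* offset 8: already 32-bit aligned, so no padding before it */
};

int main(void) {
    printf("offsetof(struct pair, d) = %zu\n", offsetof(struct pair, d));
    printf("offsetof(struct pair, i) = %zu\n", offsetof(struct pair, i));
    /* sizeof is often 16 rather than 12: the compiler may add 4 bytes of
       trailing padding so that in an array of struct pair every d stays
       64-bit aligned. */
    printf("sizeof(struct pair)      = %zu\n", sizeof(struct pair));
    return 0;
}
```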