How are the type and size of data stored in memory?

double a; unsigned int b; 

At run time, how does the OS know how many bytes are associated with these variables, and how should their bits be interpreted? If it depends on the language / OS, assume C on Windows.

Is there a lookup table (LUT) that maps the bit representation of a variable identifier to a byte size and data type? From microcontroller assembly programming, I remember that the compiler magically knew how many bytes a variable had been allocated, and performed the zero extension / sign extension etc. accordingly.
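For concreteness, here is a minimal C sketch of what I mean (variable names mirror the declarations above; the printed sizes assume a typical x86-64 Windows compiler):

    #include <stdio.h>

    int main(void) {
        double a = 1.5;
        unsigned int b = 7;

        /* sizeof is resolved at compile time: the compiler, not the OS,
           knows how many bytes each variable occupies. */
        printf("sizeof a = %zu\n", sizeof a);  /* typically 8 */
        printf("sizeof b = %zu\n", sizeof b);  /* typically 4 */

        /* The raw bytes themselves carry no type information at all. */
        const unsigned char *raw = (const unsigned char *)&a;
        for (size_t i = 0; i < sizeof a; i++)
            printf("%02x ", raw[i]);
        printf("\n");
        return 0;
    }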

+4
2 answers

As far as the OS is concerned, these are just blocks of memory. It knows nothing about what they contain beyond "arbitrary strings of bits."

All the smarts are in the compiler: the compiler keeps track of each variable's type, then generates load and store instructions for the appropriate number of bytes, and generates code that operates on operands of the appropriate size and encoding scheme (for example, it knows to use an unsigned operation rather than a signed one for an unsigned int).
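A small illustration of that last point (a sketch, assuming a two's-complement target such as x86 Windows): the bits in memory are identical, and only the instructions the compiler chose make them "unsigned" or "signed":

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned int u = 0xFFFFFFFFu;
        int s;
        memcpy(&s, &u, sizeof s);  /* copy the raw bits, no conversion */

        /* Same bit pattern twice; the compiler emits an unsigned
           interpretation for u and a signed one for s. */
        printf("as unsigned: %u\n", u);  /* 4294967295 */
        printf("as signed:   %d\n", s);  /* -1 on two's-complement machines */
        return 0;
    }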

+5

It depends on the language and compiler. unsigned int is 32 bits these days, but that is not a universal rule; it depends on the language, compiler, and target. If you use int on an old 8086 or on a 16-bit processor (microcontroller), int can be 16 bits. double is a bit more standard: assuming IEEE 754 floating point, float is 32 bits and double is 64 bits. But again, this is language-, compiler-, and target-dependent.
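If you need a guaranteed width regardless of compiler and target, the fixed-width types from C99's <stdint.h> pin it down. A minimal sketch:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* "Plain" type widths vary with compiler and target ... */
        printf("int:    %zu bytes\n", sizeof(int));     /* 2 on a 16-bit target, 4 on most others */
        printf("double: %zu bytes\n", sizeof(double));  /* 8 where IEEE 754 binary64 is used */

        /* ... while the fixed-width types are exact by definition. */
        uint32_t u32 = 0;
        printf("uint32_t: %zu bytes\n", sizeof u32);    /* always 4 */
        return 0;
    }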

Then any padding between them, if they are defined back to back like this, also depends on the language, compiler, and target. Assuming they are 64 and 32 bits, the compiler may not bother to insert anything, since everything lines up nicely on 32-bit boundaries (a big assumption, based only on the two declarations you provided). But it may choose to insert 32 bits of padding so that both are 64-bit aligned.
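Structs make the same alignment rules directly observable via offsetof. A sketch (the struct name is illustrative; the commented values assume a typical x86-64 ABI):

    #include <stddef.h>
    #include <stdio.h>

    /* Mirrors the two declarations from the question, back to back. */
    struct pair {
        double a;        /* 8 bytes */
        unsigned int b;  /* 4 bytes */
    };

    int main(void) {
        printf("offset of a: %zu\n", offsetof(struct pair, a));  /* 0 */
        printf("offset of b: %zu\n", offsetof(struct pair, b));  /* 8 */
        /* Often 16 rather than 12: 4 bytes of tail padding keep
           a 8-byte aligned in arrays of struct pair. */
        printf("total size:  %zu\n", sizeof(struct pair));
        return 0;
    }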

+1

Source: https://habr.com/ru/post/1489386/

