Why is the size of any type of data dependent on the machine?

We know that it depends on the machine or the compiler, but why and how?

+3
6 answers

To begin with, a machine's architecture, say 16-bit or 64-bit, determines how wide the address bus is, which effectively determines how much memory you can use without resorting to tricks such as virtual memory.

Typically, the hardware registers inside the CPU have the same bit width as most of the rest of the architecture; on a 32-bit CPU it will therefore usually be most efficient to load and store data in 32-bit chunks. Some environments even require that all data be aligned on 32-bit boundaries, i.e. you cannot load data from a memory address that is not divisible by 4.

You can get around all of these limitations in software, but for the most efficient programs you want your compiler's data types to match the underlying hardware closely.

+2

At bottom it comes down to the processor's word size: on a 32-bit machine and a 64-bit machine the registers have different widths, so the compiler picks type sizes that map efficiently onto them.

That is why some environments, such as Matlab and .NET, also offer explicitly sized types like int8, int16, int32 and int64, which have the same width everywhere.

+1

It depends on both the machine and the compiler, as the question says.

The C standard only guarantees minimum sizes (and a relative ordering) for the built-in types; the compiler chooses the actual ones. For example, in C/C++ a long is typically 4 bytes (32 bits) when compiling for a 32-bit target and 8 bytes (64 bits) on many 64-bit targets.

0

On a 32-bit machine the natural word is 4 bytes; on a 64-bit machine it is 8. Compilers size types to fit those words, which is why the sizes differ from machine to machine.

0

Not everything varies, though: in MS .NET a byte is 8 bits, just like SQL's tinyint, on every machine.

0

It depends on the language as much as on the machine.

In .NET the sizes are fixed by the specification: an int is 32 bits, a short is 16, and a long is 64, on every platform.

C++ is different. The standard does not fix exact sizes; it only guarantees, for instance, that an int is at least as large as a short and that a long is at least as large as an int. The exact widths are left to the implementation.

In practice, a compiler targeting a 32-bit processor will normally make an int 32 bits, whereas compilers for older 16-bit processors made it 16 bits.

On a 64-bit processor a long is often 64 bits while an int stays at 32. On 16-bit systems the sizes were smaller still, and nearly every combination has existed somewhere.

So in C++ the sizes ultimately depend on the compiler and the CPU it targets. Code that assumes a particular size for an int in C++ may break when recompiled elsewhere; when the width matters, use explicitly sized types instead.

0

Source: https://habr.com/ru/post/1722155/

