In lower-level languages, the sizes of the primitive data types often follow from what the CPU can process natively. For example, in C, an int is only guaranteed to be "at least 16 bits in size," and its actual size may vary by architecture so that "the int type must be the integer type that is most efficient for the target processor" (source). This means that if your code makes careless assumptions about the size of int, it can break badly when you port it from, say, 32-bit x86 to 64-bit PowerPC.
Java, as noted above, is different: an int is always exactly 32 bits, so you never have to worry about its size changing when the same code runs on a different architecture. The trade-off, as also mentioned above, is performance. On an architecture whose native word size is not 32 bits, those ints have to be mapped onto whatever size the processor does handle natively (a small penalty), or worse, if the processor can only handle smaller integers natively, each int operation may require several CPU instructions.
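To make the contrast concrete, here is a minimal Java sketch (not part of the original answer): the SIZE constants on the wrapper classes report the widths the language specification fixes, so the output is identical on any architecture the JVM runs on, whereas the analogous sizeof(int) in C may differ between platforms.

```java
// Java's primitive widths are fixed by the language specification,
// not by the CPU: these constants print the same values on every JVM.
public class PrimitiveSizes {
    public static void main(String[] args) {
        System.out.println("short: " + Short.SIZE + " bits");   // always 16
        System.out.println("int:   " + Integer.SIZE + " bits"); // always 32
        System.out.println("long:  " + Long.SIZE + " bits");    // always 64
    }
}
```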