The definitive answer would require talking to someone who was involved in the early development of Java. However, I think it is pretty clear that the bytecode format was originally designed with a naive interpreter in mind.
Consider writing a very simple Java bytecode interpreter: no JIT, no optimization, etc. You simply carry out each instruction as it comes. Assuming that the constant pool has been decoded into a table of 32-bit values at load time, an instruction like ldc2_w x, which references the constant pool, would execute C code along the lines of
*(int64_t *)(stack_ptr += 8) = *(int64_t *)(constant_pool_ptr + x * 4);
Basically, if you work on a 32-bit machine and translate everything into raw pointer calls without optimization, then using two slots for 64-bit values is just a logical way to implement things.
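To make that concrete, here is a minimal sketch of what such a naive interpreter loop might look like. It assumes the constant pool has already been decoded into a flat array of 32-bit entries; the function and variable names (interpret, stack_ptr, constant_pool) are illustrative, not taken from any actual JVM implementation.

#include <stdint.h>
#include <string.h>

enum { OP_LDC2_W = 0x14 };  /* ldc2_w opcode from the JVM spec */

/* Hypothetical naive interpreter: the operand stack is an array of
   32-bit slots, exactly as the spec describes it. */
void interpret(const uint8_t *pc, uint32_t *stack_ptr,
               const uint32_t *constant_pool)
{
    for (;;) {
        uint8_t op = *pc++;
        switch (op) {
        case OP_LDC2_W: {
            /* two-byte constant pool index follows the opcode */
            uint16_t x = (uint16_t)((pc[0] << 8) | pc[1]);
            pc += 2;
            /* the long/double constant spans two consecutive 32-bit pool
               entries, so copy 8 bytes onto the stack... */
            memcpy(stack_ptr, &constant_pool[x], 8);
            /* ...and advance by two slots: the "two slots per 64-bit
               value" rule falls out of the representation for free */
            stack_ptr += 2;
            break;
        }
        /* ... other opcodes ... */
        default:
            return;
        }
    }
}

The point is that the 64-bit constant simply occupies two of the interpreter's 32-bit slots; no separate 64-bit stack representation is needed anywhere.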
The reason this is a bad choice today is that interpreters are no longer this naive; in fact, the code is usually JITed. In addition, 64-bit platforms are the norm, which means that reference types occupy 64 bits anyway*, even though the specification treats them as 32-bit values. Therefore, there is no longer any benefit from this hack, but we still pay the costs in complexity and implementation difficulty.
* In theory, at least. The JVM uses 32-bit compressed pointers by default, even on 64-bit platforms, to reduce memory usage.