When creating a backing array for (say) a collection, you do not really need the exact size you calculated — the array just has to be at least that large.
But because of how the VM allocates memory and lays out the array header, in some cases it should be possible to create a slightly larger array without consuming any extra memory. For the Oracle 32-bit VM (at least according to several sources on the Internet), the memory granularity is 8 (meaning every allocation is rounded up to the next 8-byte boundary) and the array header overhead is 12 bytes.
This means that allocating an Object[2] should consume 20 bytes (12 + 2 * 4), but due to granularity it will actually occupy 24 bytes. So an Object[3] could be created for the same memory cost — in other words, the collection could make its backing array slightly larger for free. The same principle applies to arrays of primitives, e.g. the byte[] used for I/O buffers, the char[] inside StringBuilder, and so on.
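To make the arithmetic concrete, here is a minimal sketch that reproduces the calculation above. The constants (12-byte header, 4-byte references, 8-byte granularity) are the assumed 32-bit layout from the sources mentioned, hard-coded rather than queried from the VM:

```java
public class ArrayFootprint {
    // Assumed values for a 32-bit Oracle VM; NOT detected at runtime.
    static final int HEADER_BYTES = 12;
    static final int REF_BYTES = 4;
    static final int GRANULARITY = 8;

    /** Raw size of an Object[length] before alignment padding. */
    static int rawSize(int length) {
        return HEADER_BYTES + length * REF_BYTES;
    }

    /** Size after rounding up to the allocation granularity. */
    static int paddedSize(int length) {
        int raw = rawSize(length);
        return (raw + GRANULARITY - 1) / GRANULARITY * GRANULARITY;
    }

    public static void main(String[] args) {
        // Object[2]: 12 + 2*4 = 20 raw bytes, padded to 24.
        System.out.println(paddedSize(2));
        // Object[3]: 12 + 3*4 = 24 raw bytes -- same padded footprint.
        System.out.println(paddedSize(3));
    }
}
```

Since paddedSize(2) == paddedSize(3), the extra slot really is free under these assumptions.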
Although such an optimization would hardly have a noticeable effect except under the most extreme circumstances, it would be cheap enough to call a static method that "optimizes" a requested array size.
The problem is that the JDK has no "round array size up to the memory granularity" method. To write one myself, I would need to determine several VM-specific parameters: the memory granularity, the array header size, and finally the size of each element type (mainly a problem for references, since their size can vary depending on the architecture and VM options).
So, is there a way to determine these parameters, or to achieve the desired "rounding" some other way?
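For reference, this is a sketch of the kind of helper I have in mind. The VM parameters are hard-coded assumptions (32-bit layout again); detecting them portably at runtime is exactly the part I don't know how to do:

```java
public class ArraySizeOptimizer {
    // Hard-coded assumptions for a 32-bit VM; ideally these would be
    // detected at runtime, which is the open question here.
    static final int HEADER_BYTES = 12;
    static final int GRANULARITY = 8;

    /**
     * Returns the largest length >= minLength such that an array with
     * elementBytes-wide elements still fits in the same padded
     * allocation as an array of minLength elements.
     */
    static int optimizeLength(int minLength, int elementBytes) {
        long raw = HEADER_BYTES + (long) minLength * elementBytes;
        long padded = (raw + GRANULARITY - 1) / GRANULARITY * GRANULARITY;
        return (int) ((padded - HEADER_BYTES) / elementBytes);
    }
}
```

For example, optimizeLength(2, 4) would return 3 (the Object[] case above), and optimizeLength(5, 1) would return 12 for a byte[] — assuming these constants are right for the running VM, which is precisely what I cannot verify from within Java.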