Determining the optimal array size relative to JVM memory granularity

When creating a backing array for (say) a collection, you do not really need an exact size; the array just has to be at least as large as the size you calculated.

But because of allocation granularity and the VM's array header, in some cases it is possible to create a slightly larger array without consuming any more memory. For Oracle's 32-bit VM (at least according to several sources on the Internet), the memory granularity is 8 (meaning allocations are rounded up to the next 8-byte boundary) and the array header overhead is 12 bytes.

This means that allocating an Object[2] should consume 20 bytes (12 + 2 * 4), but because of granularity it will actually occupy 24 bytes. An Object[3] could be created for the same memory cost, so a collection could size its backing array up slightly for free. The same principle applies to arrays of primitives, e.g. the byte[] used for I/O buffers, the char[] inside StringBuilder, and so on.
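The arithmetic above can be sketched as a small program. It hard-codes the commonly cited 32-bit HotSpot figures from the question (12-byte array header, 4-byte references, 8-byte allocation granularity); all three are VM-dependent assumptions, not values read from the running VM, and the class name is illustrative.

```java
public class ArrayCostSketch {
    static final int HEADER = 12;      // assumed array header size
    static final int REF_SIZE = 4;     // assumed reference size (32-bit VM)
    static final int GRANULARITY = 8;  // assumed allocation granularity

    // Bytes actually consumed by an Object[length], after rounding up
    // to the next multiple of the allocation granularity.
    static int actualBytes(int length) {
        int raw = HEADER + REF_SIZE * length;
        return (raw + GRANULARITY - 1) / GRANULARITY * GRANULARITY;
    }

    public static void main(String[] args) {
        System.out.println(actualBytes(2)); // 20 raw bytes, rounded up to 24
        System.out.println(actualBytes(3)); // 24 raw bytes: same cost as Object[2]
    }
}
```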

Although such an optimization would have no really noticeable effect except in the most extreme circumstances, it would be cheap enough to call a static method that "optimizes" an array size.

The problem is that the JDK has no "round this array size up to the memory granularity" method. And to write one, I would need to determine several important VM parameters: the memory granularity, the array header size, and finally the size of each element type (mainly a problem for references, since their size can vary with the architecture and VM options).

So, is there a way to determine these parameters, or to achieve the desired "rounding" some other way?

2 answers

Interesting idea. I think a more portable way to determine this is to actually measure the usage. Example program:

    import sun.misc.Unsafe;

    public class FindMemoryUsage {
        public static void main(String[] args) {
            for (int i = 0; i < 50; i += 2) {
                long actual = getActualUsageForN(i);
                System.out.println(i + " = " + actual);
                long theoretical = getTheoreticalUsageForN(i);
                if (theoretical != actual) {
                    throw new RuntimeException("Uh oh! Mismatch!");
                }
            }
        }

        private static long getTheoreticalUsageForN(long count) {
            long optimal = Unsafe.ARRAY_BYTE_BASE_OFFSET
                         + Unsafe.ARRAY_BYTE_INDEX_SCALE * count;
            return ((optimal - 1) & ~7) + 8;
        }

        private static long getActualUsageForN(int count) {
            System.gc();
            byte[][] arrays = new byte[3000000][];
            long begin = usedMemory();
            for (int i = 0; i < arrays.length; i++) {
                arrays[i] = new byte[count];
            }
            long end = usedMemory();
            return Math.round((end - begin) / (double) arrays.length);
        }

        private static long usedMemory() {
            return Runtime.getRuntime().totalMemory()
                 - Runtime.getRuntime().freeMemory();
        }
    }

This program prints the following:

     0 = 16
     2 = 16
     4 = 16
     6 = 24
     8 = 24
    10 = 24
    12 = 24
    14 = 32
    16 = 32
    18 = 32
    20 = 32
    22 = 40
    24 = 40
    26 = 40
    28 = 40
    30 = 48
    32 = 48
    34 = 48
    36 = 48
    38 = 56
    40 = 56
    42 = 56
    44 = 56
    46 = 64
    48 = 64

This data comes from both the actual usage measurement and the theoretical usage based on the sun.misc.Unsafe constants plus 8-byte rounding, and the two agree. That means you can use those constants to "round up" as you suggested:

    private static int roundSizeUp(int from) {
        long size = Unsafe.ARRAY_BYTE_BASE_OFFSET
                  + Unsafe.ARRAY_BYTE_INDEX_SCALE * from;
        long actual = ((size - 1) & ~7) + 8;
        return (int) ((actual - Unsafe.ARRAY_BYTE_BASE_OFFSET)
                      / Unsafe.ARRAY_BYTE_INDEX_SCALE);
    }

This is VM-specific code, but if you need more portability you could probably derive the same information at runtime using the getActualUsageForN strategy.

Note that this is not production-quality code: you should think carefully about overflow, and swap the Unsafe constants for the ones that actually apply to the element type of the array you are working with.
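One way to address both caveats is sketched below: the header size (base offset) and element size (index scale) are passed in as parameters instead of being hard-wired to the byte[] constants, and the math is done in long with a clamp so it cannot wrap. The class name and the example figures (a 16-byte array header on a 64-bit VM, 1-byte elements for byte[]) are illustrative assumptions, not values read from a real VM.

```java
public class SafeRounding {
    // baseOffset = array header size in bytes, indexScale = bytes per element.
    static int roundSizeUp(int from, int baseOffset, int indexScale) {
        long size = baseOffset + (long) indexScale * from; // long math: no int overflow
        long rounded = ((size - 1) & ~7L) + 8;             // next 8-byte boundary
        long result = (rounded - baseOffset) / indexScale;
        return (int) Math.min(result, Integer.MAX_VALUE);  // clamp instead of wrapping
    }

    public static void main(String[] args) {
        // A byte[17] with an assumed 16-byte header occupies 33 bytes raw,
        // rounded up to 40, which holds 24 elements for the same cost.
        System.out.println(roundSizeUp(17, 16, 1)); // prints 24
    }
}
```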


When dynamically sized collections grow their backing array, they do not add a small fixed amount to its size; they grow it proportionally. Doubling is a common choice. They do this because it gives the best amortized performance. The small adjustment you propose would not be worth the effort.
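The proportional-growth strategy this answer describes can be sketched as follows: doubling the backing array gives amortized O(1) appends, because each element is copied only a constant number of times on average. The class is illustrative, not taken from the JDK.

```java
import java.util.Arrays;

public class GrowableBuffer {
    private Object[] data = new Object[8];
    private int size = 0;

    void add(Object o) {
        if (size == data.length) {
            // Grow geometrically rather than by a fixed increment.
            data = Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = o;
    }

    int size() { return size; }
    int capacity() { return data.length; }
}
```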


Source: https://habr.com/ru/post/988940/

