I am a bit confused. During the first iterations of the fill loop I see a regression: filling an ArrayList created with an initial capacity is slower than filling one created without it.
According to common sense and this question: Why start an ArrayList with an initial capacity?
it should be exactly the other way around.
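For context, here is a small sketch of my own (not taken from the linked question) that estimates how many backing-array resizes an ArrayList without an initial capacity goes through; each resize copies the whole array, which is the work that pre-sizing is supposed to avoid:

public class GrowthSketch {
    public static void main(String[] args) {
        long capacity = 10; // ArrayList's default initial capacity
        int resizes = 0;
        while (capacity < 100_000_000L) {
            // ArrayList.grow() uses oldCapacity + (oldCapacity >> 1), i.e. ~1.5x
            capacity = capacity + (capacity >> 1);
            resizes++;
        }
        System.out.println("Approximate resizes for 100_000_000 adds: " + resizes);
    }
}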
This may not be a well-written test, and I am wondering: why does the first iteration always consume much more time and CPU when the ArrayList is created with an initial capacity?
Here is the test:
import java.util.ArrayList;
import java.util.List;

public class TestListGen {
    public static final int TEST = 100_000_000;

    public static void main(String[] args) {
        test(false); // run again with test(true) for the second set of numbers
    }

    private static void test(boolean withInitCapacity) {
        System.out.println("Init with capacity? " + withInitCapacity);
        long av = 0;
        for (int i = 0; i < 5; i++) {
            av += fillAndTest(TEST, withInitCapacity
                    ? new ArrayList<Integer>(TEST)
                    : new ArrayList<Integer>());
        }
        System.out.println("Average: " + (av / 5));
    }

    private static long fillAndTest(int capacity, List<Integer> list) {
        long time1 = System.nanoTime();
        for (int i = 0; i < capacity; i++) {
            list.add(i);
        }
        long delta = System.nanoTime() - time1;
        System.out.println(delta);
        return delta;
    }
}
Results:
1)
Init with capacity? false
17571882469
12179868327
18460127904
5894883202
13223941250
Average: 13466140630
2)
Init with capacity? true
37271627087
16341545990
19973801769
4888093008
2442179779
Average: 16183449526
I tested with JDK 1.7.0.40 and JDK 1.8.0.31.
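Since the first run may be dominated by JIT compilation rather than by the list itself, this is a variant I was thinking of trying, with a few untimed warm-up fills before measuring (the warm-up count of 3 and the smaller size of 10_000_000 are arbitrary choices on my part):

import java.util.ArrayList;
import java.util.List;

public class TestListGenWarmup {
    public static final int TEST = 10_000_000;

    public static void main(String[] args) {
        test(false);
        test(true);
    }

    private static void test(boolean withInitCapacity) {
        System.out.println("Init with capacity? " + withInitCapacity);
        // A few untimed fills so the add() path is already JIT-compiled
        // before the measured iterations start.
        for (int i = 0; i < 3; i++) {
            fill(TEST, withInitCapacity ? new ArrayList<Integer>(TEST) : new ArrayList<Integer>());
        }
        long av = 0;
        for (int i = 0; i < 5; i++) {
            long time1 = System.nanoTime();
            fill(TEST, withInitCapacity ? new ArrayList<Integer>(TEST) : new ArrayList<Integer>());
            long delta = System.nanoTime() - time1;
            System.out.println(delta);
            av += delta;
        }
        System.out.println("Average: " + (av / 5));
    }

    private static void fill(int count, List<Integer> list) {
        for (int i = 0; i < count; i++) {
            list.add(i);
        }
    }
}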