Comparing the creation of a large ArrayList with an initialCapacity, I found it to be slower than creating one without it. Here is a simple program I wrote to measure it:
long start2 = System.nanoTime();
List<Double> col = new ArrayList<>(30000000);
for (int i = 0; i < 30000000; i++) {
    col.add(Math.sqrt(i + 1));
}
long end2 = System.nanoTime();
System.out.println(end2 - start2);
// Touch an element so the JIT cannot eliminate the loop as dead code.
System.out.println(col.get(12411325).hashCode() == System.nanoTime());
Average result for new ArrayList<>(30000000): 6121173329
Average result for new ArrayList<>(): 4883894100
on my machine. I thought it would be faster to allocate one large backing array up front, rather than reallocating it repeatedly each time we outgrow the capacity of the ArrayList's current backing array. Either way, the backing array eventually has to reach a size of at least 30000000.
I thought this would be an optimization, but it turned out to be a pessimization. Why?
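For reference, here is a sketch of how I would compare the two variants in one program. The class name `CapacityBench`, the helper `fill`, the warm-up count, and the smaller element count are my own choices (the original used 30000000); timing a single cold run in `main`, as above, also measures JIT compilation and GC behavior, so warming up first gives more comparable numbers.

```java
import java.util.ArrayList;
import java.util.List;

public class CapacityBench {
    // Hypothetical helper: appends n square roots to col, returns elapsed nanos.
    static long fill(List<Double> col, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            col.add(Math.sqrt(i + 1));
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        final int N = 2_000_000; // smaller than 30000000 to keep the run short

        // Warm up the JIT before taking measurements (assumption: default JVM flags).
        for (int w = 0; w < 3; w++) {
            fill(new ArrayList<>(N), N);
            fill(new ArrayList<>(), N);
        }

        long presized = fill(new ArrayList<>(N), N);
        long growing  = fill(new ArrayList<>(), N);
        System.out.println("presized: " + presized + " ns");
        System.out.println("growing:  " + growing + " ns");
    }
}
```

Even this is a crude micro-benchmark; a harness such as JMH would control for warm-up, dead-code elimination, and GC pauses far more reliably.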