I am trying to understand what triggers a HashMap resize: is it the number of occupied buckets, or the total number of entries across all buckets? With the default load factor (0.75) and default initial capacity (16), if 12 of the 16 buckets are full (one entry in each bucket), then the HashMap will be resized on the next put. But what about the case where only 3 buckets are occupied, with 4 entries each (12 entries in total, but only 3 of the 16 buckets in use)?
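In other words, my understanding is captured by this small sketch (capacity and loadFactor here are just the documented defaults written out by hand, not actual JDK code):

int capacity = 16;        // HashMap's default initial capacity
float loadFactor = 0.75f; // HashMap's default load factor
// My assumption: resize is triggered by the total entry count,
// not by the number of distinct occupied buckets.
int threshold = (int) (capacity * loadFactor); // 12: resize once size exceeds this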
So, I tried to replicate this by writing the worst possible hash function, one that puts all entries into a single bucket.
Here is my code.
class X {
    public Integer value;

    public X(Integer value) {
        this.value = value;
    }

    @Override
    public int hashCode() {
        return 1; // worst case: every instance lands in the same bucket
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof X)) {
            return false;
        }
        return this.value.equals(((X) obj).value);
    }
}
Now I started putting values into the HashMap.
HashMap<X, Integer> map = new HashMap<>();
map.put(new X(1), 1);
map.put(new X(2), 2);
map.put(new X(3), 3);
map.put(new X(4), 4);
map.put(new X(5), 5);
map.put(new X(6), 6);
map.put(new X(7), 7);
map.put(new X(8), 8);
map.put(new X(9), 9);
map.put(new X(10), 10);
map.put(new X(11), 11);
map.put(new X(12), 12);
map.put(new X(13), 13);
System.out.println(map.size());
All entries went into one bucket, as expected, but I noticed that on the 9th put the HashMap resized and doubled its capacity. Then on the 10th put it doubled its capacity again.
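I watched the capacity with a small debugging helper (my addition, not part of the test itself) that reads the length of the backing array via reflection. The field name table is a JDK internal, so this is fragile, and on Java 9+ it may require --add-opens java.base/java.util=ALL-UNNAMED:

import java.lang.reflect.Field;

// Debugging sketch: returns the current capacity (backing array length)
// of a HashMap by reading its private "table" field. Relies on JDK internals.
static int capacityOf(HashMap<?, ?> map) throws Exception {
    Field table = HashMap.class.getDeclaredField("table");
    table.setAccessible(true);
    Object[] buckets = (Object[]) table.get(map);
    return buckets == null ? 0 : buckets.length; // null until the first put
}

Calling capacityOf(map) after each put is how I saw the doubling at the 9th and 10th entries.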

Can someone explain how this happens?
Thanks in advance.