Why does HashMap resize based on total size instead of full buckets?

I have a doubt in my mind:

Currently, HashMap in Java resizes when totalSize (the number of elements inserted) exceeds arrayLength * loadFactor.

When that happens, it doubles the table and rehashes all the key-value pairs.
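As a sketch of that rule (the class and method names here are mine, and real HashMap details such as power-of-two table sizing and treeified buckets are omitted):

```java
// Simplified sketch (my own names) of HashMap's resize trigger. The real
// implementation also rounds capacities to powers of two and rehashes
// every entry into the new table when it grows.
class ResizePolicy {
    int capacity = 16;               // default initial table length
    final float loadFactor = 0.75f;  // default load factor
    int size = 0;                    // number of entries inserted

    // Returns true when this insert pushes size past capacity * loadFactor,
    // i.e. when a real HashMap would double its table.
    boolean insertTriggersResize() {
        size++;
        if (size > capacity * loadFactor) {
            capacity *= 2;           // table doubles; entries would be rehashed
            return true;
        }
        return false;
    }
}
```

With the defaults, the threshold is 16 * 0.75 = 12, so the 13th insert is the one that doubles the table to 32.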

But suppose the hashCode() in the Key class is hard-coded to, say, 1. Then every item will be inserted at index 1, into the same linked list. Yet the bucket array will still be resized unnecessarily based on the overall size: it keeps growing while all the elements stay in the same bucket under such a hashCode() implementation.

My question is: shouldn't it check the number of filled buckets instead of the total size?

I know that such a hash code will hurt performance. I am asking this as a logical question.
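To make the scenario concrete, here is a hypothetical key class (the names are mine) whose hashCode() is hard-coded to 1; all entries collide into a single bucket, yet the map still resizes on total size:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key class (my own) whose hashCode() is hard-coded to 1.
class ConstantKey {
    final int id;
    ConstantKey(int id) { this.id = id; }

    @Override public int hashCode() { return 1; }  // every key hashes to 1

    @Override public boolean equals(Object o) {
        return o instanceof ConstantKey && ((ConstantKey) o).id == id;
    }
}

public class ConstantHashDemo {
    public static void main(String[] args) {
        Map<ConstantKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 100; i++) {
            map.put(new ConstantKey(i), i);  // all 100 entries share one bucket
        }
        // Lookups still work, via equals() within the single chain, but the
        // table keeps doubling on total size without spreading anything out.
        System.out.println(map.get(new ConstantKey(42)));  // prints 42
    }
}
```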

+4
3 answers

HashMap has some code that tries to improve bad hashCode() implementations, but it can't do anything to improve a horrible hashCode() implementation that always returns the same value.

Such a hashCode() will give poor performance whether you resize the HashMap or not. Therefore, such pathological use of HashMap does not justify adding the special logic you suggest.
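For reference, the "code that tries to improve bad implementations" is, in OpenJDK's Java 8+ HashMap, a bit-spreading step applied to hashCode() before the index mask; a constant hash code passes through it unchanged:

```java
// The bit-spreading step from OpenJDK's Java 8+ HashMap: XOR the high
// 16 bits of hashCode() into the low bits before masking with
// (tableLength - 1). It rescues hash codes that differ only in high
// bits, but a constant hash code stays constant.
public class SpreadDemo {
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int mask = 16 - 1;             // index mask for a 16-bucket table
        int a = 0x10000, b = 0x20000;  // differ only in high bits
        System.out.println((a & mask) == (b & mask));                 // true: raw hashes collide
        System.out.println((spread(a) & mask) == (spread(b) & mask)); // false: spread hashes separate
        System.out.println(spread(1));  // 1: a constant hash gains nothing
    }
}
```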


+3

Imagine a table with 12 buckets holding 9 elements, and suppose every key's #hashCode() returns a multiple of 3. With index = hash mod 12, every element lands in one of only 4 buckets (0, 3, 6 and 9), each chain holding more than 1 element.

That distribution is bad, but not hopeless. With a load factor of 75%, the 9th insert (9 > 12 * 0.75) triggers a resize: the table doubles to 24 buckets, and the same multiples of 3 now spread across 8 buckets instead of 4, roughly halving the chain lengths.

So resizing on total size still improves the distribution for a bad, but non-constant, hashCode(). It only fails to help in the degenerate case where hashCode() always returns the same value, and that case is not worth special-casing.
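The numbers in this answer (12 and 24 buckets, hash codes that are multiples of 3) can be checked with a small sketch of my own, using index = hash mod tableLength:

```java
import java.util.Set;
import java.util.TreeSet;

public class SpreadOnResize {
    // Buckets occupied by hash codes 0, 3, 6, ..., 3*(elements-1)
    // in a table of the given length, using index = hash mod length.
    static Set<Integer> occupiedBuckets(int tableLength, int elements) {
        Set<Integer> buckets = new TreeSet<>();
        for (int i = 0; i < elements; i++) {
            buckets.add((3 * i) % tableLength);  // multiples of 3 as hash codes
        }
        return buckets;
    }

    public static void main(String[] args) {
        // 9 elements in 12 buckets: only 4 buckets used, chains of 2-3
        System.out.println(occupiedBuckets(12, 9));  // [0, 3, 6, 9]
        // After doubling to 24 buckets, the same hashes use 8 buckets
        System.out.println(occupiedBuckets(24, 9));  // [0, 3, 6, 9, 12, 15, 18, 21]
    }
}
```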

+1

If hashCode() always returns the same value:

  • it is a poor implementation, and there is no reason to add logic to support something that should not be done in the first place.

  • hashCode() may well not be a constant function. HashMap cannot know whether the hash function is constant or not, so it is reasonable to resize the HashMap anyway: if the hashCode() turns out to be non-constant after all, resizing can lead to a better distribution of values.
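The second point can be verified directly. HashMap derives the bucket index as hash & (tableLength - 1), since table lengths are powers of two, so a constant hash selects the same bucket at every table size, while non-constant hashes can separate after a resize (a minimal sketch, names mine):

```java
public class ResizeEffect {
    // HashMap's index computation: tableLength is a power of two, so the
    // mask is equivalent to hash % tableLength for non-negative hashes.
    static int bucketIndex(int hash, int tableLength) {
        return hash & (tableLength - 1);
    }

    public static void main(String[] args) {
        // A constant hash lands in bucket 1 at every table size:
        for (int len = 16; len <= 128; len *= 2) {
            System.out.println(bucketIndex(1, len));  // always 1
        }
        // Non-constant hashes 17 and 33 collide in a 16-bucket table...
        System.out.println(bucketIndex(17, 16) == bucketIndex(33, 16));  // true
        // ...but separate once the table has grown to 64 buckets:
        System.out.println(bucketIndex(17, 64) == bucketIndex(33, 64));  // false
    }
}
```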

0

Source: https://habr.com/ru/post/1688926/
