Java HashMap Collision Resolution

From what I've read on Stack Overflow and elsewhere, Java uses linked lists to resolve hash collisions.

That would imply O(n) worst-case complexity for insert, lookup, and delete.

Why doesn't Java use a self-balancing BST (AVL, red-black, etc.) to guarantee O(log n) worst-case complexity for insert, lookup, and delete?
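The worst case the question describes is easy to provoke: give every key the same hash code so that all entries land in a single bucket. The sketch below (class names `BadKey` and `CollisionDemo` are made up for illustration) shows that the map stays correct, but every lookup has to walk that one bucket's chain.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key whose hashCode is constant, forcing every entry
// into the same bucket -- the worst case for collision resolution.
final class BadKey {
    private final int id;
    BadKey(int id) { this.id = id; }
    @Override public int hashCode() { return 42; }  // all keys collide
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == id;
    }
}

public class CollisionDemo {
    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) map.put(new BadKey(i), i);
        // All 1000 entries share one bucket, so each get() traverses
        // the bucket's chain rather than jumping straight to the entry.
        System.out.println(map.size());             // 1000
        System.out.println(map.get(new BadKey(7))); // 7
    }
}
```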

+4
2 answers

In most cases there will be very few elements in a bucket, often zero or one. In those cases a plain hash bucket gives O(1); an O(log n) BST could shave time off the rare pathological cases, but the gain is negligible at best and negative at worst, and the tree carries significant memory overhead. Java 8 detects when a linked-list bin is no longer optimal and converts it to a BST; if that happens frequently, it is a sign that the hash function or the HashMap is being misused.
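The claim that buckets are almost always empty or hold a single entry can be checked empirically. This sketch (my own setup, not JDK code) distributes keys over power-of-two bins the way HashMap does, at the default load factor of 0.75, and counts the chain lengths; bins long enough to treeify (TREEIFY_THRESHOLD is 8 in the JDK) essentially never occur with a well-spread hash.

```java
import java.util.Random;

// Empirical sketch: simulate HashMap's bucket indexing for random
// hash codes and tally how long the collision chains get.
public class BucketStats {
    public static void main(String[] args) {
        int buckets = 1 << 16;                // 65536 bins, power of two
        int entries = (int) (buckets * 0.75); // default load factor
        int[] count = new int[buckets];
        Random rnd = new Random(1);
        for (int i = 0; i < entries; i++) {
            int h = rnd.nextInt();
            h ^= (h >>> 16);                  // HashMap's hash spreading step
            count[h & (buckets - 1)]++;       // index = hash & (capacity - 1)
        }
        int empty = 0, single = 0, longChains = 0;
        for (int c : count) {
            if (c == 0) empty++;
            else if (c == 1) single++;
            if (c >= 8) longChains++;         // would treeify in Java 8+
        }
        // Most bins are empty or hold one entry; chains of length >= 8
        // are vanishingly rare, so tree bins almost never appear.
        System.out.printf("empty=%d single=%d chains>=8=%d%n",
                empty, single, longChains);
    }
}
```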

Reading the JDK source reveals many implementation details. Here is a brief excerpt from the top of Oracle's java.util.HashMap:

/*
 * Implementation notes.
 *
 * This map usually acts as a binned (bucketed) hash table, but
 * when bins get too large, they are transformed into bins of
 * TreeNodes, each structured similarly to those in
 * java.util.TreeMap. Most methods try to use normal bins, but
 * relay to TreeNode methods when applicable (simply by checking
 * instanceof a node).  Bins of TreeNodes may be traversed and
 * used like any others, but additionally support faster lookup
 * when overpopulated. However, since the vast majority of bins in
 * normal use are not overpopulated, checking for existence of
 * tree bins may be delayed in the course of table methods.
 * [...]
 */

Reading on, HashMap#getNode shows that each bin is a chain of HashMap.Node entries, i.e. essentially a singly linked list (in the spirit of java.util.LinkedList) that is traversed linearly on lookup.

Only when a bin becomes overpopulated does HashMap convert it to a bin of HashMap.TreeNode, which is essentially a self-balancing (red-black) BST.

+2

As of Java 8, it does.

It wasn't worth it before: in 99% of cases a bucket holds only a handful of entries, so a tree buys nothing. Moreover, ordering keys in a BST requires them to be Comparable (or to admit some other total order), which most key types don't provide, and tree nodes cost extra memory.
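The Comparable point can be illustrated with String keys, which do implement Comparable. A well-known quirk is that "Aa" and "BB" share the same hashCode (2112), so any concatenation of n such pairs collides with all the others; the sketch below (class name `TreeBinDemo` is made up) builds 1024 colliding Comparable keys, which a Java 8+ HashMap can order into a red-black tree bin for O(log n) lookups instead of an O(n) list scan.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TreeBinDemo {
    public static void main(String[] args) {
        // "Aa" and "BB" have the same hashCode (2112), so every string
        // built from 10 such pairs collides with all 1023 others.
        List<String> keys = new ArrayList<>();
        keys.add("");
        for (int i = 0; i < 10; i++) {
            List<String> next = new ArrayList<>();
            for (String k : keys) { next.add(k + "Aa"); next.add(k + "BB"); }
            keys = next;
        }
        Map<String, Integer> map = new HashMap<>();
        for (int i = 0; i < keys.size(); i++) map.put(keys.get(i), i);
        // All 1024 keys share one hash code; because String is Comparable,
        // Java 8+ can keep them in an ordered tree bin rather than a list.
        System.out.println(keys.size());                  // 1024
        System.out.println(map.get(keys.get(500)));       // 500
    }
}
```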

+4

Source: https://habr.com/ru/post/1659364/
