I am looking for a check on two different but related arguments: (A) above and (B) below the first comment line in this question.
(A) How a HashMap is structured:
A HashMap is, at bottom, a plain table (an array). Indexing into an array is direct memory access (DMA).
The whole idea of a HashMap (or of hashing in general) is to use this constant-time memory access for
a.) access to records by their own data content (the key of the <K, V> pair), not by their location in the DMA table (the index) - see the small demo after this list;
b.) management of a variable number of records - a set of records of unspecified count, which may or may not stay constant in size while this structure is in use.
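Point (a) in miniature - a minimal sketch with illustrative names only (AccessDemo and its variables are not from any source): the array is addressed by location, the HashMap by the record's own content.

    import java.util.HashMap;
    import java.util.Map;

    public class AccessDemo {
        public static void main(String[] args) {
            String[] byLocation = { "alice", "bob" };
            System.out.println(byLocation[1]);          // fetched by table index

            Map<String, Integer> byContent = new HashMap<>();
            byContent.put("alice", 30);
            byContent.put("bob", 25);
            System.out.println(byContent.get("bob"));   // fetched by key content
        }
    }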
So, the general structure of Java's HashMap is:
a table: table // using the identifier found in the HashMap source
Each cell of this table is a bucket.
Each bucket is a linked list of type Entry - that is, each node of this linked list (not a Java API LinkedList, but the data structure) is of type Entry, which in turn holds a <K, V> pair.
When a new <K, V> pair enters the hash, a hashCode is calculated from its key (it need not be unique - collisions are exactly what buckets absorb). This hashCode leads to the index of this <K, V> in the table - it says which bucket this <K, V> will enter. Note: the hashCode is "normalized" through the hash() function (internal to HashMap, for one) to better suit the current table length; indexFor() then determines which bucket, i.e. which table cell, the <K, V> goes into. Both helpers are sketched below.
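For reference, the normalization and bucket selection in the JDK 6/7-era HashMap source look roughly like this (paraphrased, so treat the exact bit-shifts as approximate; indexFor() is as in the source):

    // Spreads the raw hashCode's high bits downward so the power-of-two
    // masking in indexFor() sees more of them.
    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    // Picks the bucket: equivalent to h % length when length is a power of two.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }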
Once the bucket is determined, the <K, V> is added at the head of that bucket's linked list - so it becomes the first <K, V> in the bucket, and the entry that was previously first is now the "next" entry pointed to by the newly added one.
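Pulling (A) together, here is a minimal self-contained model of this layout - an illustrative sketch, not the JDK code. SimpleHashMap and its members are invented names, resizing is omitted, and plain masking stands in for the hash()/indexFor() pair sketched above:

    public class SimpleHashMap<K, V> {

        // Each node of a bucket's singly linked list; holds one <K, V> pair.
        static class Entry<K, V> {
            final K key;
            V value;
            Entry<K, V> next;   // the entry this one points to in the chain
            Entry(K key, V value, Entry<K, V> next) {
                this.key = key; this.value = value; this.next = next;
            }
        }

        @SuppressWarnings("unchecked")
        private Entry<K, V>[] table = new Entry[16];   // the bucket array

        public void put(K key, V value) {
            // Bucket index; the real HashMap first spreads bits via hash().
            int i = key.hashCode() & (table.length - 1);
            for (Entry<K, V> e = table[i]; e != null; e = e.next) {
                if (e.key.equals(key)) { e.value = value; return; }  // overwrite
            }
            // Head insertion: the new entry's "next" is the old first entry.
            table[i] = new Entry<>(key, value, table[i]);
        }

        public V get(K key) {
            int i = key.hashCode() & (table.length - 1);
            for (Entry<K, V> e = table[i]; e != null; e = e.next) {
                if (e.key.equals(key)) return e.value;
            }
            return null;   // key not present
        }
    }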
// ==================================================================
(B) From what I see in HashMap, resizing of the table - of the hash as a whole - is decided only from the hash's size and capacity: the current and maximum number of entries in the entire hash.
There is no restructuring or resizing based on individual bucket sizes - nothing like "resize() when the number of entries in some bucket exceeds such-and-such".
Conceivably, then, a significant number of entries could pile up in a single bucket while the rest of the hash stays fairly empty.
If this is the case - i.e., there is no upper bound on the size of an individual bucket - then the hash has linear rather than constant access time, in theory at least: it takes $O(n)$ time to fetch an entry, where $n$ is the total number of entries. But that should not be so. (A demo of this worst case follows.)
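The degenerate case is easy to force. In this sketch, BadKey is a hypothetical key type whose hashCode() is constant, so every entry lands in one bucket and get() degrades to a linear chain walk - $O(n)$ on the classic list-based HashMap. (For what it's worth, JDK 8+ mitigates exactly this by turning long chains into balanced trees.)

    import java.util.HashMap;
    import java.util.Map;

    public class CollisionDemo {
        // Hypothetical key whose hashCode is constant: every instance
        // collides, so all entries share a single bucket.
        static final class BadKey {
            final int id;
            BadKey(int id) { this.id = id; }
            @Override public int hashCode() { return 42; }
            @Override public boolean equals(Object o) {
                return o instanceof BadKey && ((BadKey) o).id == id;
            }
        }

        public static void main(String[] args) {
            Map<BadKey, Integer> map = new HashMap<>();
            for (int i = 0; i < 10_000; i++) map.put(new BadKey(i), i);
            // On a list-based HashMap this walks a 10,000-node chain: O(n).
            System.out.println(map.get(new BadKey(9_999)));
        }
    }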
// ==================================================================
I don't think I missed anything in part (A) above.
I'm not quite sure about part (B). This is a serious problem, and I want to find out how accurate this argument is.
I am looking for verification on both parts.
Thanks in advance.
// ==================================================================
EDIT:
Fixing a maximum bucket size - i.e., restructuring the hash every time the number of entries in any bucket reaches that maximum - would make the access time genuinely constant, in theory and in use.
It would be a quick fix rather than a well-structured one, but it would work just fine to guarantee constant access.
Hash codes are likely to be evenly distributed across the buckets, so it is unlikely that any single bucket would hit the bucket maximum before the overall size threshold of the hash is reached. The current HashMap implementation relies on this same assumption.
This also builds on Peter Lawrey's discussion below.
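For concreteness, the proposed policy could look like the following put() if bolted onto the SimpleHashMap sketch from part (A). MAX_BUCKET_SIZE, size, and threshold are hypothetical members added for this sketch; the real HashMap has no per-bucket cap.

    static final int MAX_BUCKET_SIZE = 8;   // hypothetical per-bucket cap
    private int size;                       // total number of entries
    private int threshold = 12;             // usual capacity * load-factor limit

    public void put(K key, V value) {
        int i = key.hashCode() & (table.length - 1);
        for (Entry<K, V> e = table[i]; e != null; e = e.next) {
            if (e.key.equals(key)) { e.value = value; return; }
        }
        table[i] = new Entry<>(key, value, table[i]);
        size++;
        // Existing policy: grow when the whole map crosses its threshold.
        // Proposed extra policy: also grow when any one bucket gets too long.
        if (size >= threshold || chainLength(table[i]) > MAX_BUCKET_SIZE) {
            resize(2 * table.length);
        }
    }

    private static int chainLength(Entry<?, ?> e) {
        int n = 0;
        for (; e != null; e = e.next) n++;
        return n;
    }

    // Rehashes every entry over twice as many buckets, head-inserting as before.
    private void resize(int newCapacity) {
        @SuppressWarnings("unchecked")
        Entry<K, V>[] newTable = new Entry[newCapacity];
        for (Entry<K, V> head : table) {
            while (head != null) {
                Entry<K, V> next = head.next;
                int i = head.key.hashCode() & (newCapacity - 1);
                head.next = newTable[i];
                newTable[i] = head;
                head = next;
            }
        }
        table = newTable;
        threshold = newCapacity * 3 / 4;    // keep the usual 0.75 load factor
    }

One caveat on this design: with pathological keys that all share one hash code, doubling the table never shortens the chain, so the per-bucket cap would keep triggering resizes without helping - one reason JDK 8 ultimately addressed long buckets with treeification rather than extra resizing.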