Java HashMap uses hash tables whose size is a power of two. If you use the modulo (remainder) operation as the compression function, as usual, you end up taking the low-order bits of the hash code as the bucket index. If the hash codes are multiples of a power of two, some of the least significant bits are always zero, and you end up using only a fraction of the available buckets.
A specific example: suppose you have 32 buckets and the hash codes are all multiples of 8. The table uses only the 5 least significant bits of the code, and 3 of them are always 0. Therefore only 2 bits determine the bucket, and you use only 4 of the 32 buckets:
XXXXXX00000 -> bucket 0
XXXXXX01000 -> bucket 8
XXXXXX10000 -> bucket 16
XXXXXX11000 -> bucket 24
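To see this concretely, here is a minimal sketch (the class name BucketDemo and the loop bounds are just illustrative, not HashMap's actual code) that computes the bucket index the way a power-of-two table does, with index = hashCode & (capacity - 1):

public class BucketDemo {
    public static void main(String[] args) {
        int capacity = 32;                  // power-of-two table size
        java.util.Set<Integer> used = new java.util.TreeSet<>();
        for (int h = 0; h < 1024; h += 8) { // hash codes that are multiples of 8
            used.add(h & (capacity - 1));   // low-bits compression, same as h mod 32
        }
        // Prints [0, 8, 16, 24] -- only 4 of the 32 buckets are ever used.
        System.out.println(used);
    }
}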
Fortunately, this is not what happens in Java, because HashMap does not use the least significant bits of the hash code as-is: it scrambles the bits first, so it is not so easy to hit this pathological case by accident. Here is an excerpt from the OpenJDK HashMap implementation:
/**
 * Applies a supplemental hash function to a given hashCode, which
 * defends against poor quality hash functions. This is critical
 * because HashMap uses power-of-two length hash tables, that
 * otherwise encounter collisions for hashCodes that do not differ
 * in lower bits. Note: Null keys always map to hash 0, thus index 0.
 */
static int hash(int h) {
    // This function ensures that hashCodes that differ only by
    // constant multiples at each bit position have a bounded
    // number of collisions (approximately 8 at default load factor).
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}
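As a rough illustration (a standalone sketch, not part of the JDK; it simply copies the hash(int) function from the excerpt above), you can apply the supplemental hash to the same multiples of 8 and see the indices spread over far more than 4 buckets, because high-order bits get mixed into the low ones before the index is taken:

public class ScrambleDemo {
    // Copy of the supplemental hash from the OpenJDK excerpt above.
    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    public static void main(String[] args) {
        int capacity = 32;
        java.util.Set<Integer> used = new java.util.TreeSet<>();
        for (int h = 0; h < 1024; h += 8) {
            used.add(hash(h) & (capacity - 1)); // scramble first, then take low bits
        }
        // Many more than 4 distinct indices this time.
        System.out.println(used.size() + " distinct buckets: " + used);
    }
}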