Java Concurrency: are get(key) on HashMap and ConcurrentHashMap equal in performance?

Does get(key) perform equally well on a plain HashMap and a ConcurrentHashMap when no changes are ever made to the underlying map (that is, only get() operations are performed)?

Update with background:

Concurrency is a fairly complicated topic. I do need "concurrency / thread safety", but only on code paths that are taken extremely rarely. For puts, I could swap the map reference itself (which is an atomic, thread-safe operation). So I am weighing my options: I can either stay with a plain HashMap (build a temporary HashMap, copy the data into it, and then swap the reference) or use ConcurrentHashMap. What my application really needs is a better understanding of how much performance is lost with each of these two tricks. Oddly enough, there is plenty of unrelated information on the Internet, but this particular question, in my opinion, should interest many more people. It would be great if someone who knows the internals of ConcurrentHashMap could answer.

Thank you very much!

+4
3 answers

You can look at the source code (I'm looking at JDK 6). HashMap.get() is pretty simple:

    public V get(Object key) {
        if (key == null)
            return getForNullKey();
        int hash = hash(key.hashCode());
        for (Entry<K,V> e = table[indexFor(hash, table.length)];
             e != null;
             e = e.next) {
            Object k;
            if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
                return e.value;
        }
        return null;
    }

Here hash() applies some additional shifts and XORs to "improve" your hash code.
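For reference, the supplemental hash in the JDK 6 HashMap is a short bit-mixing function; quoting roughly from memory (the exact shift constants may differ slightly in your JDK build):

    // JDK 6 HashMap supplemental hash (approximate): mixes higher bits
    // downward so the low-order bits used for indexing differ more.
    static int hash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }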

ConcurrentHashMap.get() is a bit more involved, but not by much:

    public V get(Object key) {
        int hash = hash(key.hashCode());
        return segmentFor(hash).get(key, hash);
    }

Again, hash() performs some shifts and XORs. segmentFor(int hash) does a simple array lookup. The only tricky part is in Segment.get(), but even that is not rocket science:

    V get(Object key, int hash) {
        if (count != 0) { // read-volatile
            HashEntry<K,V> e = getFirst(hash);
            while (e != null) {
                if (e.hash == hash && key.equals(e.key)) {
                    V v = e.value;
                    if (v != null)
                        return v;
                    return readValueUnderLock(e); // recheck
                }
                e = e.next;
            }
        }
        return null;
    }

The only place it takes a lock is readValueUnderLock(), which is reached only when a value field is observed as null. The comments in the source say this is technically possible under the memory model, but it has never been known to occur in practice.
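For completeness, that locked fallback is tiny. Quoting the JDK 6 Segment.readValueUnderLock() roughly from memory (details may differ slightly):

    // Re-read the value while holding the segment lock; only reached
    // if a value field was seen as null during an unlocked read.
    V readValueUnderLock(HashEntry<K,V> e) {
        lock();
        try {
            return e.value;
        } finally {
            unlock();
        }
    }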

Overall, the code looks very similar in both; it is just a bit better organized in ConcurrentHashMap. So I would expect the performance to be pretty similar.

However, if writes really are that rare, you might also consider a copy-on-write mechanism.
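java.util.concurrent has no CopyOnWriteMap, but the idea described in the question (build a new map, then swap the reference) can be sketched with a volatile field. This is an illustrative sketch under the assumption that writes are extremely rare, not a drop-in class:

    import java.util.HashMap;
    import java.util.Map;

    // Copy-on-write map wrapper: reads hit a never-mutated snapshot via a
    // plain HashMap.get(); each (rare) write copies the snapshot and swaps
    // the volatile reference, which publishes the new map atomically.
    class CopyOnWriteMap<K, V> {
        private volatile Map<K, V> snapshot = new HashMap<K, V>();

        public V get(K key) {
            return snapshot.get(key);                     // lock-free read
        }

        public synchronized void put(K key, V value) {
            Map<K, V> copy = new HashMap<K, V>(snapshot); // copy old snapshot
            copy.put(key, value);                         // apply the change
            snapshot = copy;                              // volatile publish
        }
    }

Readers never block and never see a partially updated map; writers pay the full copy cost, which is only acceptable when writes are very rare and the map is not huge.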

+2

You are asking the wrong question.

If you need concurrency, you need it regardless of performance impact.

A correctly behaving program almost always takes priority over a faster one. I say "almost always" because there may be commercial reasons for shipping software with bugs rather than delaying the release until the bugs are fixed.

+3

According to the ConcurrentHashMap API documentation, retrieval operations generally do not block, so I would say the two are roughly equal in performance.
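If the difference matters for a concrete workload, it is straightforward to measure. Below is a crude, illustrative sketch (the class name and sizes are made up for the example; a serious measurement would use a harness such as JMH with proper warm-up):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Crude read-only comparison of get() on both map types.
    public class GetComparison {
        public static void main(String[] args) {
            Map<Integer, Integer> plain = new HashMap<Integer, Integer>();
            Map<Integer, Integer> concurrent = new ConcurrentHashMap<Integer, Integer>();
            for (int i = 0; i < 100000; i++) {
                plain.put(i, i);
                concurrent.put(i, i);
            }
            System.out.println("HashMap:           " + timeGets(plain) + " ms");
            System.out.println("ConcurrentHashMap: " + timeGets(concurrent) + " ms");
        }

        private static long timeGets(Map<Integer, Integer> map) {
            long sum = 0;
            long start = System.nanoTime();
            for (int round = 0; round < 100; round++) {
                for (int i = 0; i < 100000; i++) {
                    sum += map.get(i);           // read-only workload
                }
            }
            long elapsed = System.nanoTime() - start;
            if (sum == 42) {
                System.out.println(sum);         // keep the JIT from discarding the loop
            }
            return elapsed / 1000000;            // nanoseconds -> milliseconds
        }
    }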

+2

Source: https://habr.com/ru/post/1393572/

