Why do we use HashMap in multi-threaded environments?

Today I read about how HashMap works in Java. I stumbled upon a blog post and quote it directly below. I also looked at a related question on Stack Overflow, but I still want to understand the details.

So, the answer is: yes, there is a potential race condition when resizing a HashMap in Java. If two threads discover at the same time that the HashMap needs resizing, both of them try to resize it. During resizing, an element stored in a bucket's linked list may be migrated to a new bucket, and Java's HashMap adds each migrated element at the head of the list rather than the tail, to avoid traversing it. If a race condition occurs, the links can form a cycle and you get an endless loop. (This describes the pre-Java 8 implementation; Java 8 reworked the resize logic.)
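To make the hazard concrete, here is a minimal sketch (hypothetical class and method names) of the workload the quote describes: two threads filling one map at the same time. With a plain HashMap this can lose entries or, on pre-Java 8 JVMs, create the cycle described above; swapping in ConcurrentHashMap makes the same workload safe.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ResizeRaceDemo {
    // Fill the map from two threads at once with disjoint key ranges.
    // With new HashMap<>() here instead, entries can be lost or buckets
    // corrupted during a concurrent resize; ConcurrentHashMap is safe.
    public static Map<Integer, Integer> fillConcurrently() throws InterruptedException {
        Map<Integer, Integer> map = new ConcurrentHashMap<>();
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) map.put(i, i); });
        Thread b = new Thread(() -> { for (int i = 1000; i < 2000; i++) map.put(i, i); });
        a.start(); b.start();
        a.join(); b.join();
        return map;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(fillConcurrently().size()); // prints 2000
    }
}
```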

It states that because HashMap is not thread safe, a race condition may occur when the HashMap is resized. Even so, I see people in our office projects making extensive use of HashMap while knowing it is not thread safe. If it is not thread safe, why do we use HashMap at all? Is it simply a lack of knowledge among developers, who may not be aware of structures like ConcurrentHashMap, or is there some other reason? Can anyone shed light on this riddle?

+4
source
6 answers

I can safely say that ConcurrentHashMap is an often-ignored class. Few people know about it, and not many people use it. The class offers a very reliable and fast way of synchronizing a map. I have read several comparisons of HashMap and ConcurrentHashMap on the web. Let me just say that they are completely wrong. You cannot compare the two: one offers synchronized methods for accessing the map, while the other offers no synchronization whatsoever.

What most of us fail to notice is that while our applications, especially web applications, work fine during the development and testing phase, they usually keel over under heavy (or even moderately heavy) load. This is because we expect our HashMaps to behave in a certain way, but under load they typically misbehave. Hashtable does offer concurrent access to its entries, with a small caveat: the entire map is locked to perform any sort of operation.

While this overhead is ignorable in a web application under normal load, under heavy load it can lead to delayed response times and overtaxing of your server for no good reason. This is where ConcurrentHashMap comes in. It offers all the features of Hashtable with performance almost as good as a HashMap. ConcurrentHashMap accomplishes this by a very simple mechanism.

Instead of a map-wide lock, the collection maintains a list of 16 locks by default, each of which is used to guard (or lock) a single bucket of the map. This effectively means that 16 threads can modify the collection at a single time (as long as they all work on different buckets). In fact there is no operation performed by this collection that locks the entire map. (This lock striping describes the pre-Java 8 implementation; since Java 8, ConcurrentHashMap synchronizes on individual bins instead, with the same effect of fine-grained locking.)
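A small sketch of what this fine-grained locking buys you in practice (hypothetical class name): two threads updating per-key counters with the atomic merge method never corrupt each other's updates, and updates to different keys do not contend for a single map-wide lock.

```java
import java.util.concurrent.ConcurrentHashMap;

public class StripedCounterDemo {
    public static ConcurrentHashMap<String, Integer> count() throws InterruptedException {
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();
        Runnable worker = () -> {
            for (int i = 0; i < 10_000; i++) {
                // merge is atomic per key: threads touching different keys
                // (different buckets) do not block one another
                hits.merge("key-" + (i % 4), 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return hits;
    }

    public static void main(String[] args) throws InterruptedException {
        // each of the 4 keys is hit 2500 times per thread, 5000 in total
        System.out.println(count().get("key-0")); // prints 5000
    }
}
```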

+8
source

There are several aspects to this. Firstly, most collections are not thread safe. If you want a thread-safe collection, you can call Collections.synchronizedCollection or Collections.synchronizedMap.

But the main point is this: you want your threads to run in parallel with no synchronization at all, if possible. That is what you should strive for, though of course it cannot be achieved every time you do multi-threaded work. Still, it makes no sense to make collections/maps thread safe by default, because sharing a map between threads should be the exceptional case. Synchronization means a lot of extra work for the JVM.

+2
source

I did a little more research, and I would say that all the answers are good. But we can clearly see why the race condition happens in a HashMap. After a little digging on Stack Overflow I found these links, and they go quite a long way in exploring the concept further.

I think they have clarified the concept for me.

+1
source

In a multi-threaded environment you need to make sure the map is not modified concurrently, or you can run into serious memory-consistency problems, because it is not synchronized in any way.

Honestly, just check the API first. I used to think the same way.

I thought the solution was to use the static method Collections.synchronizedMap. I expected it to return a better implementation. But if you look at the source code, you will see that all it returns is just a wrapper that synchronizes every call on a mutex, which happens to be the map itself, so it does not even allow concurrent reads.

There is an implementation in the Jakarta Commons project called FastHashMap. This implementation has a property called fast. If fast is true, reads are not synchronized, and writes perform the following steps:

 1. Clone the current structure
 2. Perform the modification on the clone
 3. Replace the existing structure with the modified clone

    public class FastSynchronizedMap<K, V> implements Map<K, V>, Serializable {

        private final Map<K, V> m;
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        . . .

        public V get(Object key) {
            lock.readLock().lock();
            V value = null;
            try {
                value = m.get(key);
            } finally {
                lock.readLock().unlock();
            }
            return value;
        }

        public V put(K key, V value) {
            lock.writeLock().lock();
            V v = null;
            try {
                v = m.put(key, value);
            } finally {
                // note: must unlock here, not lock() again
                lock.writeLock().unlock();
            }
            return v;
        }

        . . .
    }

Note that we do the modification inside a try/finally block: we want to guarantee that the lock is released no matter what problem occurs in the block.

This implementation works well when you perform almost no write operations and mostly reads.
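The three clone-modify-replace steps above can be sketched as a minimal copy-on-write map (a hypothetical class, not the actual commons-collections FastHashMap): reads go to the current map with no locking at all, while each write clones the map, modifies the clone, and publishes it through a volatile reference.

```java
import java.util.HashMap;
import java.util.Map;

public class CopyOnWriteMapSketch<K, V> {
    // volatile so readers always see a fully built replacement map
    private volatile Map<K, V> current = new HashMap<>();

    public V get(Object key) {
        return current.get(key);                  // unsynchronized read
    }

    public synchronized V put(K key, V value) {
        Map<K, V> clone = new HashMap<>(current); // 1. clone the current structure
        V old = clone.put(key, value);            // 2. modify the clone
        current = clone;                          // 3. replace the existing structure
        return old;
    }

    public int size() {
        return current.size();
    }
}
```

As with FastHashMap, this trade-off only pays off for read-heavy workloads, since every write copies the whole map.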

0
source

A HashMap can be used safely when only one thread has access to it. However, when several threads start accessing the HashMap, two main problems arise: 1. Resizing of the hashmap is not guaranteed to work as expected. 2. A ConcurrentModificationException may be thrown. It can even be thrown in single-threaded access, when you read from and write to the hash map at the same time.
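The single-threaded case is easy to reproduce (hypothetical class name): HashMap's iterators are fail-fast, so structurally modifying the map while iterating over it throws a ConcurrentModificationException even with no second thread involved.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {
    // Returns true if modifying the map mid-iteration threw CME.
    public static boolean modifyWhileIterating() {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet()) {
                map.put("c", 3); // structural modification during iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // the fail-fast iterator detected the modification
        }
    }

    public static void main(String[] args) {
        System.out.println(modifyWhileIterating()); // prints true
    }
}
```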

0
source

A workaround for the resize problem when using a HashMap in a multi-threaded environment is to initialize it with the expected number of entries, thus avoiding the need for rehashing. (Note that this only sidesteps the resize race; it does not make the HashMap thread safe.)
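A sketch of that sizing trick (hypothetical helper): with the default load factor of 0.75, a HashMap resizes once size exceeds capacity x 0.75, so choosing the initial capacity as expectedSize / 0.75 + 1 ensures no resize happens for the expected count.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMapDemo {
    // Pick an initial capacity large enough that inserting expectedSize
    // entries never crosses the default 0.75 load-factor threshold.
    public static <K, V> Map<K, V> presized(int expectedSize) {
        int capacity = (int) (expectedSize / 0.75f) + 1;
        return new HashMap<>(capacity);
    }

    public static void main(String[] args) {
        Map<Integer, Integer> m = presized(1000);
        for (int i = 0; i < 1000; i++) {
            m.put(i, i); // no rehash occurs during these inserts
        }
        System.out.println(m.size()); // prints 1000
    }
}
```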

0
source

Source: https://habr.com/ru/post/1493801/
