Creating a lock for each key sounds enticing, but it may not be what you want, especially when the number of keys is large.
Since you would need to keep a dedicated (read-write) lock alive for every key, the memory overhead grows with the key space. On top of that, such fine granularity hits a point of diminishing returns for a given number of cores once concurrency gets really high.
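For illustration, a per-key locking scheme would look roughly like the sketch below. This is a hedged sketch, not a recommendation; Key, Foo and createFooExpensively() stand in for your own types and method:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of per-key locking; Key, Foo and createFooExpensively()
// are assumed to come from the surrounding code.
class PerKeyLockCache {
    private final ConcurrentMap<Key, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();
    private final ConcurrentMap<Key, Foo> cache = new ConcurrentHashMap<>();

    Foo get(Key key) {
        // Every distinct key keeps its own lock object alive,
        // so memory usage grows with the number of keys.
        ReentrantReadWriteLock lock = locks.computeIfAbsent(key, k -> new ReentrantReadWriteLock());
        lock.writeLock().lock();
        try {
            Foo cached = cache.get(key);
            if (cached == null) {
                cached = createFooExpensively(key);
                cache.put(key, cached);
            }
            return cached;
        } finally {
            lock.writeLock().unlock();
        }
    }
}

Note that every entry in locks stays around even after the value is cached, which is where the memory cost comes from.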
ConcurrentHashMap is often a good enough solution in such a situation. It provides fully concurrent reads (readers are generally not blocked), and updates can be tuned to the desired concurrency level, which gives you pretty good scalability. The above code can be expressed using ConcurrentHashMap as follows:
ConcurrentMap<Key, Foo> cache = new ConcurrentHashMap<>();
...
Foo result = cache.get(key);
if (result == null) {
    result = createFooExpensively(key);
    Foo old = cache.putIfAbsent(key, result);
    if (old != null) {
        result = old;
    }
}
The simple use of ConcurrentHashMap has one drawback: multiple threads may see that the key is not cached yet, and each of them will call createFooExpensively(). As a result, some threads end up throwing away the value they computed. To avoid that duplicated work, you can use the Memoizer pattern described in "Java Concurrency in Practice", as sketched below.
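Here is a Memoizer-style sketch, adapted from the idea in that book rather than copied from it (again assuming your Key, Foo and createFooExpensively()): each key maps to a Future, so the first thread runs the computation and later threads just wait on the same Future.

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Memoizer-style cache: the map stores a Future per key, so only the first
// thread actually runs createFooExpensively(); the others block on the Future.
class FooMemoizer {
    private final ConcurrentMap<Key, Future<Foo>> cache = new ConcurrentHashMap<>();

    Foo get(final Key key) throws InterruptedException, ExecutionException {
        Future<Foo> future = cache.get(key);
        if (future == null) {
            FutureTask<Foo> task = new FutureTask<>(new Callable<Foo>() {
                public Foo call() {
                    return createFooExpensively(key);
                }
            });
            future = cache.putIfAbsent(key, task);
            if (future == null) {
                // We won the race: run the computation on this thread.
                future = task;
                task.run();
            }
        }
        return future.get();
    }
}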
But then again, the good people at Google have already solved this problem for you, in the form of Guava's CacheBuilder:
LoadingCache<Key, Foo> cache = CacheBuilder.newBuilder()
    .concurrencyLevel(32)
    .build(new CacheLoader<Key, Foo>() {
        public Foo load(Key key) {
            return createFooExpensively(key);
        }
    });
...
Foo result = cache.get(key);
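Note that LoadingCache.get(key) throws a checked ExecutionException; if createFooExpensively() does not throw checked exceptions, cache.getUnchecked(key) is a convenient alternative that skips the try/catch.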