Key-Lock Map in Java

I am dealing with some third-party library code that creates expensive objects and caches them in a Map. The existing implementation is something like:

    lock.lock();
    try {
        Foo result = cache.get(key);
        if (result == null) {
            result = createFooExpensively(key);
            cache.put(key, result);
        }
        return result;
    } finally {
        lock.unlock();
    }

Obviously, this is not the best design when Foos for different keys can be created independently.

My current hack is to use a Map of Futures:

    lock.lock();
    Future<Foo> future;
    try {
        future = allFutures.get(key);
        if (future == null) {
            future = executorService.submit(new Callable<Foo>() {
                public Foo call() {
                    return createFooExpensively(key);
                }
            });
            allFutures.put(key, future);
        }
    } finally {
        lock.unlock();
    }

    try {
        return future.get();
    } catch (InterruptedException e) {
        throw new MyRuntimeException(e);
    } catch (ExecutionException e) {
        throw new MyRuntimeException(e);
    }

But it feels ... a bit hacky, for two reasons:

  • The work is done on an arbitrary pooled thread. I would be happy to have the work done on the first thread that tries to get that particular key, especially since it will be blocked anyway.
  • Even once the Map is fully populated, we still go through Future.get() to retrieve the results. I expect that to be fairly cheap, but it is ugly.

I would like to replace cache with a Map that blocks gets for a given key until that key has a value, but allows gets for other keys to proceed in the meantime. Does such a thing exist? Or does anyone have a cleaner alternative to the Map of Futures?
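
For what it's worth, on Java 8 and later, ConcurrentHashMap.computeIfAbsent comes close to what is asked for here: the loader runs in the calling thread, other computeIfAbsent calls for the same key block until the value is installed, and other keys proceed independently. A minimal sketch, reusing the Key, Foo and createFooExpensively names from above:

    ConcurrentMap<Key, Foo> cache = new ConcurrentHashMap<>();
    ...
    // Runs createFooExpensively in the calling thread; other threads calling
    // computeIfAbsent for the same key block until the value is present.
    // Note: a long-running loader can also delay updates to other keys that
    // hash to the same bin, so this suits moderately expensive loads best.
    Foo result = cache.computeIfAbsent(key, k -> createFooExpensively(k));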

+6
2 answers

Creating a lock for each key sounds enticing, but it may not be what you want, especially when the number of keys is large.

You would probably have to create a dedicated (read-write) lock for each key, which affects your memory usage. Also, this fine granularity may hit a point of diminishing returns for a finite number of cores if the concurrency is truly high.

ConcurrentHashMap is often a good enough solution in a situation like this. It provides full reader concurrency (readers are normally not blocked), and updates can be concurrent up to the concurrency level you desire. This gives you pretty good scalability. The code above can be expressed with ConcurrentHashMap as follows:

    ConcurrentMap<Key, Foo> cache = new ConcurrentHashMap<>();
    ...
    Foo result = cache.get(key);
    if (result == null) {
        result = createFooExpensively(key);
        Foo old = cache.putIfAbsent(key, result);
        if (old != null) {
            result = old;
        }
    }

Plain use of ConcurrentHashMap does have one drawback: multiple threads may find that the key is not cached, and each of them may then call createFooExpensively(). As a result, some threads do throw-away work. To avoid this, you would want to use the memoizer pattern mentioned in "Java Concurrency in Practice".
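
A minimal sketch of that memoizer pattern (adapted from the idea in the book, not copied verbatim): the first caller for a key runs the computation itself via a FutureTask, later callers for the same key block on the same Future, and different keys never block each other. This also addresses the question's first point, since the work is done on the first calling thread.

    import java.util.concurrent.Callable;
    import java.util.concurrent.CancellationException;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Future;
    import java.util.concurrent.FutureTask;

    class Memoizer<K, V> {
        private final ConcurrentMap<K, Future<V>> cache = new ConcurrentHashMap<>();

        V compute(K key, Callable<V> loader) throws InterruptedException {
            while (true) {
                Future<V> f = cache.get(key);
                if (f == null) {
                    FutureTask<V> task = new FutureTask<>(loader);
                    f = cache.putIfAbsent(key, task);
                    if (f == null) {
                        f = task;
                        task.run();              // compute in the calling thread
                    }
                }
                try {
                    return f.get();              // blocks only callers of this key
                } catch (CancellationException e) {
                    cache.remove(key, f);        // forget a cancelled computation and retry
                } catch (ExecutionException e) {
                    throw new RuntimeException(e.getCause());
                }
            }
        }
    }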

But then again, the good people at Google have already solved these problems for you in the form of Guava's CacheBuilder:

    LoadingCache<Key, Foo> cache = CacheBuilder.newBuilder()
            .concurrencyLevel(32)
            .build(new CacheLoader<Key, Foo>() {
                public Foo load(Key key) {
                    return createFooExpensively(key);
                }
            });
    ...
    Foo result = cache.get(key);
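
One detail worth noting: LoadingCache.get(key) declares a checked ExecutionException, so the last line above needs handling in practice; when load() throws no checked exceptions, getUnchecked(key) avoids the boilerplate. For example, reusing MyRuntimeException from the question:

    Foo result;
    try {
        result = cache.get(key);               // may throw ExecutionException
    } catch (ExecutionException e) {
        throw new MyRuntimeException(e);       // the question's wrapper exception
    }

    // Or, when load() cannot throw checked exceptions:
    Foo sameResult = cache.getUnchecked(key);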
+7

You can use PerKeySynchronizedExecutor from funtom-java-utils.

It creates a lock for each key, but removes the lock as soon as it is no longer in use.

It also gives you memory visibility between invocations with the same key, and it is designed to be very fast and to minimize contention between invocations with different keys.

Declare it in your class:

 final PerKeySynchronizedExecutor<KEY_CLASS> executor = new PerKeySynchronizedExecutor<>(); 

Use it:

 Foo foo = executor.execute(key, () -> createFooExpensively(key)); 
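
To give a rough idea of how per-key synchronization with automatic cleanup can work in general (this is only an illustrative sketch, not the funtom-java-utils implementation), a reference-counted lock per key might look like:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.function.Supplier;

    // Illustrative sketch only; not the library's actual code.
    class PerKeyLockSketch<K> {
        private static final class Entry {
            int refCount;                       // mutated only inside compute() below
            final Object mutex = new Object();
        }

        private final ConcurrentMap<K, Entry> locks = new ConcurrentHashMap<>();

        <V> V execute(K key, Supplier<V> task) {
            // Atomically create or reuse the entry for this key and take a reference.
            Entry entry = locks.compute(key, (k, e) -> {
                if (e == null) {
                    e = new Entry();
                }
                e.refCount++;
                return e;
            });
            try {
                synchronized (entry.mutex) {
                    return task.get();          // one thread per key at a time
                }
            } finally {
                // Release the reference; drop the entry once nobody holds it.
                locks.compute(key, (k, e) -> {
                    e.refCount--;
                    return e.refCount == 0 ? null : e;
                });
            }
        }
    }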
+1

Source: https://habr.com/ru/post/946110/

