Multithreaded object -> object cache map in Java?

I need a collection in Java that:

  • maps arbitrary Object keys to Object values (not just String or a limited set of key types)
  • will be used as a cache; if the key is not in the cache, the value is calculated (this does not need to be embedded in the collection).
  • will be available from multiple threads simultaneously
  • items will never be deleted from it
  • reads (cache hits) must be very fast; writes (cache misses) need not be efficient

It is acceptable for a cache miss to occur simultaneously in multiple threads, causing redundant calculation; the typical case is that the cache is mostly populated from a single thread.

A synchronized block around an unsynchronized HashMap does not meet the read-performance requirement. Per-thread local caches would be simple, but would make new threads expensive, since each thread holds a full copy of the cache.

Built-in Java 1.5 classes, or one or two class files we can copy into our MIT-licensed project, are preferred over large external libraries.

+4
4 answers

Use Java's built-in ConcurrentHashMap:

```java
ConcurrentHashMap<Object, Object> table = new ConcurrentHashMap<Object, Object>();

public Object getFromCache(Object key) {
    Object value = table.get(key);
    if (value == null) {
        // Key isn't in the table, i.e. not yet in the cache.
        value = calculateValueForKey(key);
        Object fromCache = table.putIfAbsent(key, value);
        if (fromCache != null) {
            // Another thread inserted a value first; use its value.
            value = fromCache;
        }
    }
    return value;
}

/**
 * Calculates a new value to put into the cache.
 */
public abstract Object calculateValueForKey(Object key);
```

N.B. This is not a fully general solution for multithreaded caching; it relies on the stated assumption that values are effectively immutable and interchangeable, so redundant computation is harmless and it does not matter which thread's result ends up in the cache.

+7

This is my own idea for a solution, but I'm not an expert in concurrent programming, so please comment, vote, and compare it with the other answers as you see fit.

Use a thread-local variable (java.lang.ThreadLocal) holding a per-thread hash map, used as a first-level cache. If the key is not found there, fall back to a second-level cache: a single map shared by all threads and accessed under synchronization. That way each cache value is calculated only once and shared among all threads, while each thread keeps a local copy of the key-to-value mapping. This costs some memory (though less than fully independent per-thread caches), but reads stay fast.
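The answer gives no code, so here is a minimal sketch of the two-level scheme it describes; the class and interface names are my own invention, not from the answer.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical names; this only illustrates the scheme described above.
interface CacheComputable<K, V> {
    V compute(K key);
}

class TwoLevelCache<K, V> {

    // Second level: one map shared by all threads, guarded by its own lock.
    private final Map<K, V> shared = new HashMap<K, V>();

    // First level: each thread's private copy of the mapping, read without locks.
    private final ThreadLocal<Map<K, V>> local =
        new ThreadLocal<Map<K, V>>() {
            @Override
            protected Map<K, V> initialValue() {
                return new HashMap<K, V>();
            }
        };

    private final CacheComputable<K, V> computer;

    TwoLevelCache(CacheComputable<K, V> computer) {
        this.computer = computer;
    }

    V get(K key) {
        Map<K, V> mine = local.get();
        V value = mine.get(key);            // fast path: no synchronization
        if (value == null) {
            synchronized (shared) {         // slow path: shared second level
                value = shared.get(key);
                if (value == null) {
                    // Computed at most once across all threads, because the
                    // shared lock is held during the computation.
                    value = computer.compute(key);
                    shared.put(key, value);
                }
            }
            mine.put(key, value);           // remember locally for next time
        }
        return value;
    }
}
```

Note the trade-off the answer mentions: holding the shared lock while computing guarantees each value is computed once, at the cost of blocking other missing-key lookups during that computation.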

+2

How about something like this SingletonCache class from one of my projects?

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public abstract class SingletonCache<K, V> {

    private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<K, V>();

    public V get(K key) {
        V value = cache.get(key);
        if (value == null) {
            cache.putIfAbsent(key, newInstance(key));
            value = cache.get(key);
        }
        return value;
    }

    protected abstract V newInstance(K key);
}
```

To use it, extend the class and implement the newInstance method, which creates a new value on a cache miss; then call get with a key to obtain the instance corresponding to that key.
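The answer's original usage example is not reproduced above, so here is a hypothetical one; the subclass and its key/value types are my own invention. The SingletonCache class from the answer is repeated so the example compiles on its own.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Repeated from the answer above so this example is self-contained.
abstract class SingletonCache<K, V> {

    private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<K, V>();

    public V get(K key) {
        V value = cache.get(key);
        if (value == null) {
            cache.putIfAbsent(key, newInstance(key));
            value = cache.get(key);
        }
        return value;
    }

    protected abstract V newInstance(K key);
}

// Hypothetical subclass: caches one upper-cased String per key.
class UpperCaseCache extends SingletonCache<String, String> {
    @Override
    protected String newInstance(String key) {
        return key.toUpperCase();
    }
}
```

Repeated calls to get with the same key then return the same cached String instance, even if newInstance happened to run more than once during a race.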

This class ensures that only one instance is returned for each key, although the newInstance method may be called several times concurrently; in that case the first stored instance wins and the rest are discarded. Also note that this cache never evicts: it retains all values indefinitely (in my case only a limited number of instances need caching). Reads from ConcurrentHashMap do not lock, so it should satisfy the read-performance requirement.

+2

What about the cache described in Section 5.6 of Java Concurrency in Practice by Brian Goetz? It is also described in a linked article.

It uses only classes from the java.util.concurrent package.

The linked article builds the cache up step by step, describing the flaws of each version, until the final version is an efficient cache in which only one thread will compute a missing element.

I cut and pasted the final code below, but it is worth reading the article and thinking through the problems it outlines. Or better yet, buy the book; it's great.

```java
import java.util.concurrent.*;

// Interface from the same article, included here so the example compiles.
interface Computable<A, V> {
    V compute(A arg) throws InterruptedException;
}

public class Memoizer<A, V> implements Computable<A, V> {

    private final ConcurrentMap<A, Future<V>> cache =
        new ConcurrentHashMap<A, Future<V>>();
    private final Computable<A, V> c;

    public Memoizer(Computable<A, V> c) {
        this.c = c;
    }

    public V compute(final A arg) throws InterruptedException {
        while (true) {
            Future<V> f = cache.get(arg);
            if (f == null) {
                Callable<V> eval = new Callable<V>() {
                    public V call() throws InterruptedException {
                        return c.compute(arg);
                    }
                };
                FutureTask<V> ft = new FutureTask<V>(eval);
                f = cache.putIfAbsent(arg, ft);
                if (f == null) {
                    f = ft;
                    ft.run();
                }
            }
            try {
                return f.get();
            } catch (CancellationException e) {
                cache.remove(arg, f);
            } catch (ExecutionException e) {
                // Kabutz: this is my addition to the code...
                try {
                    throw e.getCause();
                } catch (RuntimeException ex) {
                    throw ex;
                } catch (Error ex) {
                    throw ex;
                } catch (Throwable t) {
                    throw new IllegalStateException("Not unchecked", t);
                }
            }
        }
    }
}
```
+1

Source: https://habr.com/ru/post/1302608/

