Factory singleton objects: is this code safe?

I have a common interface for a number of singleton implementations. The interface defines an initialization method that can throw a checked exception.

I need a factory that, on request, returns cached singleton implementations, and I wonder whether the following approach is thread-safe.

UPDATE1: Please do not suggest any third-party libraries, as that would require obtaining legal clearance due to possible licensing problems :-)

UPDATE2: this code is likely to be used in an EJB environment, so it is preferable not to spawn additional threads or use anything along those lines.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicReference;

interface Singleton {
    void init() throws SingletonException;
}

public class SingletonFactory {

    private static ConcurrentMap<String, AtomicReference<? extends Singleton>> CACHE =
            new ConcurrentHashMap<String, AtomicReference<? extends Singleton>>();

    public static <T extends Singleton> T getSingletonInstance(Class<T> clazz)
            throws SingletonException {
        String key = clazz.getName();
        if (CACHE.containsKey(key)) {
            return readEventually(key);
        }

        AtomicReference<T> ref = new AtomicReference<T>(null);
        if (CACHE.putIfAbsent(key, ref) == null) {
            try {
                T instance = clazz.newInstance();
                instance.init();
                ref.set(instance); // ----- (1) -----
                return instance;
            } catch (Exception e) {
                throw new SingletonException(e);
            }
        }

        return readEventually(key);
    }

    @SuppressWarnings("unchecked")
    private static <T extends Singleton> T readEventually(String key) {
        T instance = null;
        AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
        do {
            instance = ref.get(); // ----- (2) -----
        } while (instance == null);
        return instance;
    }
}

I'm not quite sure about lines (1) and (2). I know that the value held by AtomicReference is declared as a volatile field, and therefore the change made at line (1) should immediately become visible at line (2) - but still I have some doubts...

Other than that, I think using ConcurrentHashMap addresses the atomicity of putting a new key into the cache.

Do you guys see any problems with this approach? Thanks!

PS: I know about the static holder class idiom and do not use it because of the ExceptionInInitializerError (which wraps any exception thrown while creating the singleton instance) and the subsequent NoClassDefFoundError, which are not what I want to catch. Instead, I would like to take advantage of the declared checked exception by catching it and handling it gracefully, rather than parsing the stack trace of EIIE or NCDFE.
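
For reference, a minimal sketch (hypothetical class names, not part of the question's code) of the holder idiom being rejected here, showing where the ExceptionInInitializerError comes from:

// Hypothetical holder-idiom singleton, shown only to illustrate why it was
// rejected: a checked exception cannot escape a static initializer, so an
// init() failure surfaces as ExceptionInInitializerError on first use and as
// NoClassDefFoundError on every later use of the class.
public class MySingleton implements Singleton {

    public void init() throws SingletonException {
        // ... initialization that may throw SingletonException ...
    }

    private static class Holder {
        static final MySingleton INSTANCE;
        static {
            try {
                MySingleton s = new MySingleton();
                s.init();
                INSTANCE = s;
            } catch (SingletonException e) {
                // Must be wrapped in an unchecked exception; the JVM reports it
                // to the first caller as ExceptionInInitializerError.
                throw new IllegalStateException(e);
            }
        }
    }

    public static MySingleton getInstance() {
        return Holder.INSTANCE;
    }
}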

+6
6 answers

Having all of these concurrent/atomic things will cause more locking issues than just putting

 synchronized(clazz){} 

blocks around the getter. Atomic references are for references that are UPDATED and where you don't want collisions. Here you have a single writer, so you don't care.

You can optimize it further by doing a plain hash map lookup first, and only on a miss fall back to the synchronized block:

 // a plain map for the fast, unsynchronized lookup (as implied above)
 private static final Map<Class<?>, Object> cache = new HashMap<Class<?>, Object>();

 @SuppressWarnings("unchecked")
 public static <T> T get(Class<T> cls) throws Exception {
     // No lock: try the cache first
     T ref = (T) cache.get(cls);
     if (ref != null) {
         return ref;
     }
     // Miss, so use the create lock
     synchronized (cls) {            // prevents the same singleton from being created twice
         synchronized (cache) {      // prevents table rebuild/transfer contention -- RARE
             // Double-check the create in case the lock was backed up
             ref = (T) cache.get(cls);
             if (ref == null) {
                 ref = cls.newInstance();
                 cache.put(cls, ref);
             }
             return ref;
         }
     }
 }
+3

You went to great lengths to avoid synchronization, and I assume the reason is performance concerns. Have you tested whether this actually improves performance compared to a synchronized solution?

The reason I ask is that the concurrent classes tend to be slower than their non-concurrent counterparts, not to mention the extra level of indirection through the atomic reference. Depending on the contention among your threads, a naive synchronized solution may be faster (and easier to verify).

Also, I think you can end up in an endless loop when a SingletonException is thrown during the call to instance.init(). The reason is that a concurrent thread waiting in readEventually will never find its instance (since an exception was thrown while another thread was initializing it). Maybe that is the right behavior for your case, or maybe you want to set some special value on the reference to signal the exception, so it can be rethrown in the waiting thread.
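
A rough, untested sketch of that second suggestion, keeping the structure and names of the question's factory (the value type becomes Object so the same AtomicReference can carry either the initialized singleton or the failure):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical variant of the question's factory: the AtomicReference now holds
// either the initialized Singleton or the SingletonException, so a waiting
// thread can stop spinning and rethrow instead of looping forever.
public class SingletonFactory {

    private static final ConcurrentMap<String, AtomicReference<Object>> CACHE =
            new ConcurrentHashMap<String, AtomicReference<Object>>();

    public static <T extends Singleton> T getSingletonInstance(Class<T> clazz)
            throws SingletonException {
        String key = clazz.getName();
        AtomicReference<Object> ref = CACHE.get(key);
        if (ref == null) {
            AtomicReference<Object> newRef = new AtomicReference<Object>(null);
            ref = CACHE.putIfAbsent(key, newRef);
            if (ref == null) {
                try {
                    T instance = clazz.newInstance();
                    instance.init();
                    newRef.set(instance);
                    return instance;
                } catch (Exception e) {
                    SingletonException se = new SingletonException(e);
                    newRef.set(se);   // waiting threads see the failure ...
                    throw se;         // ... and the creating thread rethrows it
                }
            }
        }
        return readEventually(clazz, ref);
    }

    private static <T extends Singleton> T readEventually(Class<T> clazz,
            AtomicReference<Object> ref) throws SingletonException {
        Object value;
        do {
            value = ref.get();
        } while (value == null);
        if (value instanceof SingletonException) {
            throw (SingletonException) value;   // initialization failed in another thread
        }
        return clazz.cast(value);
    }
}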

+3

Consider using Guava's CacheBuilder. For instance:

 private static LoadingCache<Class<? extends Singleton>, Singleton> singletons =
     CacheBuilder.newBuilder().build(
         new CacheLoader<Class<? extends Singleton>, Singleton>() {
             public Singleton load(Class<? extends Singleton> key) throws SingletonException {
                 try {
                     Singleton singleton = key.newInstance();
                     singleton.init();
                     return singleton;
                 } catch (SingletonException se) {
                     throw se;
                 } catch (Exception e) {
                     throw new SingletonException(e);
                 }
             }
         });

 @SuppressWarnings("unchecked")
 public static <T extends Singleton> T getSingletonInstance(Class<T> clazz)
         throws SingletonException {
     try {
         return (T) singletons.get(clazz);
     } catch (ExecutionException e) {
         throw new SingletonException(e);
     }
 }

Note: this example is untested and has not been compiled.

The Guava LoadingCache implementation will handle all of the caching and concurrency logic for you.

0

It looks like this will work, although I would consider adding some kind of sleep, even just a nanosecond or so, when testing the reference. The spin-test loop is going to be extremely expensive.

Also, I would consider improving the code by passing the AtomicReference to readEventually(), so that you can avoid the containsKey()-then-putIfAbsent() race condition. The code would then look like this:

 AtomicReference<T> ref = (AtomicReference<T>) CACHE.get(key);
 if (ref != null) {
     return readEventually(ref);
 }

 AtomicReference<T> newRef = new AtomicReference<T>(null);
 AtomicReference<T> oldRef = CACHE.putIfAbsent(key, newRef);
 if (oldRef != null) {
     return readEventually(oldRef);
 }
 ...
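
A hypothetical sketch of a readEventually() combining both ideas, taking the reference directly and backing off briefly between reads (the exact pause is a guess; Thread.sleep granularity is far coarser than a nanosecond in practice):

 // Hypothetical readEventually() that takes the AtomicReference directly and
 // pauses briefly between checks instead of spinning as fast as possible.
 private static <T extends Singleton> T readEventually(AtomicReference<T> ref)
         throws SingletonException {
     T instance;
     while ((instance = ref.get()) == null) {
         try {
             Thread.sleep(0, 1000);   // back off for roughly a microsecond between reads
         } catch (InterruptedException e) {
             Thread.currentThread().interrupt();
             throw new SingletonException(e);
         }
     }
     return instance;
 }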
0

The code is not, in general, thread safe because there is a gap between the CACHE.containsKey(key) check and the CACHE.putIfAbsent(key, ref) call. It is possible for two threads to invoke the method at the same time (especially on multi-core/multi-processor systems), both pass the containsKey() check, and then both try to put and create.

I would protect the execution of the getSingletonInstance() method with either a lock or synchronization on some monitor.
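
A minimal sketch of what that could look like, reusing the Singleton and SingletonException types from the question (the synchronized getter and the class name here are this answer's suggestion, not the original code):

import java.util.HashMap;
import java.util.Map;

public class SynchronizedSingletonFactory {

    private static final Map<String, Singleton> CACHE = new HashMap<String, Singleton>();

    // The whole lookup-or-create step runs under one monitor, so the check and
    // the put can no longer be interleaved by two threads.
    public static synchronized <T extends Singleton> T getSingletonInstance(Class<T> clazz)
            throws SingletonException {
        String key = clazz.getName();
        @SuppressWarnings("unchecked")
        T instance = (T) CACHE.get(key);
        if (instance == null) {
            try {
                instance = clazz.newInstance();
                instance.init();
            } catch (SingletonException se) {
                throw se;
            } catch (Exception e) {
                throw new SingletonException(e);
            }
            CACHE.put(key, instance);
        }
        return instance;
    }
}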

-1

google "Memoizer". basically, instead of AtomicReference use Future .

-1

Source: https://habr.com/ru/post/902776/

