I came across the following code, and I think the while loop in the addElement method is useless. Since a write lock is already held, the queue should never grow beyond maxSize + 1 items, so a single check should suffice. Why, then, does addElement keep removing elements until this condition becomes false?

    while (concurrentLinkedQueue.size() >= maxSize)
Any pointers around this would be great.
Here is the implementation:
    public class LRUCache<K, V> {
        private ConcurrentLinkedQueue<K> concurrentLinkedQueue = new ConcurrentLinkedQueue<K>();
        private ConcurrentHashMap<K, V> concurrentHashMap = new ConcurrentHashMap<K, V>();
        private ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
        private Lock readLock = readWriteLock.readLock();
        private Lock writeLock = readWriteLock.writeLock();
        int maxSize = 0;

        public LRUCache(final int MAX_SIZE) {
            this.maxSize = MAX_SIZE;
        }

        public V getElement(K key) {
            readLock.lock();
            try {
                V v = null;
                if (concurrentHashMap.contains(key)) {
                    concurrentLinkedQueue.remove(key);
                    v = concurrentHashMap.get(key);
                    concurrentLinkedQueue.add(key);
                }
                return v;
            } finally {
                readLock.unlock();
            }
        }

        public V removeElement(K key) {
            writeLock.lock();
            try {
                V v = null;
                if (concurrentHashMap.contains(key)) {
                    v = concurrentHashMap.remove(key);
                    concurrentLinkedQueue.remove(key);
                }
                return v;
            } finally {
                writeLock.unlock();
            }
        }

        public V addElement(K key, V value) {
            writeLock.lock();
            try {
                if (concurrentHashMap.contains(key)) {
                    concurrentLinkedQueue.remove(key);
                }
                while (concurrentLinkedQueue.size() >= maxSize) {
                    K queueKey = concurrentLinkedQueue.poll();
                    concurrentHashMap.remove(queueKey);
                }
                concurrentLinkedQueue.add(key);
                concurrentHashMap.put(key, value);
                return value;
            } finally {
                writeLock.unlock();
            }
        }
    }
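To make the question concrete, here is a minimal, self-contained sketch of just the eviction loop, isolated from the rest of the class (the class name EvictionLoopDemo and the seeded keys are mine, not from the original). It shows what the loop does if the queue has somehow grown past maxSize; with the write lock held, entry to addElement should only ever see at most maxSize elements, so the loop would run at most once:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class EvictionLoopDemo {
    public static void main(String[] args) {
        int maxSize = 3;
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

        // Simulate a queue that has somehow grown past maxSize.
        for (int i = 0; i < 5; i++) {
            queue.add("k" + i);
            map.put("k" + i, i);
        }

        // The loop from addElement: evict oldest entries until one slot is free.
        while (queue.size() >= maxSize) {
            String oldest = queue.poll();
            map.remove(oldest);
        }

        // 5 entries minus 3 evictions leaves 2, i.e. maxSize - 1,
        // so the subsequent add brings the cache back to exactly maxSize.
        System.out.println(queue.size());
    }
}
```

Under the stated invariant (size <= maxSize on entry), replacing the while with an if would evict exactly one element, which is why the loop looks redundant to me.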