How can I work around this apparent EhCache deadlock?

Using ehCache 2.4.4, I seem to be deadlocked on an ehCache Segment. From other logs, I know that the "waiting thread", 1694, last did any work 9 hours before the stack trace was taken. Meanwhile, thread 1696 has moved on and done plenty of other work, so this lock is definitely being held in a strange way.

I am fairly sure that I do not directly lock any Segment instances, so I assume this is some kind of problem internal to the library. Any ideas?

"Model Executor - 1696" Id=1696 in TIMED_WAITING on lock=java.u til.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@ 92eb1ed at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(Unknown Source) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(Unknown Source) at java.util.concurrent.PriorityBlockingQueue.poll(Unknown Source) at com.rtrms.application.modeling.local.BlockingTaskList.takeTask(BlockingTaskList.java:20) at com.rtrms.application.modeling.local.ModelExecutor.executeNextTask(ModelExecutor.java:71) at com.rtrms.application.modeling.local.ModelExecutor.run(ModelExecutor.java:46) Locked synchronizers: count = 1 - java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@4a 3d767f "Model Executor - 1694" Id=1694 in WAITING on loc k=java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@ 4a3d767f owned by Model Executor - 1696 Id=1696 at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(Unknown Source) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown Source) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(Unknown Source) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(Unknown Source) at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(Unknown Source) at net.sf.ehcache.store.compound.Segment.unretrievedGet(Segment.java:248) at net.sf.ehcache.store.compound.CompoundStore.unretrievedGet(CompoundStore.java:191) at net.sf.ehcache.store.compound.impl.DiskPersistentStore.containsKeyInMemory(DiskPersistentStore.java:72) at net.sf.ehcache.Cache.searchInStoreWithStats(Cache.java:1884) at net.sf.ehcache.Cache.get(Cache.java:1549) at com.rtrms.amoeba.cache.DistributedModeledSecurities.get(DistributedModeledSecurities.java:57) at com.rtrms.amoeba.modeling.AssertPersistedModeledSecurities.get(AssertPersistedModeledSecurities.java:44) at 
com.rtrms.application.modeling.tasks.ExpandableModelingTask.getNextUnexecutedTask(ExpandableModelingTask.java:35) at com.rtrms.application.modeling.local.BlockingTaskList.takeTask(BlockingTaskList.java:36) at com.rtrms.application.modeling.local.ModelExecutor.executeNextTask(ModelExecutor.java:71) at com.rtrms.application.modeling.local.ModelExecutor.run(ModelExecutor.java:46) Locked synchronizers: count = 0 
1 answer

It turns out that calls like Cache.acquireWriteLockOnKey end up taking a lock on the internal Segment, so this apparent deadlock was caused by a .unlock call that wasn't in a finally block.
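The failure mode can be reproduced with plain JDK locks: if the unlock is not in a finally block, an exception leaks the write lock, and every later reader parks forever, exactly like thread 1694 above. A minimal sketch (the lock here is a stand-in for Ehcache's internal Segment lock, not Ehcache's actual code):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FinallyLockDemo {
    static final ReentrantReadWriteLock segmentLock = new ReentrantReadWriteLock();

    // Buggy: if doWork() throws, the unlock is skipped and the
    // write lock is held forever by this thread.
    static void buggy() {
        segmentLock.writeLock().lock();
        doWork();                          // may throw
        segmentLock.writeLock().unlock();  // never reached on exception
    }

    // Fixed: the finally block guarantees the unlock runs.
    static void fixed() {
        segmentLock.writeLock().lock();
        try {
            doWork();
        } finally {
            segmentLock.writeLock().unlock();
        }
    }

    static void doWork() { throw new RuntimeException("boom"); }

    // Checks from another thread whether a read lock is still obtainable.
    static boolean tryReadFromOtherThread() {
        final boolean[] result = new boolean[1];
        Thread t = new Thread(() -> {
            result[0] = segmentLock.readLock().tryLock();
            if (result[0]) segmentLock.readLock().unlock();
        });
        t.start();
        try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return result[0];
    }

    public static void main(String[] args) {
        try { buggy(); } catch (RuntimeException ignored) {}
        // The leaked write lock now blocks all readers, as in the thread dump.
        System.out.println("read lock after buggy(): " + tryReadFromOtherThread()); // false

        segmentLock.writeLock().unlock(); // clean up the leaked lock (same owner thread)
        try { fixed(); } catch (RuntimeException ignored) {}
        System.out.println("read lock after fixed(): " + tryReadFromOtherThread()); // true
    }
}
```

The same try/finally discipline applies to Cache.acquireWriteLockOnKey / Cache.releaseWriteLockOnKey in Ehcache 2.x: the release call belongs in a finally block.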

Editorial comment: this also implies that you can get lock contention by trying to lock two different keys that happen to fall into the same segment, which is pretty bad.


Source: https://habr.com/ru/post/900956/
