This is what I found in antirez.com/post/redis-as-LRU-cache.html: the whole point of the sampling-based algorithm (with a sample size of three) is to save memory. That matters more here than perfect accuracy, especially since randomized algorithms like this are often misunderstood. Example: with a sample of three objects, expiring 666 objects from a 999-element dataset gives an error rate of only 14% compared to an ideal LRU algorithm, and within that 14% there are hardly any elements from the most heavily used range. So the memory gain is clearly worth the small loss in accuracy.
So, although Redis evicts based on a random sample (meaning this is not true LRU, but an approximation of it), the accuracy is relatively high, and increasing the sample size improves it further. However, if someone needs exact LRU with zero tolerance for error, Redis may be the wrong choice.
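To make the idea concrete, here is a minimal sketch of sampling-based eviction in Python. This is an illustration of the general technique only, not Redis's actual implementation: the class name, the logical access clock, and the `sample_size` parameter (playing the role of Redis's `maxmemory-samples` setting) are all assumptions for the example.

```python
import random

class SampledLRUCache:
    """Toy sketch of approximated LRU eviction by random sampling.
    Not Redis's actual code; purely illustrative."""

    def __init__(self, max_items, sample_size=3):
        self.max_items = max_items
        self.sample_size = sample_size   # analogous to maxmemory-samples
        self.store = {}                  # key -> value
        self.last_access = {}            # key -> logical access time
        self.clock = 0                   # monotonically increasing counter

    def _touch(self, key):
        self.clock += 1
        self.last_access[key] = self.clock

    def get(self, key):
        if key in self.store:
            self._touch(key)
            return self.store[key]
        return None

    def set(self, key, value):
        if key not in self.store and len(self.store) >= self.max_items:
            self._evict_one()
        self.store[key] = value
        self._touch(key)

    def _evict_one(self):
        # Instead of maintaining a full LRU list (which costs extra memory
        # per key), pick a few random keys and evict the least recently
        # used one among that sample.
        sample = random.sample(list(self.store),
                               min(self.sample_size, len(self.store)))
        victim = min(sample, key=lambda k: self.last_access[k])
        del self.store[victim]
        del self.last_access[victim]
```

The design point: exact LRU needs a per-key linked-list node (extra pointers per object), while the sampled version only needs a timestamp, which is where the memory saving comes from. Raising `sample_size` makes eviction choices closer to true LRU at the cost of more work per eviction.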
Architecture, as they say, is about trade-offs. This approach (Redis's approximated LRU) trades a bit of accuracy for raw performance and lower memory usage.