Redis Internals - the sampling-based LRU implementation

Does anyone know about the internals of Redis's LRU eviction?

How does Redis ensure that older (less recently used) keys are evicted first (assuming we have no volatile keys and no TTL expiration set)?

I know that Redis has a "maxmemory-samples" configuration parameter that controls the sample size it uses when evicting keys: if you set the sample size to 10, it examines 10 keys and evicts the oldest one among them.

What I do not know is whether the keys are sampled completely at random, or whether Redis has some mechanism that biases the selection toward the equivalent of an "older / less recently used generation".
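
To make my understanding concrete, here is a rough Python sketch of the naive sampling eviction I have in mind (a toy model of my assumption, not Redis's actual code; the names and the bookkeeping are made up for illustration):

```python
import random
import time

# Toy model of "approximated LRU by sampling": every key records when it
# was last touched, and eviction removes the least recently used key out
# of a random sample of `maxmemory_samples` keys.
last_access = {}            # key -> last access timestamp
maxmemory_samples = 10      # analogous to the maxmemory-samples setting

def touch(key):
    """Record a read or write of a key."""
    last_access[key] = time.monotonic()

def evict_one():
    """Sample N random keys and evict the oldest key of the sample."""
    if not last_access:
        return None
    sample = random.sample(list(last_access),
                           min(maxmemory_samples, len(last_access)))
    victim = min(sample, key=last_access.get)
    del last_access[victim]
    return victim
```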

+5
2 answers

This is what I found at antirez.com/post/redis-as-LRU-cache.html: the whole point of sampling just three keys is to save memory. I think this is much more valuable than precision, especially since such randomized algorithms are rarely well understood. An example: sampling with three objects will expire 666 objects out of a dataset of 999 with an error rate of only 14% compared to a perfect LRU algorithm, and among the 14% of remaining elements there are hardly any that fall in the range of heavily used elements. So the memory gain will pay for the loss of precision without a doubt.

So, although Redis samples keys randomly (meaning this is not a true LRU but an approximation of it), the accuracy is relatively high, and increasing the sample size raises it further. However, if someone needs an exact LRU (zero tolerance for error), then Redis might be the wrong choice.
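
As a toy illustration of that last point (a hypothetical simulation I wrote, not the 666-out-of-999 measurement from the quoted post and not taken from the Redis sources), you can check how close the sampled victim gets to the true LRU victim as the sample size grows:

```python
import random

N_KEYS = 999       # size of the model dataset, keys ranked 0..998 by idle time
TRIALS = 10_000    # eviction rounds to simulate per sample size

def evicted_rank(sample_size):
    """Rank 0 is the most idle key (the one a perfect LRU would evict).
    Sampled eviction picks the most idle key of a random sample, so the
    rank it ends up evicting is simply the minimum rank in the sample."""
    return min(random.sample(range(N_KEYS), sample_size))

for k in (3, 5, 10):
    avg = sum(evicted_rank(k) for _ in range(TRIALS)) / TRIALS
    print(f"sample size {k:2d}: evicted key is on average the "
          f"{avg:.0f}-th most idle out of {N_KEYS}")
```

Larger samples evict keys closer to the true LRU victim (on average roughly rank N/(k+1) for sample size k), which is the precision-versus-cost knob that maxmemory-samples exposes.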

Architecture, as they say, is all about trade-offs. This approach (Redis LRU) trades accuracy for raw performance.

+7

Starting with version 3.0.0 (2014), the LRU algorithm uses a pool of 15 keys, populated with the best candidates from successive samples of N keys (where N is determined by maxmemory-samples).

Each time a key needs to be evicted, N new keys are randomly sampled and checked against the pool. Those that are better candidates (older keys) are added to it, while the worst candidates (the most recently accessed keys) are dropped, keeping the pool at a constant size of 15 keys.

At the end of the round, the best eviction candidate is selected from the pool.

Source: the code and comments in the evict.c file in the Redis source code.
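
For illustration, here is a rough Python sketch of that pool mechanism as described above (a simplified model based on this description, not a translation of the actual C code in evict.c; the constants and names are chosen for the example):

```python
import random
import time

POOL_SIZE = 15          # constant pool size described above
SAMPLES = 5             # N, i.e. the maxmemory-samples setting

last_access = {}        # toy dataset: key -> last access timestamp

def idle_time(key):
    """Time since the key was last touched; higher means a better eviction candidate."""
    return time.monotonic() - last_access[key]

def evict_one(pool):
    """One eviction round: sample N random keys, merge the better (more idle)
    candidates into the pool, trim it back to POOL_SIZE, then evict the best."""
    for key in random.sample(list(last_access), min(SAMPLES, len(last_access))):
        if key not in pool:
            pool.append(key)
    pool.sort(key=idle_time, reverse=True)   # most idle candidates first
    del pool[POOL_SIZE:]                     # keep the pool at a constant size
    victim = pool.pop(0)                     # best eviction candidate
    del last_access[victim]
    return victim

# Toy usage: insert 100 keys in order, then run a few eviction rounds.
pool = []
for i in range(100):
    last_access[f"key:{i}"] = time.monotonic()
for _ in range(3):
    print("evicted", evict_one(pool))
```

Compared with plain sampling, the pool lets good candidates found in earlier rounds survive and compete against later samples, which pushes the approximation closer to a true LRU for the same value of maxmemory-samples.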

0

Source: https://habr.com/ru/post/1389417/

