I have the result of a fairly expensive query: it is a combination of several tables plus a map-reduce job.
This result is cached in memcached for 15 minutes. When the cache expires, the queries obviously get executed again and the cache warms back up.
But right at the moment of expiration, a thundering herd problem can occur.
One way I'm working around this right now is to run a scheduled task that kicks in at the 14th minute and refreshes the cache. But somehow that feels like a kludge to me.
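Roughly, the kind of thing I mean (a simplified sketch, assuming a Celery beat task and Django's cache framework; run_expensive_query() and the key name are stand-ins for my actual job, not real code):

```python
# Simplified sketch of the "refresh at minute 14" idea, assuming Celery beat
# and Django's cache framework; run_expensive_query() is a stand-in for the
# real multi-table / map-reduce job.
from celery import shared_task
from django.core.cache import cache

CACHE_KEY = "expensive_query_result"
CACHE_TTL = 15 * 60  # memcached entry lives for 15 minutes


def run_expensive_query():
    """Placeholder for the expensive multi-table query / map-reduce job."""
    ...


@shared_task
def prewarm_expensive_query():
    # Scheduled every 14 minutes (e.g. via CELERY_BEAT_SCHEDULE), so the
    # cached entry is replaced just before the old one expires.
    result = run_expensive_query()
    cache.set(CACHE_KEY, result, CACHE_TTL)
```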
Another approach that I like is nginx's proxy_cache_use_stale updating; mechanism.
The web server keeps serving the stale cached response, while a single request is sent upstream at expiration time to refresh the cache.
Has anyone applied something like this to memcached, even though I understand it would have to be a client-side strategy?
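To make the question concrete, this is what I imagine a client-side version would look like: store a soft expiry alongside the value, keep serving the stale value once the soft expiry has passed, and let only one request (guarded by cache.add() used as a lock) recompute it. Just a sketch against Django's cache API with memcached behind it; key names, TTLs and run_expensive_query() are made up.

```python
# Rough client-side "serve stale, refresh in one request" sketch, assuming
# Django's cache framework backed by memcached. All names and TTLs are
# illustrative placeholders.
import time
from django.core.cache import cache

KEY = "expensive_query_result"
SOFT_TTL = 15 * 60   # when we *want* to refresh
HARD_TTL = 20 * 60   # when memcached actually drops the entry
LOCK_TTL = 60        # only one client refreshes at a time


def run_expensive_query():
    """Placeholder for the expensive multi-table query / map-reduce job."""
    ...


def get_expensive_result():
    value = cache.get(KEY)
    if value is not None:
        payload, soft_expiry = value
        if time.time() < soft_expiry:
            return payload  # still fresh
        # Stale: exactly one request recomputes, everyone else keeps
        # getting the stale payload in the meantime.
        if cache.add(KEY + ":lock", 1, LOCK_TTL):
            payload = run_expensive_query()
            cache.set(KEY, (payload, time.time() + SOFT_TTL), HARD_TTL)
            cache.delete(KEY + ":lock")
        return payload
    # Cold cache (e.g. after a restart): compute synchronously.
    payload = run_expensive_query()
    cache.set(KEY, (payload, time.time() + SOFT_TTL), HARD_TTL)
    return payload
```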
In case it's relevant: I'm using Django.