How do I detect a Redis memory leak?

Starting yesterday, our Redis servers have gradually been using more memory (about 200 MB/hour), while the number of keys (~330 thousand) and their data size (132 MB according to redis-rdb-tools) remain approximately the same.

The INFO output from redis-cli shows 6.89 GB of used memory?!

    redis_version:2.4.10
    redis_git_sha1:00000000
    redis_git_dirty:0
    arch_bits:64
    multiplexing_api:epoll
    gcc_version:4.4.6
    process_id:3437
    uptime_in_seconds:296453
    uptime_in_days:3
    lru_clock:1905188
    used_cpu_sys:8605.03
    used_cpu_user:1480.46
    used_cpu_sys_children:1035.93
    used_cpu_user_children:3504.93
    connected_clients:404
    connected_slaves:0
    client_longest_output_list:0
    client_biggest_input_buf:0
    blocked_clients:0
    used_memory:7400076728
    used_memory_human:6.89G
    used_memory_rss:7186984960
    used_memory_peak:7427443856
    used_memory_peak_human:6.92G
    mem_fragmentation_ratio:0.97
    mem_allocator:jemalloc-2.2.5
    loading:0
    aof_enabled:0
    changes_since_last_save:1672
    bgsave_in_progress:0
    last_save_time:1403172198
    bgrewriteaof_in_progress:0
    total_connections_received:3616
    total_commands_processed:127741023
    expired_keys:0
    evicted_keys:0
    keyspace_hits:18817574
    keyspace_misses:8285349
    pubsub_channels:0
    pubsub_patterns:0
    latest_fork_usec:1619791
    vm_enabled:0
    role:slave
    master_host:***BLOCKED***
    master_port:6379
    master_link_status:up
    master_last_io_seconds_ago:0
    master_sync_in_progress:0
    db0:keys=372995,expires=372995
    db6:keys=68399,expires=68399
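One quick sanity check on the INFO above: mem_fragmentation_ratio is used_memory_rss divided by used_memory, and a value below 1.0 means the process's resident set is actually smaller than what Redis thinks it has allocated, so allocator fragmentation is not what is eating the memory here. A minimal sketch of that check, using only values copied from the INFO output in the question:

```python
# Parse a few space-separated "key:value" INFO fields (values taken
# verbatim from the question's INFO output) and recompute the ratio.
info_text = (
    "used_memory:7400076728 used_memory_rss:7186984960 "
    "mem_fragmentation_ratio:0.97 connected_clients:404"
)

stats = dict(field.split(":", 1) for field in info_text.split())
ratio = int(stats["used_memory_rss"]) / int(stats["used_memory"])
print(round(ratio, 2))  # 0.97 -- matches the reported mem_fragmentation_ratio
```

Since the ratio checks out and is below 1, the ~6.89 GB is genuinely accounted-for allocation, which points at buffers or logs rather than fragmentation.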

The problem started when we upgraded our (.NET) client code from BookSleeve 1.1.0.4 to ServiceStack v3.9.71 to prepare for the upgrade to Redis 2.8. But a lot more was upgraded at the same time. And our session state store (also Redis, but with the Harbor client) does not show the same symptoms.

Where is all this Redis memory going? How can I get rid of this usage?

Edit: I just restarted this instance, and memory returned to 350 MB and is now climbing again. The 10 largest objects are still about the same size, ranging from 100 KB up to 25 MB for the largest one. The number of keys has dropped to 270 thousand (previously 330 thousand).

1 answer

Here are some sources of "hidden" memory consumption in Redis:

  • Mark has already mentioned the buffers maintained by the master to feed a slave. If a slave lags behind its master (because it runs on a slower box, for example), some memory will be consumed on the master.

  • when slow commands are detected, Redis logs them to the SLOWLOG area, which takes up some memory. You can use the SLOWLOG LEN command to check the number of entries you have there.

  • Communication buffers can also take memory. As far as I remember, with old versions of Redis (and 2.4 is quite old, you really should upgrade), they were unbounded, meaning that if you transfer a big object at some point, the communication buffer associated with that client connection will grow and never shrink. If you have many clients that occasionally deal with large objects, this is a possible explanation. The same goes for commands that fetch a very large amount of data from Redis in one shot: for example, a simple KEYS * command run against a Redis server storing millions of keys will consume a significant amount of memory.
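To get a feel for the KEYS * case, here is a back-of-the-envelope sketch. The key count and average key length below are hypothetical, and the ~10 bytes of per-key framing is an approximation of the RESP bulk-string envelope ("$<len>\r\n<key>\r\n"), not an exact figure:

```python
# Rough estimate of the reply size `KEYS *` would pin in one client's
# output buffer on a large keyspace. All inputs are assumptions.
n_keys = 5_000_000   # hypothetical keyspace size
avg_key_len = 40     # hypothetical average key length, bytes
framing = 10         # approximate per-key RESP protocol overhead, bytes

reply_bytes = n_keys * (avg_key_len + framing)
print(f"~{reply_bytes / 1024**2:.0f} MB held in that client's output buffer")
```

A quarter of a gigabyte from a single command on a single connection, and on an unbounded-buffer Redis 2.4 that memory never shrinks back.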

You mentioned that you have objects of up to 25 MB. You have 404 client connections; if each of them happens to fetch such an object at the same point in time, that alone will consume about 10 GB of memory.
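The arithmetic behind that worst case, using only numbers already quoted in the question:

```python
# Worst case: every connected client pulls the largest object at once.
clients = 404   # connected_clients from the INFO output
obj_mb = 25     # largest object size reported in the question, MB

total_gb = clients * obj_mb / 1024
print(f"~{total_gb:.1f} GB of client output buffers")  # ~9.9 GB, the "10 GB" cited
```

That is close to the 6.89 GB observed, which is why unbounded client output buffers are the prime suspect here.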


Source: https://habr.com/ru/post/971045/

