Redis locking and write retries

We have 75 servers (and growing) that need to communicate through Redis. Ideally, all 75 servers would write to two fields in Redis using INCRBYFLOAT operations. We expect to end up with potentially millions of daily writes and billions of daily reads on these two fields. This data must be persistent.
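
For concreteness, a minimal sketch of the write pattern being described, using the redis-py client and treating the two “fields” as plain Redis keys; the key names are invented for illustration.

```python
import redis

# Hypothetical key names; the post does not say what the two fields hold.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def record(amount_a: float, amount_b: float) -> None:
    # Each of the 75 servers issues atomic float increments against the
    # same two keys; a pipeline just saves one network round trip.
    pipe = r.pipeline()
    pipe.incrbyfloat("stats:total_a", amount_a)
    pipe.incrbyfloat("stats:total_b", amount_b)
    pipe.execute()
```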

We are worried that locking in Redis may lead to repeated write retries when many servers try to increment the same field at once.

Questions:

  • Are multiple simultaneous INCRBYFLOAT operations on one field a bad idea under very heavy load?
  • Should each server instead write to its own field, with an external process that “summarizes” the individual fields into the two totals? (This introduces another point of failure; see the sketch after this list.)
  • Can these two fields still be read while writes are happening?
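
A sketch of the alternative raised in the second question, under the same assumptions as above: each server increments its own per-server key, and a separate “summarizer” process periodically folds them into the shared field. All key names are made up.

```python
import redis

r = redis.Redis(decode_responses=True)

SERVER_IDS = range(75)

def record(server_id: int, amount: float) -> None:
    # Writers never touch the shared key, so they never contend on it.
    r.incrbyfloat(f"stats:total_a:{server_id}", amount)

def summarize() -> None:
    # The external process: sum the per-server keys and publish the
    # result as the single shared field.
    total = 0.0
    for sid in SERVER_IDS:
        total += float(r.get(f"stats:total_a:{sid}") or 0.0)
    r.set("stats:total_a", total)
```
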
2 answers

Redis does not lock. It is also single-threaded, so there are no race conditions: commands execute one at a time, and reads and writes never block each other.

You can run millions of INCRBYFLOAT operations against the same key without any problem. There is no need for an external process, and reading these fields while they are being written causes no problems either.
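
A minimal demonstration of this point, assuming a local Redis and the redis-py client: ten threads hammer the same key with INCRBYFLOAT and no locking, and no increments are lost.

```python
import threading
import redis

r = redis.Redis(decode_responses=True)
r.set("demo:counter", 0)

def worker(iterations: int) -> None:
    for _ in range(iterations):
        r.incrbyfloat("demo:counter", 0.5)  # atomic inside Redis

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(r.get("demo:counter"))  # exactly "5000": 10 * 1000 * 0.5
```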

That said, “millions of updates to two keys” sounds odd. If you can explain your use case, there may well be a better way to model it in Redis.


Since Redis is single-threaded, you probably want to use master-slave replication to separate writes from reads, because, yes, writes tend to block reads.
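
A sketch of the read/write split this answer suggests; the hostnames are placeholders, and the replica is assumed to be configured with `replicaof redis-master 6379`.

```python
import redis

master = redis.Redis(host="redis-master", port=6379, decode_responses=True)
replica = redis.Redis(host="redis-replica", port=6379, decode_responses=True)

master.incrbyfloat("stats:total_a", 1.25)  # all writes go to the master
print(replica.get("stats:total_a"))        # reads are served by the replica
```

Note that Redis replication is asynchronous, so a read from the replica can briefly lag the latest write on the master.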

Alternatively, you can use Apache ZooKeeper for this: it provides reliable cluster coordination without a single point of failure (such as a lone Redis instance).
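
A sketch of the kind of ZooKeeper-based coordination this answer alludes to, using the kazoo client; the ensemble addresses, lock path, and identifier are all hypothetical.

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

lock = zk.Lock("/locks/stats-writer", "server-42")
with lock:
    # The critical section: only one client at a time holds the lock,
    # e.g. to run the "summarize" step from the question.
    pass

zk.stop()
```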


Source: https://habr.com/ru/post/916078/

