Redis sharding, pipelining and round trips

Suppose your web application needs to make several Redis calls to render a page, for example to fetch a bunch of user hashes. To speed this up, you can wrap your Redis commands in a MULTI/EXEC block and use pipelining to avoid extra round trips. But you also want to shard your data, because you have a lot of it and/or you want to distribute the write load. Then pipelining no longer works as-is, because different keys can live on different nodes, unless you have a clear picture of your application's data layout and shard by role rather than by a hash function. So what are the best practices for sharding data across servers without an undue performance hit from having to contact many servers to complete one "conceptually single" job? I suspect the answer depends on the web application being built, and I will eventually run some benchmarks, but it would be useful to hear how others have handled the trade-offs I described.
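To make the problem concrete, here is a minimal Ruby sketch of hash-based sharding. The node addresses and the `user:N` key scheme are assumptions for illustration; the point is that CRC32-modulo placement scatters related keys across nodes, which is exactly why one MULTI/EXEC pipeline cannot cover them all.

```ruby
require 'zlib'

# Hypothetical shard addresses (assumption, not from the original post).
NODES = ["redis-a:6379", "redis-b:6379", "redis-c:6379"]

# Map a key to a shard deterministically: CRC32 of the key, modulo the
# number of nodes. Two different user keys will generally land on
# different nodes.
def node_for(key)
  NODES[Zlib.crc32(key) % NODES.size]
end

keys = (1..6).map { |i| "user:#{i}" }

# Keys needed for one page, bucketed by the node that owns them.
grouped = keys.group_by { |k| node_for(k) }
```

With more than one bucket in `grouped`, a single pipeline to a single node can no longer fetch the whole page's data.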

1 answer

MULTI / EXEC and pipelining are two different things. You can do MULTI / EXEC without pipelining and vice versa.

If you want to shard and pipeline at the same time, you need to group the operations per Redis instance, and then use pipelining on each instance.
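A sketch of that grouping step in Ruby, assuming CRC32-modulo placement. With the real redis-rb gem each bucket would be flushed via `client.pipelined { |p| ... }`; here a small stub stands in for a connection so the sketch runs without a server (the stub, the node names, and the key scheme are assumptions).

```ruby
require 'zlib'

# Stub standing in for a pipelined redis-rb connection: it just records
# the commands that would be sent in one batch (assumption for the sketch).
class StubPipeline
  attr_reader :commands
  def initialize
    @commands = []
  end
  def hgetall(key)
    @commands << [:hgetall, key]
  end
end

clients = {
  "redis-a:6379" => StubPipeline.new,
  "redis-b:6379" => StubPipeline.new,
}

# Same deterministic key -> node mapping as before.
def node_for(key, nodes)
  nodes[Zlib.crc32(key) % nodes.size]
end

nodes = clients.keys
keys = (1..8).map { |i| "user:#{i}" }

# One pipeline per instance: each client receives only its own keys, so
# every instance is hit with a single batched round trip.
keys.group_by { |k| node_for(k, nodes) }.each do |node, ks|
  pipe = clients[node] # with redis-rb: clients[node].pipelined { |p| ... }
  ks.each { |k| pipe.hgetall(k) }
end
```

The same grouping works for any command mix, not just HGETALL, as long as each command's key routes it to one instance.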

Here is a simple Ruby usage example: https://gist.github.com/2587593

One way to further improve performance is to parallelize the traffic to the Redis instances once the operations have been grouped (i.e. after grouping the operations, you send the pipelines to all instances in parallel, then wait for the replies from all of them).
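In Ruby, the parallel fan-out can be sketched with one thread per instance; `fetch_from` below is a placeholder (assumption) for a real pipelined call against one node, and the grouped keys are hard-coded for illustration.

```ruby
# Keys already grouped per instance, as in the previous step
# (hard-coded here for the sketch).
grouped = {
  "redis-a:6379" => ["user:1", "user:3"],
  "redis-b:6379" => ["user:2", "user:4"],
}

# Placeholder for a real pipelined fetch against one instance, e.g.
# client_for(node).pipelined { |p| keys.each { |k| p.hgetall(k) } }.
def fetch_from(node, keys)
  keys.map { |k| [k, "payload-from-#{node}"] }.to_h
end

# One thread per instance: the round trips to different nodes overlap
# instead of running one after another.
threads = grouped.map do |node, keys|
  Thread.new { fetch_from(node, keys) }
end

# Thread#value joins the thread and returns its result; merge the
# per-instance hashes back into one answer for the page.
results = threads.map(&:value).reduce({}, :merge)
```

Plain threads are enough here because the work is I/O-bound; the truly non-blocking single-threaded variant the answer mentions needs an async client instead.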

This is a bit more complicated because it requires an asynchronous, non-blocking client. For maximum performance, C/C++ should be used on the client side; this can easily be implemented with hiredis plus the event loop of your choice.


Source: https://habr.com/ru/post/915280/
