How to implement Redis pipelined queries using Booksleeve?

I am a bit confused about the difference between a Redis transaction and a pipeline, and ultimately how to use pipelines with BookSleeve. I see that BookSleeve supports Redis transactions (MULTI/EXEC), but there is no mention of a pipelining feature in its API or tests. However, other implementations make it clear that there is a difference between pipelines and transactions, namely atomicity, as shown in the redis-ruby example below, yet elsewhere the two terms seem to be used interchangeably.

redis-ruby implementation:

 r.pipelined {
   # these commands will be pipelined
   r.get("insensitive_key")
 }

 r.multi {
   # these commands will be executed atomically
   r.set("sensitive_key", "value")
 }

I would just use MULTI/EXEC, but they seem to block all other users until the transaction completes (which in my case is unnecessary), so I'm worried about their performance. Has anyone used pipelines with BookSleeve, or any ideas on how to implement them?

3 answers

In BookSleeve, everything is always pipelined. There are no synchronous operations. None. Every operation therefore returns some form of Task (it may be a plain Task, or a Task<string>, Task<long>, etc.) which will have a value at some point in the future (that is, when Redis answers). You can use Wait in your code to perform a synchronous wait, or ContinueWith / await (a C# 5 language feature) to attach an asynchronous callback.
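To make that Task-shaped API concrete, here is a minimal Python analogue (a sketch only: Python futures stand in for .NET Tasks, and `get` is a hypothetical command, not the real BookSleeve API):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a BookSleeve-style connection: every command returns
# immediately with a future; the value arrives when "Redis" replies.
executor = ThreadPoolExecutor(max_workers=1)

def get(key):
    # hypothetical command; a real client would write to a socket here
    return executor.submit(lambda: f"value-of-{key}")

fut = get("foo")

# Option 1: synchronous wait (the analogue of BookSleeve's Wait)
print(fut.result())                     # prints "value-of-foo"

# Option 2: asynchronous continuation (the analogue of ContinueWith / await)
get("bar").add_done_callback(lambda f: print(f.result()))

executor.shutdown(wait=True)            # let the pending callback finish
```

The point is that issuing a command never blocks; blocking (if any) is a choice the caller makes when consuming the future.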

Transactions are no different; they are pipelined too. The only subtle change with transactions is that they are additionally buffered at the call site until the transaction is complete (since this is a multiplexer, we cannot start writing transaction-related messages to the pipeline until we have a complete unit of work, as that would adversely affect other callers on the same multiplexer).

So: the reason there is no explicit .pipelined is that everything is already pipelined and asynchronous.


Pipelining is a communication strategy at the protocol level and has nothing to do with atomicity. It is completely orthogonal to the concept of a "transaction". (For example, you can issue MULTI .. EXEC over a pipelined connection.)

What is pipelining?

The simplest connector to Redis is a synchronous client with request-response interaction: the client sends a request and then waits for the response from Redis before sending the next request.

With pipelining, the client can send requests without pausing to read the Redis response to each one. Redis is, of course, a single-threaded server and a natural serialization point, so request order is preserved and reflected in response order. This means the client can have one thread sending requests (typically by dequeuing them from a request queue) while another thread continuously processes the responses from Redis. Note that you can, of course, still use pipelining with a single-threaded client, but you lose some of the efficiency. The dual-threaded model allows full utilization of the local CPU and the network bandwidth (e.g. saturation).

If you are still following this, you should ask yourself: well, how do requests and responses get matched up on the client side? Good question! There are various ways to approach this. In JRedis, I wrap requests in (Java) Future objects to handle the asynchrony of request/response processing. Each time a request is sent, the corresponding Future object is wrapped in a pending-response object and placed on a queue. The response listener simply dequeues one item at a time, parses the response (stream), and completes the Future.
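That pending-response queue can be sketched in a few lines of Python (a conceptual stand-in: the in-process "server" below is a fake echo loop, not Redis, and all names are illustrative):

```python
import queue
import threading
from concurrent.futures import Future

class PipelinedClient:
    """Toy pipelined client: requests go out without waiting for replies.
    Since replies arrive in request order, a FIFO queue of pending
    Futures is enough to match each reply to its caller."""

    def __init__(self):
        self._pending = queue.Queue()    # Futures awaiting a reply, FIFO
        self._requests = queue.Queue()   # outbound requests
        threading.Thread(target=self._server_loop, daemon=True).start()

    def send(self, command):
        fut = Future()
        self._pending.put(fut)           # enqueue BEFORE the request goes out
        self._requests.put(command)      # fire and forget: no blocking here
        return fut                       # caller decides when (or whether) to wait

    def _server_loop(self):
        # Stand-in for the response-listener thread: one pending Future
        # is completed per reply, in strict arrival order.
        while True:
            command = self._requests.get()
            reply = f"OK:{command}"      # fake "Redis" reply
            self._pending.get().set_result(reply)

client = PipelinedClient()
futures = [client.send(f"GET key{i}") for i in range(3)]  # no waiting between sends
results = [f.result() for f in futures]                   # block only at the end
print(results)   # prints ['OK:GET key0', 'OK:GET key1', 'OK:GET key2']
```

Note that correctness rests entirely on the ordering guarantee: because Redis answers in request order, no request IDs are needed on the wire.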

Now, the end user of the client can be exposed to either a synchronous or an asynchronous interface. If the interface is synchronous, the implementation must, of course, block on the Future.

If you have followed so far, it should be clear that a single-threaded application using synchronous semantics over a pipelined connection defeats the whole purpose of pipelining (since the application blocks on each response instead of busily submitting further requests). But if the application is multithreaded, a synchronous interface over the pipeline lets one connection serve N application threads. (So here is an implementation strategy to help build a thread-safe connection.)
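One way to sketch that strategy (again a hypothetical in-process stand-in, not a real client): N threads share one pipelined connection through a synchronous facade, and a lock keeps each (future, request) pair atomic so replies cannot be matched to the wrong caller.

```python
import queue
import threading
from concurrent.futures import Future

# One shared "connection": a request queue drained by a single loop that,
# like Redis, answers strictly in request order.
pending, requests = queue.Queue(), queue.Queue()

def connection_loop():
    while True:
        cmd = requests.get()
        if cmd is None:
            break
        pending.get().set_result(f"OK:{cmd}")   # replies in send order

threading.Thread(target=connection_loop, daemon=True).start()

send_lock = threading.Lock()

def sync_get(key):
    # Synchronous facade over the pipelined connection: enqueue, then
    # block only this caller. The lock makes the (future, request)
    # pairing atomic -- the thread-safety point made above.
    fut = Future()
    with send_lock:
        pending.put(fut)
        requests.put(f"GET {key}")
    return fut.result()

results = {}

def worker(i):
    results[i] = sync_get(f"key{i}")    # each thread blocks on its own reply

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[0])   # prints "OK:GET key0"
```

While any one thread is blocked on its reply, the connection keeps accepting requests from the others, which is exactly why one pipelined connection can serve many threads.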

If the interface to the pipeline is asynchronous, then even a single-threaded client application benefits. Throughput increases by at least an order of magnitude.

(Caveat with pipelining: it is not trivial to write a fault-tolerant pipelining client.)

Ideally I would use a diagram here, but instead, note what happens at the end of this clip: http://www.youtube.com/watch?v=NeK5ZjtpO-M


Here is a link to the Redis Transactions documentation.

For BookSleeve, refer to this post from Marc:

"CreateTransaction() creates a staging area for building commands (using exactly the same API) and capturing the future results. Then, when Execute() is called, the buffered commands are assembled into a MULTI/EXEC unit and sent down (the multiplexer will send them all together, obviously)."

If you create your commands inside a transaction, they will automatically be pipelined.
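A rough Python sketch of that staging-area idea (illustrative names only, not BookSleeve's real API): commands are merely buffered until execute(), then shipped as one contiguous MULTI/EXEC block.

```python
from concurrent.futures import Future

sent = []   # stands in for the wire; a real client writes to a socket

def send_block(commands):
    # the whole block goes down together, so other users of the same
    # multiplexed connection cannot interleave commands inside it
    for cmd, fut in commands:
        sent.append(cmd)
        if fut is not None:
            fut.set_result(f"OK:{cmd}")

class Transaction:
    """Staging area in the spirit of CreateTransaction(): same command
    API as the connection, but nothing is sent until execute()."""

    def __init__(self, send):
        self._send = send
        self._buffer = []

    def get(self, key):
        fut = Future()
        self._buffer.append((f"GET {key}", fut))   # buffered, not sent
        return fut                                 # the captured future result

    def execute(self):
        self._send([("MULTI", None)] + self._buffer + [("EXEC", None)])

tx = Transaction(send_block)
f1 = tx.get("a")
f2 = tx.get("b")
assert not f1.done()        # nothing on the wire yet: only buffered
tx.execute()
print(sent)                 # prints ['MULTI', 'GET a', 'GET b', 'EXEC']
print(f1.result())          # prints "OK:GET a"
```

The buffering is what makes the transaction safe on a multiplexed connection: the MULTI/EXEC block is emitted as a single contiguous unit.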


Source: https://habr.com/ru/post/903138/

