Pipelining is a communication strategy at the protocol level and has nothing to do with atomicity; it is completely orthogonal to the concept of a "transaction". (For example, you can use MULTI .. EXEC over a pipelined connection.)
What is pipelining?
The simplest connector for Redis is a synchronous client using a request-response pattern: the client sends a request and then waits for the response from Redis before sending the next request.
With pipelining, a client can send requests without pausing to read the Redis response to each one. Redis is, of course, a single-threaded server and thus a natural serialization point, so the order of requests is preserved and reflected in the order of responses. This means the client can have one thread sending requests (typically by dequeuing them from a request queue) while another thread continuously processes the responses from Redis. Note that you can certainly still use pipelining with a single-threaded client, but you lose some of the efficiency. The dual-threaded model allows full use of the local processor and of network bandwidth (e.g., saturating the link).
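The dual-threaded model can be sketched as follows. This is a minimal illustration, not JRedis code: the socket and the server are simulated with in-process queues, and the "server" simply echoes each command prefixed with `+OK`, standing in for Redis answering in request order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

public class PipelineSketch {
    // One loop sends all commands without pausing; a listener thread
    // consumes responses concurrently. The "server" is simulated in-process.
    static List<String> pipeline(String... commands) throws InterruptedException {
        BlockingQueue<String> wireIn = new LinkedBlockingQueue<>();   // client -> "Redis"
        BlockingQueue<String> wireOut = new LinkedBlockingQueue<>();  // "Redis" -> client

        // Stand-in for Redis: single-threaded, hence a natural serialization
        // point -- responses necessarily come back in request order.
        daemon(() -> { while (true) wireOut.put("+OK " + wireIn.take()); });

        // Response listener thread: drains responses as they arrive.
        List<String> responses = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(commands.length);
        daemon(() -> {
            while (true) {
                String r = wireOut.take();
                synchronized (responses) { responses.add(r); }
                done.countDown();
            }
        });

        // Sending loop: requests go out back-to-back, never waiting for a reply.
        for (String c : commands) wireIn.put(c);

        done.await();
        return responses;
    }

    interface Task { void run() throws InterruptedException; }
    static void daemon(Task t) {
        Thread th = new Thread(() -> {
            try { t.run(); } catch (InterruptedException e) { /* shutdown */ }
        });
        th.setDaemon(true);
        th.start();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pipeline("SET key1 v1", "GET key1", "PING"));
        // [+OK SET key1 v1, +OK GET key1, +OK PING]
    }
}
```

The point of the sketch is only the shape of the concurrency: the sender never blocks on a reply, yet the responses come back in exactly the order the requests went out.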
If you are still following, you should be asking yourself: so how do requests and responses get matched up on the client side? Good question! There are various ways to approach this. In JRedis, I wrap requests in a (Java) Future object to handle the asynchrony of request/response processing. Each time a request is sent, its corresponding Future is wrapped in a pending-response object and placed on a queue. The response listener simply dequeues one element at a time, parses the response (stream), and sets the Future's result.
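The matching scheme can be sketched like this. This is a hypothetical illustration, not the actual JRedis API: it uses `CompletableFuture` as a stand-in for the Future wrapper, and the wire plus the server's `+OK` echo are simulated with an in-process queue. The key invariant is that each incoming response is paired with the oldest pending future, which is safe because Redis answers in request order.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

public class PipelinedClient {
    private final BlockingQueue<CompletableFuture<String>> pending = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> wire = new LinkedBlockingQueue<>(); // simulated connection

    public PipelinedClient() {
        // Response listener: pairs each response with the oldest pending future.
        Thread listener = new Thread(() -> {
            try {
                while (true) {
                    String response = wire.take();      // parse next response off the wire
                    pending.take().complete(response);  // it belongs to the oldest request
                }
            } catch (InterruptedException e) { /* shutdown */ }
        });
        listener.setDaemon(true);
        listener.start();
    }

    // Sends without waiting; the future resolves when the response arrives.
    public CompletableFuture<String> send(String command) {
        CompletableFuture<String> f = new CompletableFuture<>();
        pending.add(f);             // enqueue BEFORE the request hits the wire
        wire.add("+OK " + command); // stand-in for a socket write + server echo
        return f;
    }

    public static void main(String[] args) throws Exception {
        PipelinedClient client = new PipelinedClient();
        CompletableFuture<String> a = client.send("SET k1 v1"); // fire, and ...
        CompletableFuture<String> b = client.send("GET k1");    // ... keep firing
        System.out.println(a.get()); // +OK SET k1 v1
        System.out.println(b.get()); // +OK GET k1
    }
}
```

Note that the future is enqueued before the request is written: if the order were reversed, a fast response could arrive with no pending future to complete.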
The end user of the client can then be presented with either a synchronous or an asynchronous interface. If the interface is synchronous, the implementation must, of course, block on the Future's result.
If you have followed so far, it should be clear that a single-threaded application using synchronous semantics over a pipelined connection defeats the whole purpose of pipelining (since the application blocks on each response instead of keeping busy issuing further requests). But if the application is multithreaded, a synchronous interface to the pipeline lets N application threads share a single connection. (So this is also an implementation strategy for providing a thread-safe connection.)
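A minimal sketch of that thread-safe-connection strategy, under the same simulated-wire assumptions as before (the class and method names are illustrative): each application thread blocks only on its own request's future, so while one thread waits, the others keep the shared connection busy. The only synchronization needed is making enqueue-future-then-write atomic, so concurrent callers cannot pair a future with someone else's request.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

public class SharedConnection {
    private final BlockingQueue<CompletableFuture<String>> pending = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> wire = new LinkedBlockingQueue<>(); // simulated socket

    public SharedConnection() {
        // Listener pairs each response with the oldest pending future.
        Thread listener = new Thread(() -> {
            try { while (true) pending.take().complete(wire.take()); }
            catch (InterruptedException e) { /* shutdown */ }
        });
        listener.setDaemon(true);
        listener.start();
    }

    // Synchronous call: enqueue-and-write is atomic so that concurrent
    // callers cannot interleave their futures and requests.
    public String request(String command) throws Exception {
        CompletableFuture<String> f = new CompletableFuture<>();
        synchronized (this) {
            pending.add(f);
            wire.add("+OK " + command); // stand-in for socket write + server reply
        }
        return f.get(); // only THIS thread blocks; others keep the pipe busy
    }

    public static void main(String[] args) throws Exception {
        SharedConnection conn = new SharedConnection();
        Runnable worker = () -> {
            try {
                System.out.println(conn.request("GET " + Thread.currentThread().getName()));
            } catch (Exception e) { throw new RuntimeException(e); }
        };
        Thread t1 = new Thread(worker, "k1"), t2 = new Thread(worker, "k2");
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```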
If the interface to the pipeline is asynchronous, then even a single-threaded client application benefits: throughput increases by at least an order of magnitude.
(A caveat on pipelining: writing a fault-tolerant pipelined client is not trivial.)
Ideally I would use a diagram here, but instead, note what happens at the end of this clip: http://www.youtube.com/watch?v=NeK5ZjtpO-M