The best way to store a large string in Redis ... Getting mixed signals

I am storing strings on the order of 150 MB. That is fine - well under the 512 MB maximum string size in Redis - but I see many different, conflicting opinions on the approach I should take, and no clear path.

On the one hand, I have seen advice to use a hash with small chunks of data (roughly as sketched below); on the other hand, I have been told that this approach breaks down and that storing the entire string is most efficient.
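For reference, the hash-of-chunks idea I keep running into would look roughly like this; the key name, the numbered fields, and the chunk size are placeholder guesses of my own, not anything from the advice I read:

    import redis

    r = redis.Redis()

    CHUNK_SIZE = 1024 * 1024  # 1 MB per field; an arbitrary guess on my part

    def store_in_hash(r, key, data, chunk_size=CHUNK_SIZE):
        # Spread the big string across hash fields "0", "1", "2", ...
        mapping = {
            str(i): data[off:off + chunk_size]
            for i, off in enumerate(range(0, len(data), chunk_size))
        }
        r.delete(key)  # drop any previous value first
        r.hset(key, mapping=mapping)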

Similarly, I have seen that I can either write the value as one massive string or build it up through a bunch of append operations, with conflicting claims about which of the two is more efficient.

I am reading the data from another source, so I would rather not buffer it all into a local file just so I can hand over the whole string at once. Ideally I could chunk the input and push it to Redis through a series of appends, as sketched below. But if appends are inefficient in Redis, feeding the data in one piece at a time could take forever. I would simply try it, but I lack the experience to know whether it might be slow for reasons I cannot foresee.
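Concretely, the append-based feed I have in mind would look something like this; again, the key name, the chunk size, and the `source` file-like object are placeholders of my own:

    import redis

    r = redis.Redis()

    CHUNK_SIZE = 1024 * 1024  # 1 MB per APPEND; no idea if this is a sane size

    def feed_string(r, key, source, chunk_size=CHUNK_SIZE):
        # Stream from a file-like object into a single Redis string via
        # APPEND, so the full 150 MB never sits in local memory or on disk.
        r.delete(key)  # start from an empty value
        while True:
            chunk = source.read(chunk_size)
            if not chunk:
                break
            r.append(key, chunk)  # APPEND grows the string in place

My worry is precisely whether Redis copes well with ~150 round trips of APPEND like this, or whether each APPEND ends up reallocating the whole value.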

Relatedly, there is a lot of talk about "small" strings and "large" strings, but it is not clear what Redis considers an optimally "small" string. 512K? 1M? 8M?

Does anyone have the final word on this?

I would love to be able to just hand a file-like object or a generator to redis-py, but that is more language-specific than I meant this question to be, and it is most likely impossible at the protocol level anyway: some internal chunking of the data is simply required, and it is probably best to leave that to the developer.

+4
1 answer

One option:

Store the data as a list of chunks

  • store the data in a list - this lets you keep the content as a sequence of chunks, and also destroy the entire list in a single step
  • write the data inside a pipeline context manager, to make sure that you are the only one writing at that moment (see the sketch after this list)
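A minimal sketch of the above with redis-py; the key name, the chunk size, and the `source` file-like object are assumptions for illustration:

    import redis

    r = redis.Redis()

    CHUNK_SIZE = 1024 * 1024  # 1 MB per list element; tune to taste

    def store_chunks(r, key, source, chunk_size=CHUNK_SIZE):
        # transaction=True wraps the queued commands in MULTI/EXEC, so the
        # delete and all the RPUSHes apply atomically - no other writer
        # can interleave with them.
        with r.pipeline(transaction=True) as pipe:
            pipe.delete(key)  # destroy any previous content in the same step
            while True:
                chunk = source.read(chunk_size)
                if not chunk:
                    break
                pipe.rpush(key, chunk)  # each chunk becomes one list element
            pipe.execute()

    def read_chunks(r, key, batch=64):
        # Stream the content back a batch of list elements at a time.
        pos = 0
        while True:
            chunks = r.lrange(key, pos, pos + batch - 1)
            if not chunks:
                break
            yield from chunks
            pos += batch

The trade-off is that the pipeline buffers the queued commands client-side until execute(), so the whole value passes through the client's memory once; in exchange the write goes over the wire in a single atomic batch.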


+2

Source: https://habr.com/ru/post/1545314/

