I have strings on the order of 150 MB to store. That is within the maximum string size Redis allows (512 MB), but I see many different, conflicting opinions about the approach I should take, and no clear path.
On the one hand, I have seen advice to use a hash holding many small chunks of the data; on the other hand, I have been told that this approach breaks down and that storing the entire string in one key is most efficient.
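To make sure I understand that first suggestion, this is roughly what I picture the hash approach looking like (a rough sketch with redis-py; the chunk size and field naming are placeholders I made up, and the right chunk size is exactly what I am unsure about):

```python
import redis

r = redis.Redis()  # assumes a local Redis instance

CHUNK_SIZE = 1024 * 1024  # 1 MB per field -- purely a guess

def store_as_hash(key, data):
    """Spread one large value across many small fields of a single hash."""
    n_chunks = (len(data) + CHUNK_SIZE - 1) // CHUNK_SIZE
    for i in range(n_chunks):
        r.hset(key, f"chunk:{i}", data[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE])
    r.hset(key, "n_chunks", n_chunks)

def load_from_hash(key):
    """Reassemble the value by reading the fields back in order."""
    n_chunks = int(r.hget(key, "n_chunks"))
    return b"".join(r.hget(key, f"chunk:{i}") for i in range(n_chunks))
```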
Similarly, I have seen that I can either write one massive string in a single operation or build it up through a series of append operations. The latter seems like it would be more efficient than the former.
I am reading the data from other sources, so I would rather not spool it to a local file just so I can send the whole string in one go. Obviously it would be nicer if I could break the input into pieces and pass them to Redis via appends; but if appends are inefficient in Redis, it could take forever to feed the data in one piece at a time. I would just try it, but I lack the experience to judge, and it could be slow for any number of reasons.
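Concretely, the append route I have in mind looks something like this (a sketch with redis-py; `chunks` stands in for whatever iterator my upstream source gives me), as opposed to buffering everything locally and doing a single `r.set(key, whole_value)`:

```python
import redis

r = redis.Redis()  # assumes a local Redis instance

def stream_to_redis(key, chunks):
    """Feed an iterable of byte chunks into one Redis string via APPEND."""
    r.delete(key)  # start clean so reruns don't concatenate onto old data
    length = 0
    for chunk in chunks:
        length = r.append(key, chunk)  # APPEND returns the string length so far
    return length
```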
There is a lot of talk about "small" strings and "large" strings, but it is not clear what Redis actually considers a "small" string for optimal performance: 512 KB, 1 MB, 8 MB?
Does anyone have any definitive advice?
I would love it if I could just hand redis-py a file-like object or generator, but that is more language-specific than I intended this question to be, and it is probably impossible at the protocol level anyway: the data would have to be chunked internally regardless, and it is probably best to leave that to the developer.
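In other words, if the chunking has to live on my side anyway, I suppose I would end up writing a thin wrapper like this myself (a hypothetical helper, not anything redis-py provides):

```python
def store_filelike(r, key, fileobj, chunk_size=1024 * 1024):
    """Read a file-like object piece by piece and APPEND each piece to one key."""
    r.delete(key)
    while True:
        piece = fileobj.read(chunk_size)
        if not piece:
            break
        r.append(key, piece)
```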