Optimal size for zlib compression?

I have seen threads about the minimum and maximum size for zlib compression, but I wanted to know what people consider the optimal size for a compressed data block to get the best speed. Is there an advantage to splitting a file into multiple blocks?

Thanks.


Splitting the data into blocks will reduce the compression ratio, since each block is compressed with a fresh, empty dictionary and matches spanning block boundaries are lost. It is also unlikely to improve speed.
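You can see the ratio loss for yourself. Here is a minimal sketch using Python's stdlib zlib module; the file name and the 64 KiB block size are just placeholders for this illustration:

```python
import zlib

# Hypothetical input file; substitute your own data.
data = open("example.bin", "rb").read()

# Compress the whole file as a single stream.
whole = len(zlib.compress(data))

# Compress independent 64 KiB blocks; each block restarts with an
# empty dictionary, so matches spanning block boundaries are lost.
block_size = 64 * 1024
chunks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
split = sum(len(zlib.compress(c)) for c in chunks)

print(f"one stream: {whole} bytes; {len(chunks)} blocks: {split} bytes")
```

On most inputs the per-block total comes out larger, and the gap grows as the block size shrinks.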

The basic idea of "splitting into small blocks" is to improve random access: say you want to read a segment of the file at position PX. You then immediately know it is stored in block BY = PX / block size, so instead of decoding the entire file you only decode that one block.
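A minimal sketch of that scheme in Python, assuming fixed-size blocks and an in-memory offset index (both my own choices for illustration, not anything prescribed by zlib):

```python
import zlib

BLOCK_SIZE = 64 * 1024  # assumed block size; tune for your access pattern

def compress_blocks(data: bytes):
    """Compress each fixed-size block independently, recording where
    each compressed block starts so it can be located later."""
    parts, offsets, pos = [], [], 0
    for i in range(0, len(data), BLOCK_SIZE):
        c = zlib.compress(data[i:i + BLOCK_SIZE])
        offsets.append(pos)
        parts.append(c)
        pos += len(c)
    return b"".join(parts), offsets

def byte_at(compressed: bytes, offsets, px: int) -> int:
    """Decode only block BY = PX // BLOCK_SIZE instead of the whole file."""
    by = px // BLOCK_SIZE
    start = offsets[by]
    end = offsets[by + 1] if by + 1 < len(offsets) else len(compressed)
    block = zlib.decompress(compressed[start:end])
    return block[px % BLOCK_SIZE]

blob, index = compress_blocks(b"x" * 1_000_000)
print(byte_at(blob, index, 123_456))  # 120 == ord("x")
```

The trade-off is exactly the one above: the smaller the blocks, the cheaper each random read, but the worse the overall compression ratio.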

That's it. If you are looking for better speed, you will have to use a different compression algorithm, such as Snappy or LZ4, which are known to compress and decompress several times faster than zlib.
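A rough way to compare them yourself, assuming the third-party lz4 package is installed (pip install lz4); the payload here is synthetic, and real numbers depend entirely on your data:

```python
import time
import zlib
import lz4.frame  # third-party package: pip install lz4

data = b"fairly repetitive example payload " * 50_000

t0 = time.perf_counter()
z = zlib.compress(data)
t1 = time.perf_counter()
l = lz4.frame.compress(data)
t2 = time.perf_counter()

print(f"zlib: {len(z)} bytes in {t1 - t0:.4f}s")
print(f"lz4:  {len(l)} bytes in {t2 - t1:.4f}s")
```

Expect lz4 to compress somewhat less tightly but run much faster; benchmark on your own data before committing to either.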

