The most efficient algorithm for compressing a file folder

I have a folder of files that I want to compress as efficiently as possible, without loss.

The files are very similar to each other: the main payload is exactly the same in every file, but the variable-sized headers and footers may differ slightly between files.

I need to access any individual file very quickly, and also to add new files very quickly (without unpacking and recompressing the entire archive just to add one file). Removing files is rare.

Algorithmic suggestions are welcome, although I would prefer to just use an existing library or program for this task.

+3
3 answers

Since the payload is identical in every file, store it once in its own file and keep only each file's header and footer. Say you have 3 files:

1.dat
2.dat
3.dat

Split them into:

payload.dat
1.header.dat
1.footer.dat
2.header.dat
2.footer.dat
3.header.dat
3.footer.dat

Then compress the whole set with Zip or 7zip; the small header and footer files compress cheaply, and the large payload is stored only once. With 7zip you can also add and extract individual files without recompressing everything, which matches your requirements.
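A minimal sketch of the splitting step in Python, assuming the shared payload bytes are already known (e.g. extracted from one reference file); the split_files helper and the *.dat naming are illustrative, not from any particular library:

from pathlib import Path

def split_files(folder, payload):
    # Write the shared payload once, then reduce every file to its
    # header and footer around that payload.
    folder = Path(folder)
    (folder / "payload.dat").write_bytes(payload)
    for f in sorted(folder.glob("*.dat")):
        if f.name == "payload.dat":
            continue
        data = f.read_bytes()
        pos = data.find(payload)
        if pos < 0:
            continue  # no shared payload here; leave the file untouched
        (folder / (f.stem + ".header.dat")).write_bytes(data[:pos])
        (folder / (f.stem + ".footer.dat")).write_bytes(data[pos + len(payload):])
        f.unlink()  # original can be rebuilt as header + payload + footer

Rebuilding any one file is then a cheap concatenation of three parts, which keeps per-file access fast.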

+3

Try 7zip with a solid archive: in solid mode similar files are compressed together, so the shared payload is effectively stored only once.
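If 7zip's solid mode is what this answer means, creating such an archive from a script could look like the sketch below; it assumes the 7z command-line tool is installed and on PATH:

import subprocess

# -ms=on compresses the files as one solid block, so data shared
# between files is only encoded once.
subprocess.run(["7z", "a", "-ms=on", "archive.7z", "folder/"], check=True)

One trade-off to keep in mind: a fully solid archive makes extracting a single file slower, since everything before it in the block must be decompressed first; a bounded solid block size (e.g. -ms=64m) recovers some access speed at a small cost in ratio.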

+1

With such redundant data, most standard compression programs should give very good results. DO NOT use the built-in Windows .zip tool, because it compresses each file separately and therefore stores the shared payload over and over. 7zip, or Gzip over a tar of the folder, work well because they compress the data as one stream.
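For the tar-plus-gzip route, Python's standard library can produce a single compressed stream over the whole folder, which is what lets cross-file redundancy be exploited; a minimal sketch:

import tarfile

# One gzip stream over the whole folder: the repeated payload is seen
# by the compressor many times (unlike per-entry zip compression).
with tarfile.open("folder.tar.gz", "w:gz") as tar:
    tar.add("folder", arcname="folder")

One caveat: gzip's DEFLATE window is only 32 KB, so it only exploits repeats that occur close together. If the shared payload is large, 7zip's LZMA with a dictionary bigger than the payload is the safer choice.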

0

Source: https://habr.com/ru/post/1793747/

