The java.nio package provides a nice way of handling zip files by treating them as file systems. This allows us to process the contents of a zip file like regular files. Thus, zipping an entire folder can be achieved simply by using Files.copy to copy all the files into the zip file. Since the subfolders must be copied as well, we need a visitor:
private static class CopyFileVisitor extends SimpleFileVisitor<Path> {
    private final Path targetPath;
    private Path sourcePath = null;

    public CopyFileVisitor(Path targetPath) {
        this.targetPath = targetPath;
    }

    @Override
    public FileVisitResult preVisitDirectory(final Path dir, final BasicFileAttributes attrs) throws IOException {
        if (sourcePath == null) {
            sourcePath = dir;
        } else {
            Files.createDirectories(targetPath.resolve(sourcePath.relativize(dir).toString()));
        }
        return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult visitFile(final Path file, final BasicFileAttributes attrs) throws IOException {
        Files.copy(file, targetPath.resolve(sourcePath.relativize(file).toString()),
                StandardCopyOption.REPLACE_EXISTING);
        return FileVisitResult.CONTINUE;
    }
}
This is a simple "copy recursively" visitor; it is used to copy a directory tree recursively. However, with the help of a ZipFileSystem we can also use it to copy a directory into a zip file, for example:
public static void zipFolder(Path zipFile, Path sourceDir) throws ZipException, IOException {
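    // Sketch of a typical body (an assumption, not necessarily the asker's exact code):
    // open the zip file as a FileSystem with create=true and walk the source tree
    // into its root using the visitor above.
    // Requires java.net.URI, java.nio.file.*, java.util.HashMap, java.util.Map.
    Map<String, String> env = new HashMap<>();
    env.put("create", "true");
    URI uri = URI.create("jar:" + zipFile.toUri());
    try (FileSystem zipFs = FileSystems.newFileSystem(uri, env)) {
        // The zip file system's root directory serves as the copy target.
        Path root = zipFs.getRootDirectories().iterator().next();
        Files.walkFileTree(sourceDir, new CopyFileVisitor(root));
    }
}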
This is what I call the elegant way of zipping an entire folder. However, when using this method on a huge folder (about 3 GB) I get an OutOfMemoryError (heap space). When using an ordinary zip handling library, this error does not occur. Thus, it seems that the way the ZipFileSystem handles the copy is very memory-inefficient: too much of the data to be written is kept in memory, so the OutOfMemoryError occurs.
Why is this the case? Is ZipFileSystem generally inefficient (in terms of memory consumption), or am I doing something wrong here?
java zip nio
gexicide May 25 '14 at 18:42