Since Java 7 (released in July 2011), the best way is Files.copy() from java.nio.file, which copies all bytes from an input stream to a file.
So you don't need a third-party library, and you don't have to roll your own byte-copying loop. Below are two examples, both of which use the input stream from S3Object.getObjectContent():
InputStream in = s3Client.getObject("bucketName", "key").getObjectContent();
1) Writing to a new file at the path you specify (see the note after these two examples if the target directory might not exist yet):
Files.copy(in, Paths.get("/my/path/file.jpg"));
2) Writing to a temp file in the system's default temp directory:
File tmp = File.createTempFile("s3test", "");
Files.copy(in, tmp.toPath(), StandardCopyOption.REPLACE_EXISTING);
(Without the REPLACE_EXISTING option you would get a FileAlreadyExistsException, since createTempFile has already created an empty file at that path.)
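Regarding the first variant: Files.copy() fails if the target directory doesn't exist yet, so you may want to create it first. A small sketch, reusing the example path from above:
Path target = Paths.get("/my/path/file.jpg");
Files.createDirectories(target.getParent()); // a no-op if the directory already exists
Files.copy(in, target);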
Also note that the Javadoc of getObjectContent() urges you to close the input stream:
If you retrieve an S3Object, you should close this input stream as soon as possible, because the object contents aren't buffered in memory and stream directly from Amazon S3. Further, failure to close this stream can cause the request pool to become blocked.
So it seems safest to wrap everything in try-catch-finally and call in.close() in the finally block.
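Putting it together, here is a sketch of the temp-file variant with the stream closed in a finally block (shown as plain try/finally; add a catch block if you want to handle the IOException locally instead of letting it propagate):
InputStream in = s3Client.getObject("bucketName", "key").getObjectContent();
try {
    File tmp = File.createTempFile("s3test", "");
    Files.copy(in, tmp.toPath(), StandardCopyOption.REPLACE_EXISTING);
} finally {
    in.close();
}
Since the stream is a Closeable, a try-with-resources statement on in would do the same with less ceremony.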
The above assumes that you are using the official Amazon SDK (aws-java-sdk-s3).
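If you don't have an AmazonS3 client yet, a minimal sketch of creating one with the v1 SDK's AmazonS3ClientBuilder (the region is just an example; credentials come from the default provider chain):
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .build();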