I am using boto3 1.4.4 and uploading large files (usually hundreds of megabytes to several gigabytes) to S3 with S3.Client.upload_file.
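For context, here is a minimal sketch of how I call upload_file; the bucket, key, and file names are placeholders, and the explicit TransferConfig is only there to show that files of this size always go through the multipart path that appears in the traceback below:

    import boto3
    from boto3.s3.transfer import TransferConfig

    client = boto3.client('s3')

    # The default multipart threshold is 8 MB, so multi-gigabyte files
    # always take the _multipart_upload code path shown in the traceback.
    config = TransferConfig(multipart_threshold=8 * 1024 * 1024)

    client.upload_file('/data/dump.tar.gz',      # placeholder local file
                       'my-bucket',              # placeholder bucket
                       'backups/dump.tar.gz',    # placeholder key
                       Config=config)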
The boto3 docs state that botocore handles retries for streaming uploads:
- Retries. While botocore handles retries for streaming uploads, it is not possible for it to handle retries for streaming downloads. This module handles retries for both cases so you don't need to implement any retry logic yourself.
However, I have looked through the source code and I cannot find any evidence that retries are actually performed for uploads; conversely, the retry logic for downloads (via download_file or download_fileobj) is explicit and obvious, so I am unable to tell whether or not retry attempts are made on upload.
The following is a partial stack trace from a failed upload, which is what prompted the question in the first place:
File "/usr/local/lib/python2.7/dist-packages/boto3/s3transfer/__init__.py", line 642, in upload_file
self._multipart_upload (filename, bucket, key, callback, extra_args)
File "/usr/local/lib/python2.7/dist-packages/boto3/s3transfer/__init__.py", line 739, in _multipart_upload
uploader.upload_file (filename, bucket, key, callback, extra_args)
File "/usr/local/lib/python2.7/dist-packages/boto3/s3transfer/__init__.py", line 393, in upload_file
filename, '/'.join([bucket, key]), e))
S3UploadFailedError: Failed to upload [file] to [bucket]: (104, 'ECONNRESET')
Does boto3 provide any retry guarantees for uploads, and if so, where is this logic implemented / documented?
For now, I have decided to wrap my upload_file calls with the retrying package.
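A minimal sketch of that workaround, using the retrying package's @retry decorator; the attempt count, back-off values, and the upload_with_retry / _is_retryable names are my own choices for illustration:

    import boto3
    from boto3.exceptions import S3UploadFailedError
    from retrying import retry

    client = boto3.client('s3')

    def _is_retryable(exception):
        # Retry only the transfer failure seen in the traceback above;
        # anything else (e.g. access denied) should fail immediately.
        return isinstance(exception, S3UploadFailedError)

    @retry(retry_on_exception=_is_retryable,
           stop_max_attempt_number=5,
           wait_exponential_multiplier=1000,  # exponential back-off, in milliseconds
           wait_exponential_max=30000)
    def upload_with_retry(filename, bucket, key):
        client.upload_file(filename, bucket, key)

This retries the whole transfer from the beginning, which is wasteful for multi-gigabyte files, hence the question about whether boto3 already retries failed parts on its own.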