EntityTooSmall in CompleteMultipartUploadResponse

Using .NET SDK v.1.5.21.0.

I am trying to upload a large file (63 MB) and I am following this example:

http://docs.aws.amazon.com/AmazonS3/latest/dev/LLuploadFileDotNet.html

But instead of uploading the whole file at once as in that example, I am sending it in chunks using jQuery File Upload:

https://github.com/blueimp/jQuery-File-Upload/blob/master/basic-plus.html

What I have:

    string bucket = "mybucket";
    long totalSize = long.Parse(context.Request.Headers["X-File-Size"]),
         maxChunkSize = long.Parse(context.Request.Headers["X-File-MaxChunkSize"]),
         uploadedBytes = long.Parse(context.Request.Headers["X-File-UloadedBytes"]),
         partNumber = uploadedBytes / maxChunkSize + 1,
         fileSize = partNumber * inputStream.Length;
    bool lastPart = inputStream.Length < maxChunkSize;

    // http://docs.aws.amazon.com/AmazonS3/latest/dev/LLuploadFileDotNet.html
    if (partNumber == 1) // initialize upload
    {
        iView.Utilities.Amazon_S3.S3MultipartUpload.InitializePartToCloud(fileName, bucket);
    }

    try
    {
        // upload part
        iView.Utilities.Amazon_S3.S3MultipartUpload.UploadPartToCloud(fs, fileName, bucket,
            (int)partNumber, uploadedBytes, maxChunkSize);

        if (lastPart)
            // wrap it up and go home
            iView.Utilities.Amazon_S3.S3MultipartUpload.CompletePartToCloud(fileName, bucket);
    }
    catch (System.Exception ex)
    {
        // Houston, we have a problem!
        //Console.WriteLine("Exception occurred: {0}", exception.Message);
        iView.Utilities.Amazon_S3.S3MultipartUpload.AbortPartToCloud(fileName, bucket);
    }

and

    public static class S3MultipartUpload
    {
        private static string accessKey = System.Configuration.ConfigurationManager.AppSettings["AWSAccessKey"];
        private static string secretAccessKey = System.Configuration.ConfigurationManager.AppSettings["AWSSecretKey"];
        private static AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(accessKey, secretAccessKey);

        public static InitiateMultipartUploadResponse initResponse;
        public static List<UploadPartResponse> uploadResponses;

        public static void InitializePartToCloud(string destinationFilename, string destinationBucket)
        {
            // 1. Initialize.
            uploadResponses = new List<UploadPartResponse>();

            InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest()
                .WithBucketName(destinationBucket)
                .WithKey(destinationFilename.TrimStart('/'));
            initResponse = client.InitiateMultipartUpload(initRequest);
        }

        public static void UploadPartToCloud(Stream fileStream, string destinationFilename, string destinationBucket,
            int partNumber, long uploadedBytes, long maxChunkedBytes)
        {
            // 2. Upload Parts.
            UploadPartRequest request = new UploadPartRequest()
                .WithBucketName(destinationBucket)
                .WithKey(destinationFilename.TrimStart('/'))
                .WithUploadId(initResponse.UploadId)
                .WithPartNumber(partNumber)
                .WithPartSize(maxChunkedBytes)
                .WithFilePosition(uploadedBytes)
                .WithInputStream(fileStream) as UploadPartRequest;

            uploadResponses.Add(client.UploadPart(request));
        }

        public static void CompletePartToCloud(string destinationFilename, string destinationBucket)
        {
            // Step 3: complete.
            CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest()
                .WithBucketName(destinationBucket)
                .WithKey(destinationFilename.TrimStart('/'))
                .WithUploadId(initResponse.UploadId)
                .WithPartETags(uploadResponses);

            CompleteMultipartUploadResponse completeUploadResponse = client.CompleteMultipartUpload(compRequest);
        }

        public static void AbortPartToCloud(string destinationFilename, string destinationBucket)
        {
            // abort.
            client.AbortMultipartUpload(new AbortMultipartUploadRequest()
                .WithBucketName(destinationBucket)
                .WithKey(destinationFilename.TrimStart('/'))
                .WithUploadId(initResponse.UploadId));
        }
    }

My maxChunkSize is 6 * (1024 * 1024) (6 MB), since I read that each part has to be at least 5 MB.
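Just to be explicit about the rule I am trying to follow (every part except the last must be at least 5 MB = 5242880 bytes), this is the hypothetical check I assumed my chunk size satisfies:

    const long MinPartSize = 5 * 1024 * 1024; // 5242880 bytes, S3's minimum for every part except the last
    bool lastPart = inputStream.Length < maxChunkSize;
    if (!lastPart && inputStream.Length < MinPartSize)
        throw new System.InvalidOperationException("Part is below S3's 5 MB minimum");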

Why am I getting the exception "Your proposed upload is smaller than the minimum allowed size"? What am I doing wrong?

Error:

    <Error>
      <Code>EntityTooSmall</Code>
      <Message>Your proposed upload is smaller than the minimum allowed size</Message>
      <ETag>d41d8cd98f00b204e9800998ecf8427e</ETag>
      <MinSizeAllowed>5242880</MinSizeAllowed>
      <ProposedSize>0</ProposedSize>
      <RequestId>C70E7A23C87CE5FC</RequestId>
      <HostId>pmhuMXdRBSaCDxsQTHzucV5eUNcDORvKY0L4ZLMRBz7Ch1DeMh7BtQ6mmfBCLPM2</HostId>
      <PartNumber>1</PartNumber>
    </Error>

How can ProposedSize be 0 when I am passing a stream along with its length?

2 answers

Here is a working solution for the latest Amazon SDK (as of today: v.1.5.37.0).

Amazon S3 multipart upload works like this (a minimal sketch follows the list):

  • Initialize the upload using client.InitiateMultipartUpload(initRequest)
  • Send the pieces of the file (looping until the end) using client.UploadPart(request)
  • Complete the upload using client.CompleteMultipartUpload(compRequest)
  • If something goes wrong, be sure to dispose of the client and the response objects, and run the abort command using client.AbortMultipartUpload(abortMultipartUploadRequest)
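
To make those steps concrete, here is a minimal sketch of the whole flow in one place. It uses only calls that appear in the class below (property-style requests, PartETag bookkeeping); bucketName, keyName, the credentials, and the file path are placeholders, and it assumes the SDK reads PartSize bytes from the stream's current position:

    // requires: using System; using System.IO; using System.Collections.Generic;
    //           using Amazon.S3; using Amazon.S3.Model;
    AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(accessKey, secretAccessKey);

    List<PartETag> partETags = new List<PartETag>();
    InitiateMultipartUploadResponse init = client.InitiateMultipartUpload(
        new InitiateMultipartUploadRequest { BucketName = bucketName, Key = keyName });

    try
    {
        long partSize = 6 * 1024 * 1024; // every part except the last must be >= 5 MB

        using (FileStream fs = File.OpenRead(filePath))
        {
            for (int partNumber = 1; fs.Position < fs.Length; partNumber++)
            {
                // size the part: full chunks, then whatever is left at the end
                long size = Math.Min(partSize, fs.Length - fs.Position);

                UploadPartResponse up = client.UploadPart(new UploadPartRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    UploadId = init.UploadId,
                    PartNumber = partNumber,
                    PartSize = size,
                    InputStream = fs
                });

                // keep the ETag of every part; S3 needs them all to assemble the file
                partETags.Add(new PartETag { ETag = up.ETag, PartNumber = partNumber });
            }
        }

        client.CompleteMultipartUpload(new CompleteMultipartUploadRequest
        {
            BucketName = bucketName,
            Key = keyName,
            UploadId = init.UploadId,
            PartETags = partETags
        });
    }
    catch
    {
        // abort so S3 does not keep the orphaned parts
        client.AbortMultipartUpload(new AbortMultipartUploadRequest
        {
            BucketName = bucketName,
            Key = keyName,
            UploadId = init.UploadId
        });
        throw;
    }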

I keep the client in Session, as we need it for every chunk upload, and for each part I save the returned ETag, which is later used to complete the upload.


You can see an example and an easy way to do this in the Amazon Docs. I wrote a class that does everything, and I integrated it with the excellent jQuery File Upload (handler code below as well).

The S3MultipartUpload class is as follows:

    public class S3MultipartUpload : IDisposable
    {
        string accessKey = System.Configuration.ConfigurationManager.AppSettings.Get("AWSAccessKey");
        string secretAccessKey = System.Configuration.ConfigurationManager.AppSettings.Get("AWSSecretKey");

        AmazonS3 client;

        public string OriginalFilename { get; set; }
        public string DestinationFilename { get; set; }
        public string DestinationBucket { get; set; }

        public InitiateMultipartUploadResponse initResponse;
        public List<PartETag> uploadPartETags;
        public string UploadId { get; private set; }

        public S3MultipartUpload(string destinationFilename, string destinationBucket)
        {
            if (client == null)
            {
                System.Net.WebRequest.DefaultWebProxy = null; // disable proxy to make upload quicker
                client = Amazon.AWSClientFactory.CreateAmazonS3Client(accessKey, secretAccessKey,
                    new AmazonS3Config()
                    {
                        RegionEndpoint = Amazon.RegionEndpoint.EUWest1,
                        CommunicationProtocol = Protocol.HTTP
                    });

                this.OriginalFilename = destinationFilename.TrimStart('/');
                this.DestinationFilename = string.Format("{0:yyyy}{0:MM}{0:dd}{0:HH}{0:mm}{0:ss}{0:fffff}_{1}", DateTime.UtcNow, this.OriginalFilename);
                this.DestinationBucket = destinationBucket;

                this.InitializePartToCloud();
            }
        }

        private void InitializePartToCloud()
        {
            // 1. Initialize.
            uploadPartETags = new List<PartETag>();

            InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest();
            initRequest.BucketName = this.DestinationBucket;
            initRequest.Key = this.DestinationFilename;

            // make it public
            initRequest.AddHeader("x-amz-acl", "public-read");

            initResponse = client.InitiateMultipartUpload(initRequest);
        }

        public void UploadPartToCloud(Stream fileStream, long uploadedBytes, long maxChunkedBytes)
        {
            int partNumber = uploadPartETags.Count() + 1; // current part

            // 2. Upload Parts.
            UploadPartRequest request = new UploadPartRequest();
            request.BucketName = this.DestinationBucket;
            request.Key = this.DestinationFilename;
            request.UploadId = initResponse.UploadId;
            request.PartNumber = partNumber;
            request.PartSize = fileStream.Length;
            //request.FilePosition = uploadedBytes // remove this line?
            request.InputStream = fileStream; // as UploadPartRequest;

            var up = client.UploadPart(request);
            uploadPartETags.Add(new PartETag() { ETag = up.ETag, PartNumber = partNumber });
        }

        public string CompletePartToCloud()
        {
            // Step 3: complete.
            CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest();
            compRequest.BucketName = this.DestinationBucket;
            compRequest.Key = this.DestinationFilename;
            compRequest.UploadId = initResponse.UploadId;
            compRequest.PartETags = uploadPartETags;

            string r = "Something went badly wrong";

            using (CompleteMultipartUploadResponse completeUploadResponse = client.CompleteMultipartUpload(compRequest))
                r = completeUploadResponse.ResponseXml;

            return r;
        }

        public void AbortPartToCloud()
        {
            // abort.
            client.AbortMultipartUpload(new AbortMultipartUploadRequest()
            {
                BucketName = this.DestinationBucket,
                Key = this.DestinationFilename,
                UploadId = initResponse.UploadId
            });
        }

        public void Dispose()
        {
            if (client != null) client.Dispose();
            if (initResponse != null) initResponse.Dispose();
        }
    }

I use DestinationFilename (a timestamped name) as the destination key, so files with the same name do not collide, but I keep OriginalFilename in case it is needed.
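For illustration (hypothetical name and time), the format string in the constructor produces keys like this:

    // "{0:yyyy}{0:MM}{0:dd}{0:HH}{0:mm}{0:ss}{0:fffff}_{1}" applied to
    // DateTime.UtcNow and "photo.jpg" yields something like:
    //   2013062514305212345_photo.jpg
    string key = string.Format(
        "{0:yyyy}{0:MM}{0:dd}{0:HH}{0:mm}{0:ss}{0:fffff}_{1}",
        DateTime.UtcNow, "photo.jpg");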

Using the jQuery File Upload plugin, everything runs inside a Generic Handler, and the process looks something like this:

    // Upload partial file
    private void UploadPartialFile(string fileName, HttpContext context, List<FilesStatus> statuses)
    {
        if (context.Request.Files.Count != 1)
            throw new HttpRequestValidationException("Attempt to upload chunked file containing more than one fragment per request");

        var inputStream = context.Request.Files[0].InputStream;
        string contentRange = context.Request.Headers["Content-Range"]; // "bytes 0-6291455/14130271"

        int fileSize = int.Parse(contentRange.Split('/')[1]),
            maxChunkSize = int.Parse(context.Request.Headers["X-Max-Chunk-Size"]),
            uploadedBytes = int.Parse(contentRange.Replace("bytes ", "").Split('-')[0]);

        iView.Utilities.AWS.S3MultipartUpload s3Upload = null;
        try
        {
            // ######################################################################################
            // 1. Initialize Amazon S3 Client
            if (uploadedBytes == 0)
            {
                HttpContext.Current.Session["s3-upload"] = new iView.Utilities.AWS.S3MultipartUpload(fileName, awsBucket);

                s3Upload = (iView.Utilities.AWS.S3MultipartUpload)HttpContext.Current.Session["s3-upload"];
                string msg = System.String.Format("Upload started: {0} ({1:N0}Mb)",
                    s3Upload.DestinationFilename, (fileSize / 1024 / 1024)); // bytes to MB
                this.Log(msg);
            }

            // cast current session object
            if (s3Upload == null)
                s3Upload = (iView.Utilities.AWS.S3MultipartUpload)HttpContext.Current.Session["s3-upload"];

            // ######################################################################################
            // 2. Send Chunks
            s3Upload.UploadPartToCloud(inputStream, uploadedBytes, maxChunkSize);

            // ######################################################################################
            // 3. Complete Upload
            if (uploadedBytes + maxChunkSize > fileSize)
            {
                string completeRequest = s3Upload.CompletePartToCloud();
                this.Log(completeRequest); // log S3 response

                s3Upload.Dispose(); // dispose all objects
                HttpContext.Current.Session["s3-upload"] = null; // we don't need this anymore
            }
        }
        catch (System.Exception ex)
        {
            // unwrap to the innermost exception
            while (ex.InnerException != null)
                ex = ex.InnerException;

            this.Log(string.Format("{0}\n\n{1}", ex.Message, ex.StackTrace)); // log error

            s3Upload.AbortPartToCloud(); // abort current upload
            s3Upload.Dispose(); // dispose all objects

            statuses.Add(new FilesStatus(ex.Message));
            return;
        }

        statuses.Add(new FilesStatus(s3Upload.DestinationFilename, fileSize, ""));
    }

Keep in mind that to access the Session object inside a generic handler, you need to implement IRequiresSessionState, so your handler looks like this:

 public class UploadHandlerSimple : IHttpHandler, IRequiresSessionState 

Inside fileupload.js (under _initXHRData) I added an extra header called X-Max-Chunk-Size, so I can pass the chunk size to the handler and calculate whether the current chunk is the last part of the uploaded file.


Feel free to comment and suggest improvements for everyone.


I think you did not set the content length of the part inside the UploadPartToCloud() function.
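
For example, in the question's fluent-style UploadPartToCloud(), the part size could be taken from the stream that is actually sent rather than from maxChunkedBytes; a sketch of that one change:

    // sketch: size the part from the chunk actually being sent,
    // so the request carries a correct content length
    UploadPartRequest request = new UploadPartRequest()
        .WithBucketName(destinationBucket)
        .WithKey(destinationFilename.TrimStart('/'))
        .WithUploadId(initResponse.UploadId)
        .WithPartNumber(partNumber)
        .WithPartSize(fileStream.Length) // was maxChunkedBytes
        .WithInputStream(fileStream) as UploadPartRequest;
    // note: .WithFilePosition(uploadedBytes) removed, since the chunk stream is not a file on disk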

