Lambda function subscribed to S3 Put is not triggered for large files

I am currently working on storing email attachments separately from the .eml file itself. I have an SES receipt rule that writes incoming email to an S3 bucket. When the bucket receives the email, a Lambda function subscribed to the S3 Put event parses the raw message (MIME format), base64-decodes the attachment bodies, and calls putObject for each attachment and for the source .eml file in a second bucket.
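For reference, this is roughly what the handler does (a simplified sketch, not my exact code; the ATTACHMENT_BUCKET value and the key layout are placeholders):

```python
import email
import os
import urllib.parse

import boto3

s3 = boto3.client("s3")
# Placeholder: destination bucket for extracted attachments and the source .eml
ATTACHMENT_BUCKET = os.environ.get("ATTACHMENT_BUCKET", "my-attachment-bucket")


def handler(event, context):
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Read the raw MIME message stored by the SES receipt rule
        raw = s3.get_object(Bucket=src_bucket, Key=key)["Body"].read()
        msg = email.message_from_bytes(raw)

        # Copy the original .eml alongside the extracted attachments
        s3.put_object(Bucket=ATTACHMENT_BUCKET, Key=f"{key}/source.eml", Body=raw)

        for part in msg.walk():
            filename = part.get_filename()
            if not filename:
                continue
            # get_payload(decode=True) base64-decodes the attachment body
            s3.put_object(
                Bucket=ATTACHMENT_BUCKET,
                Key=f"{key}/{filename}",
                Body=part.get_payload(decode=True),
            )
```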

My problem is that this Lambda function does not run for emails with attachments larger than roughly 3-4 MB. The email is received and stored in the initial bucket, but the function is not triggered when it arrives, and no invocation appears in CloudWatch. However, the function works fine when I test it manually with a hard-coded S3 Put payload, and also when I upload the .eml file to the bucket by hand.

Any idea why there is such a restriction? Could it be a bucket permission problem, or a problem with the Lambda execution role? From manual testing I have ruled out a timeout or exceeding the memory limit.

+7
3 answers

Large files are almost certainly uploaded via S3 multipart upload instead of a single Put operation. You need to configure the bucket notification so that the Lambda function is also invoked for completed multipart uploads. It sounds like the function is currently only subscribed to s3:ObjectCreated:Put events; add s3:ObjectCreated:CompleteMultipartUpload to the configuration.
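For example, with boto3 the notification configuration could be updated along these lines (bucket name and function ARN are placeholders; the same change can be made in the console under the bucket's event notifications):

```python
import boto3

s3 = boto3.client("s3")

# Placeholders: replace with your incoming-mail bucket and Lambda ARN
BUCKET = "incoming-mail-bucket"
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:process-email"

# Note: this call replaces the bucket's entire notification configuration,
# so include every rule you want to keep.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": LAMBDA_ARN,
                # Either list both events explicitly...
                "Events": [
                    "s3:ObjectCreated:Put",
                    "s3:ObjectCreated:CompleteMultipartUpload",
                ],
                # ...or use "s3:ObjectCreated:*" to cover Put, Post, Copy
                # and CompleteMultipartUpload with one rule.
            }
        ]
    },
)
```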

+17

I ran into the same problem. If the ETag of the object you uploaded to S3 ends with a hyphen followed by a number, the object was uploaded via multipart upload. Subscribing to the CompleteMultipartUpload event solved the problem for me.
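A quick way to check this programmatically (a boto3 sketch; the hyphen-plus-part-count ETag format is a heuristic, not an official API guarantee):

```python
import re

import boto3

s3 = boto3.client("s3")


def was_multipart_upload(bucket, key):
    """Heuristic: multipart ETags look like '<md5>-<part count>'."""
    etag = s3.head_object(Bucket=bucket, Key=key)["ETag"].strip('"')
    return re.fullmatch(r"[0-9a-f]{32}-\d+", etag) is not None
```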

+2

I subscribed to the s3:ObjectCreated:CompleteMultipartUpload event as suggested above.

I later realized that in my case the problem was the Lambda timeout. That can also be a potential cause.
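If the timeout turns out to be the issue, it can be inspected and raised with boto3 (the function name here is a placeholder):

```python
import boto3

lam = boto3.client("lambda")

# Placeholder: replace with your function name
FUNCTION_NAME = "process-email"

current = lam.get_function_configuration(FunctionName=FUNCTION_NAME)
print("Current timeout:", current["Timeout"], "seconds")

# Raise the timeout so large MIME messages can be parsed
# before the function is killed.
lam.update_function_configuration(FunctionName=FUNCTION_NAME, Timeout=120)
```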

0

Source: https://habr.com/ru/post/1659642/
