Permissions on Amazon S3 files: access denied after copying from another account

I have a set of video files that were copied from a bucket in another AWS account into my own bucket in my account.

Now I have a problem with all of these files: I get access-related errors when I try to make them public.

Specifically, I logged into my AWS account, went to S3, and expanded the folder structure to locate one of the video files.

When I look at this particular file, its Permissions tab shows no rights assigned to anyone: no users, groups, or system permissions.

At the bottom of the Permissions tab there is a small box that says "Error: Access Denied." I can't change anything about the file: I cannot add metadata, I cannot add a user to the file, and I cannot make the file public.

Is there a way to take control of these files so that I can make them publicly available? There are over 15,000 files, totalling about 60 GB. I would like to avoid downloading and re-uploading them all.

With some help and suggestions from people here I tried the following. I created a new folder in my bucket called "media".

I tried this command:

aws s3 cp s3://mybucket/2014/09/17/thumb.jpg s3://mybucket/media --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=my_aws_account_email_address 

This fails with: "fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden."

+13
6 answers

Very interesting puzzle! Fortunately, there is a solution.

First, a recap:

  • Bucket A in Account A
  • Bucket B in Account B
  • A user in Account A copies objects to Bucket B (having been granted the appropriate permissions to do so)
  • The objects in Bucket B still belong to Account A and are not accessible to Account B

I was able to reproduce this and confirm that users in Account B cannot access the file, not even the root user of Account B!

Fortunately, it can be fixed. The aws s3 cp command in the AWS Command Line Interface can update an object's permissions when the object is copied onto itself (same bucket, same key). However, to make that copy go through you must also change something else, otherwise you will get this error:

This copy request is illegal because it is trying to copy an object to itself without changing the object's metadata, storage class, website redirect location or encryption attributes.

Therefore, permissions can be updated with this command:

 aws s3 cp s3://my-bucket/ s3://my-bucket/ --recursive --acl bucket-owner-full-control --metadata "One=Two" 
  • It must be run by a user in Account A who has permission to access the objects (for example, the user who originally copied them to Bucket B)
  • The content of the metadata does not matter; it is only there to force the update
  • --acl bucket-owner-full-control grants Account B full control of the objects, so they can be used as usual

End result: A bucket that you can use!
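
For anyone who would rather do this from code than the CLI, here is a minimal boto3 sketch of the same in-place copy. It is an illustration of the idea above, not the answer's exact method; the bucket name is a placeholder, and it must be run with Account A credentials, just like the CLI command.

    import boto3

    s3 = boto3.client('s3')          # Account A credentials
    bucket = 'my-bucket'             # placeholder bucket name

    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get('Contents', []):
            # Copy each object onto itself, replacing the metadata so the copy
            # is legal, and granting the bucket owner (Account B) full control.
            s3.copy_object(
                Bucket=bucket,
                Key=obj['Key'],
                CopySource={'Bucket': bucket, 'Key': obj['Key']},
                MetadataDirective='REPLACE',
                Metadata={'one': 'two'},   # content does not matter
                ACL='bucket-owner-full-control',
            )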

+21
 aws s3 cp s3://account1/ s3://accountb/ --recursive --acl bucket-owner-full-control 
+1

In case someone is trying to do the same thing, but from a Hadoop/Spark job instead of the AWS CLI:

  • Step 1: Grant the user in Account A the appropriate permissions to copy objects to Bucket B (as mentioned in the answer above).
  • Step 2: Set the fs.s3a.acl.default configuration parameter via the Hadoop Configuration. It can be set in a conf file or programmatically (a PySpark variant is sketched after this list):

    Conf file:

    <property>
      <name>fs.s3a.acl.default</name>
      <description>Set a canned ACL for newly created and copied objects. Value may be Private, PublicRead, PublicReadWrite, AuthenticatedRead, LogDeliveryWrite, BucketOwnerRead, or BucketOwnerFullControl.</description>
      <value>$chooseOneFromDescription</value>
    </property>

    Program:

    spark.sparkContext.hadoopConfiguration.set("fs.s3a.acl.default", "BucketOwnerFullControl")
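
If the job is written in PySpark rather than Scala, a minimal sketch (assuming the S3A connector is on the classpath) is to pass the same Hadoop property through Spark's spark.hadoop.* configuration prefix when building the session:

    from pyspark.sql import SparkSession

    # The spark.hadoop.* prefix forwards the setting into the Hadoop
    # Configuration used by the S3A filesystem connector.
    spark = (
        SparkSession.builder
        .appName("write-to-bucket-b")   # placeholder app name
        .config("spark.hadoop.fs.s3a.acl.default", "BucketOwnerFullControl")
        .getOrCreate()
    )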

+1

To get the appropriate permissions set on newly added files, add this statement to the bucket policy:

    [...]
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/their-user"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }

And set the ACL for the newly created files in the code. Python example:

    import boto3

    client = boto3.client('s3')

    local_file_path = '/home/me/data.csv'
    bucket_name = 'my-bucket'
    bucket_file_path = 'exports/data.csv'

    client.upload_file(
        local_file_path,
        bucket_name,
        bucket_file_path,
        ExtraArgs={'ACL': 'bucket-owner-full-control'}
    )

source: https://medium.com/artificial-industry/how-to-download-files-that-others-put-in-your-aws-s3-bucket-2269e20ed041 (disclaimer: written by me)

+1

I am afraid you will not be able to transfer ownership of the objects the way you would like. Here is what you did:

The old account copies the objects to the new account.

The "right" way to do this (if you want the new account to take ownership):

The new account copies objects from the old account.

See the small but important difference? The S3 docs explain it.

That said, I think you can get away without downloading all of this: simply copy all the files within the same bucket and then delete the old ones. Make sure you can change the permissions after the copy completes. This should also save you money, since you won't pay data-transfer charges for it all.
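
If it helps, here is a minimal boto3 sketch of that copy-then-delete approach. It assumes the copying identity can actually read the source objects; the bucket, source prefix, and destination prefix are placeholders.

    import boto3

    s3 = boto3.client('s3')          # credentials of the account taking ownership
    bucket = 'my-bucket'             # placeholder bucket name
    src_prefix = 'old/'              # placeholder source prefix
    dst_prefix = 'new/'              # placeholder destination prefix

    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket, Prefix=src_prefix):
        for obj in page.get('Contents', []):
            key = obj['Key']
            new_key = dst_prefix + key[len(src_prefix):]
            # The new copy is owned by the account performing the copy.
            s3.copy_object(
                Bucket=bucket,
                Key=new_key,
                CopySource={'Bucket': bucket, 'Key': key},
            )
            s3.delete_object(Bucket=bucket, Key=key)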

0

Adding --acl bucket-owner-full-control made it work.

0
