Gracefully handling S3 403 errors after a getSignedUrl URL expires

I am trying to gracefully handle the 403 returned when an S3 resource is visited through an expired signed URL. S3 currently returns its standard Amazon XML error page. I put together a 403.html page and thought I could redirect to it.

The bucket holds assets stored and retrieved by my application. While reading the docs, I enabled static website hosting on the bucket and uploaded a 403.html file to the root. All public access is blocked, except public GET access to the 403.html resource. In the bucket's static website settings I set 403.html as the error document. Visiting http://<bucket>.s3-website-us-east-1.amazonaws.com/some-asset.html correctly redirected to http://<bucket>.s3-website-us-east-1.amazonaws.com/403.html.

However, when I use the aws-sdk for JavaScript/Node and call getSignedUrl('getObject', params) to create a signed URL, it returns a URL on a different host: https://<bucket>.s3.amazonaws.com/. Visiting an expired resource through such a URL is not redirected to 403.html. I assume the different hostname is the reason the redirect doesn't happen.
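For what it's worth, the expiry of a V2-style signed URL is visible in its Expires query parameter (a Unix timestamp), so staleness can at least be detected client-side before the link is followed. A minimal sketch (the helper name is mine, not anything from aws-sdk):

```javascript
// A V2-style presigned URL carries its expiry as a Unix timestamp in the
// "Expires" query parameter, so staleness can be checked before navigating.
// The function name is my own, not part of aws-sdk.
function isSignedUrlExpired(signedUrl, nowMs = Date.now()) {
  const expires = new URL(signedUrl).searchParams.get('Expires');
  if (expires === null) return false; // no Expires param: not a V2 signed URL
  return nowMs / 1000 >= Number(expires);
}
```

This only avoids showing the ugly page up front, though; it doesn't help with URLs that expire after the check.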

I also added a static website routing rule for this condition:

 <Condition>
   <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
 </Condition>
 <Redirect>
   <ReplaceKeyWith>403.html</ReplaceKeyWith>
 </Redirect>

However, this doesn't redirect signed URLs either. So I don't understand how to gracefully handle these expired URLs. Any help would be greatly appreciated.

1 answer

S3 buckets expose two distinct interfaces: REST and website. That's why there are two different hostnames, and why you see different behavior from each.

The two endpoints have different feature sets:

 Feature          REST Endpoint    Website Endpoint
 ---------------  ---------------  --------------------------------------
 Access control   yes              no, public content only
 Error messages   XML              HTML
 Redirection      no               yes, bucket-, rule-, and object-level
 Request types    all supported    GET and HEAD only
 Root of bucket   lists keys       returns index document
 SSL              yes              no

Source: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html

So, as the table shows, the REST endpoint supports signed URLs but not friendly errors, while the website endpoint supports friendly errors but not signed URLs. The two can't be mixed and matched, so what you are trying to do is not supported directly by S3.


I worked around this limitation by routing all requests for the bucket through HAProxy on an EC2 instance, which forwards them to the bucket's REST endpoint.

When a 403 error comes back, the proxy rewrites the response body using HAProxy's new embedded Lua interpreter, inserting this line just before the <Error> element:

 <?xml-stylesheet type="text/xsl" href="/error.xsl"?>\n 
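The rewrite itself is just a string splice in front of the first <Error>. The actual implementation lives in HAProxy's Lua, but the transformation is equivalent to this JavaScript sketch (function name mine):

```javascript
// Prepend the xml-stylesheet processing instruction immediately before the
// <Error> element of the S3 XML error body. Illustrative only; the real
// proxy does this in Lua inside HAProxy.
function addStylesheet(xmlBody) {
  return xmlBody.replace(
    '<Error>',
    '<?xml-stylesheet type="text/xsl" href="/error.xsl"?>\n<Error>'
  );
}
```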

The /error.xsl file is publicly readable and uses browser-side XSLT to render a reasonably friendly, presentable error page.

The proxy also injects a couple of extra tags into the XML, <ProxyTime> and <ProxyHTTPCode>, for use in the rendered output. The resulting XML looks like this:

 <?xml version="1.0" encoding="UTF-8"?>
 <?xml-stylesheet type="text/xsl" href="/error.xsl"?>
 <Error>
   <ProxyTime>2015-10-13T17:36:01Z</ProxyTime>
   <ProxyHTTPCode>403</ProxyHTTPCode>
   <Code>AccessDenied</Code>
   <Message>Access Denied</Message>
   <RequestId>9D3E05D20C1BD6AC</RequestId>
   <HostId>WvdkvIRIDMjfa/1Oi3DGVOTR0hABCDEFGHIJKLMNOPQRSTUVWXYZ+B8thZahg7W/I/ExAmPlEAQ=</HostId>
 </Error>
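Injecting those tags is a similar splice, just after the opening <Error> tag; again the real implementation is Lua in the proxy, but the effect matches this sketch (function name mine):

```javascript
// Insert <ProxyTime> and <ProxyHTTPCode> right after the opening <Error>
// tag. Illustrative only; the author's proxy does this in HAProxy Lua.
function addProxyTags(xmlBody, httpCode, now = new Date()) {
  // Trim milliseconds to get the 2015-10-13T17:36:01Z style timestamp
  const stamp = now.toISOString().replace(/\.\d{3}Z$/, 'Z');
  return xmlBody.replace(
    '<Error>',
    `<Error><ProxyTime>${stamp}</ProxyTime><ProxyHTTPCode>${httpCode}</ProxyHTTPCode>`
  );
}
```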

I then tailor the output shown to the user with XSL tests that determine which S3 error condition occurred:

 <xsl:if test="//Code = 'AccessDenied'">
   <p>It seems we may have provided you with a link to a resource to which you do not have access, or a resource which does not exist, or that our internal security mechanisms were unable to reach consensus on your authorization to view it.</p>
 </xsl:if>

And the end result is as follows:

[Screenshot: browser rendering of the friendly error page]

The above is the generic "Access Denied", shown when no credentials are provided at all. Here is an example for an expired signature:

[Screenshot: browser rendering of the expired-signature error page]

I don't include the HostId in the output because it is ugly and noisy, and if I ever need it, the proxy has already captured and logged it for me; I can cross-reference it by request ID.

As a bonus, of course, running requests through my own proxy means I can serve bucket content under my own domain name with my own SSL certificate, and I get real-time access logs with no delay. When the proxy is in the same region as the bucket, there is no charge for the extra data-transfer hop, and I have been very happy with this setup.


Source: https://habr.com/ru/post/1233639/

