How can I make sure my users are downloading a new version of my S3 file?

This is what's inside my bash script:

s3cmd --add-header='Content-Encoding':'gzip' put /home/media/main.js s3://myproject/media/main.js 

This is what I do to upload my compressed (gzipped) JavaScript file to Amazon S3. I run this command every time I change my JavaScript files.

However, when I refresh the page in Chrome, Chrome still uses the cached version.

Request Headers:

 Accept: */*
 Accept-Encoding: gzip, deflate, sdch
 Accept-Language: en-US,en;q=0.8,es;q=0.6
 AlexaToolbar-ALX_NS_PH: AlexaToolbar/alxg-3.3
 Cache-Control: max-age=0
 Connection: keep-alive
 Host: myproject.s3.amazonaws.com
 If-Modified-Since: Thu, 04 Dec 2014 09:21:46 GMT
 If-None-Match: "5ecfa32f291330156189f17b8945a6e3"
 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36

Response headers:

 Accept-Ranges: bytes
 Content-Encoding: gzip
 Content-Length: 70975
 Content-Type: application/javascript
 Date: Thu, 04 Dec 2014 09:50:06 GMT
 ETag: "85041deb28328883dd88ff761b10ece4"
 Last-Modified: Thu, 04 Dec 2014 09:50:01 GMT
 Server: AmazonS3
 x-amz-id-2: 4fGKhKO8ZQowKIIFIMXgUo7OYEusZzSX4gXgp5cPzDyaUGcwY0h7BTAW4Xi4Gci0Pu2KXQ8=
 x-amz-request-id: 5374BDB48F85796

Note that the ETag is different: I changed the file, and this is what I get when I refresh the page. Chrome is still using my old file.

+5
3 answers

It looks like your script is being aggressively cached, either by Chrome itself or by some intermediate cache.

If this is a JS file referenced from an HTML page (which is what it sounds like), one common approach is to have the page add a version parameter to the file reference:

 <script src="/media/main.js?v=123"></script> 

or

 <script src="/media/main.js?v=2015-01-03_01"></script> 

... which you change whenever the JS is updated (the parameter itself is ignored by the server). Neither the browser nor any intermediate cache will recognize the new URL as the same resource, so they won't serve the cached version, even though the file name on your S3 server hasn't changed.

Whenever you release, you update this number or date, ideally automatically, for instance if your template engine has access to the application's version number or build identifier.
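For example, here is a minimal sketch of deriving the version parameter from the file's own checksum (the paths and the index.html reference are hypothetical; assumes md5sum and sed are available):

 # Derive a short version token from the file's content, so it changes
 # whenever the file changes:
 VERSION=$(md5sum /home/media/main.js | cut -c1-8)
 # Rewrite the ?v= parameter in the page that references the script:
 sed -i "s|media/main.js?v=[^\"]*|media/main.js?v=${VERSION}|" /home/templates/index.html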

This is not the most elegant solution, but it is a useful one to fall back on if you ever find you have set a long cache lifetime as an optimization.

Obviously, this only works if the new file has actually been uploaded to S3 and S3 is serving the new version. If you have any doubts, check with a command-line utility such as curl or wget against the JavaScript URL.
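For example, a header-only request against the bucket URL from the question (curl's -I flag fetches only the response headers):

 # Compare the ETag / Last-Modified shown here with what the browser has cached:
 curl -I http://myproject.s3.amazonaws.com/media/main.js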

+9

Invalidation method

 s3cmd -P --cf-invalidate put /home/media/main.js s3://myproject/media/main.js

 --cf-invalidate      Invalidate the uploaded file in CloudFront.
 -P, --acl-public     Store objects with ACL allowing read for anyone.

This will invalidate the CloudFront cache for the file you specified. It is also possible to invalidate your entire distribution, but the command above does what I think you want in this scenario.
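For reference, if you use the AWS CLI rather than s3cmd, a roughly equivalent invalidation looks like this (the distribution ID is a placeholder; substitute your own):

 # Invalidate a single path in a CloudFront distribution:
 aws cloudfront create-invalidation \
     --distribution-id E1EXAMPLE \
     --paths "/media/main.js"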

Note: the first 1,000 invalidation requests per month are free. After that it's about $0.005 per file, so if you are doing a lot of invalidation requests, this can become a cost concern.

Query String Method / Object Method

CloudFront can include the query string of a URL (and forward it to the origin) when caching an object. This means that even if you have the exact same object duplicated under URLs whose query strings differ, each one is cached as a different object. For this to work correctly, you need to select Yes for Forward Query Strings in the CloudFront console, or set the QueryString element to true in the DistributionConfig complex type when you use the CloudFront API.

An example:

 http://myproject/media/main.js?parameter1=a 
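With forwarding enabled, you can observe this from the command line; the two requests below are cached as separate entries even though they hit the same object (the URLs mirror the hypothetical example above):

 curl -I "http://myproject/media/main.js?parameter1=a"
 curl -I "http://myproject/media/main.js?parameter1=b"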

Summary

Invalidation is the most direct way to ensure that the object being served is current, though if you do not mind managing query string parameters, you should find that method equally effective. Adjusting headers is not as reliable as the methods above; clients handle caching in too many different ways, and it is not easy to tell where the caching problems occur.

0

You need the response from S3 to include a Cache-Control header. You can set this when uploading the file:

 s3cmd --add-header="cache-control:max-age=0,no-cache" put file s3://your_bucket/ 

The absence of spaces and capital letters in my example is due to some odd signature problem with s3cmd. Your mileage may vary.

After re-uploading the file with this command, you should see the Cache-Control header in S3's responses.
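You can verify with a header-only request (the bucket URL is a placeholder matching the upload command above):

 # Look for the Cache-Control line in the output:
 curl -I http://your_bucket.s3.amazonaws.com/file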

0
