Bandwidth Limit Using Nginx as a Proxy for Amazon S3

I have large files for download (some of them larger than 5 GB) hosted on Amazon S3. My main server runs Nginx. The Amazon S3 bucket is not publicly accessible; files are served via signed URLs.

Is there a way to limit bandwidth when serving from Amazon S3? I know this is not possible on Amazon S3 itself, but can we use Nginx as a proxy server and enforce the limit there?

I am trying to use an example from this link:

https://coderwall.com/p/rlguog/nginx-as-proxy-for-amazon-s3-public-private-files

This code block:

location ~* ^/proxy_private_file/(.*) {
  set $s3_bucket        'your_bucket.s3.amazonaws.com';
  set $aws_access_key   'AWSAccessKeyId=YOUR_ONLY_ACCESS_KEY';
  set $url_expires      'Expires=$arg_e';
  set $url_signature    'Signature=$arg_st';
  set $url_full         '$1?$aws_access_key&$url_expires&$url_signature';

  proxy_http_version     1.1;
  proxy_set_header       Host $s3_bucket;
  proxy_set_header       Authorization '';
  proxy_hide_header      x-amz-id-2;
  proxy_hide_header      x-amz-request-id;
  proxy_hide_header      Set-Cookie;
  proxy_ignore_headers   "Set-Cookie";
  proxy_buffering        off;
  proxy_intercept_errors on;

  resolver               172.16.0.23 valid=300s;
  resolver_timeout       10s;

  proxy_pass             http://$s3_bucket$url_full;
}
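
For reference, I create the signed URLs in PHP along these lines. This is a simplified sketch using the legacy query-string signing (AWS Signature Version 2) that this config expects, where the expiry and signature arrive as the e and st query arguments and Nginx appends the AWSAccessKeyId itself; the function name s3_proxy_url is just illustrative, and newer regions may require Signature Version 4:

<?php
// Illustrative helper: builds a proxy URL whose e/st arguments match
// the nginx block above. nginx appends AWSAccessKeyId on its own.
function s3_proxy_url(string $objectKey, string $bucket, string $secretKey): string
{
    $expires = time() + 3600; // link valid for one hour

    // Legacy AWS Signature Version 2 string-to-sign for a plain GET
    // (empty Content-MD5 and Content-Type lines)
    $stringToSign = "GET\n\n\n{$expires}\n/{$bucket}/{$objectKey}";

    $signature = urlencode(base64_encode(
        hash_hmac('sha1', $stringToSign, $secretKey, true)
    ));

    return "/proxy_private_file/{$objectKey}?e={$expires}&st={$signature}";
}

echo s3_proxy_url('videos/big-file.mp4', 'your_bucket', 'YOUR_SECRET_KEY');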

What I don’t understand is how to pass the signed URL generated in PHP to this Nginx config, so that I can tell Nginx to fetch from that signed URL as a proxy.

Yes, this is possible. The solution:

Bandwidth and connection limiting are built into Nginx's http module. First, define a shared-memory zone so that connections can be limited per IP address:

limit_conn_zone $binary_remote_addr zone=addr:10m;
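
This directive must sit at the http level, not inside a server or location block. A minimal placement sketch (file paths are the usual defaults; adjust to your layout):

# /etc/nginx/nginx.conf (fragment)
http {
    # 10 MB shared zone named "addr", keyed by client IP;
    # enough to track tens of thousands of addresses
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    # server blocks, including the location below, live here
    include /etc/nginx/conf.d/*.conf;
}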

Then add the following location block to your site configuration in /etc/nginx/conf.d/sitename.conf. It receives internal redirects from PHP:

location ~* ^/internal_redirect/(.*?)/(.*) {
    # Do not allow people to request this location directly;
    # only internal redirects are allowed
    internal;

    # Location-specific logging, so we can clearly see which requests
    # pass through the proxy and what happens to them
    access_log /var/log/nginx/internal_redirect.access.log main;
    error_log  /var/log/nginx/internal_redirect.error.log warn;

    # Extract the download host and URI from the request
    set $download_uri  $2;
    set $download_host $1;

    # Extract the file name; the query arguments are the signed-URL
    # part that you need to fetch the file from the S3 servers
    if ($download_uri ~* "([^/]*$)") {
        set $filename $1;
    }

    # Compose the download URL
    set $download_url $download_host/$download_uri?$args;

    # Set download request headers
    proxy_http_version   1.1;
    proxy_set_header     Connection "";
    proxy_hide_header    x-amz-id-2;
    proxy_hide_header    x-amz-request-id;
    proxy_hide_header    Set-Cookie;
    proxy_ignore_headers "Set-Cookie";

    # Activate proxy buffering; without it, limiting the bandwidth
    # in the proxy will not work!
    proxy_buffering on;

    # Buffer up to 512 KB of data (32 buffers of 16 KB each)
    proxy_buffers 32 16k;

    proxy_intercept_errors on;
    resolver               8.8.8.8 valid=300s;
    resolver_timeout       10s;

    # The next two lines can be used if your storage backend does not
    # support the Content-Disposition header, which tells browsers what
    # file name to use when saving the content to disk
    proxy_hide_header Content-Disposition;
    add_header Content-Disposition 'attachment; filename="$filename"';

    # Do not touch local disks when proxying content to clients
    proxy_max_temp_file_size 0;

    # Limit each IP address to one connection
    limit_conn addr 1;

    # Limit the bandwidth to 300 kilobytes per second
    proxy_limit_rate 300k;

    # Set the logging level to info so we can see everything.
    # Available levels: info | notice | warn | error
    limit_conn_log_level info;
    # Finally, fetch the file and send it to the client.
    # Beware: the URL passed from PHP must not include "http://" or
    # "https://" (the scheme is prepended here); including it would
    # cause an "invalid port in upstream" error.
    proxy_pass http://$download_url;
}

Finally, in PHP, hand the signed URL over to Nginx with an X-Accel-Redirect header:

header( 'X-Accel-Redirect: ' . '/internal_redirect/' . $YOUR_SIGNED_URL );
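
To make the hand-off concrete, here is a sketch of the PHP side, assuming the AWS SDK for PHP v3 (bucket, key, and region are placeholders): it creates the pre-signed URL, strips the scheme as the comment in the config above requires, and issues the internal redirect:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Credentials are picked up from the environment or instance profile
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',   // placeholder region
]);

// Build a pre-signed GET request for the object (placeholder bucket/key)
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => 'your_bucket',
    'Key'    => 'videos/big-file.mp4',
]);
$request   = $s3->createPresignedRequest($cmd, '+1 hour');
$signedUrl = (string) $request->getUri(); // https://your_bucket.s3...

// Strip the scheme: the location block prepends http:// itself,
// and a doubled scheme causes the "invalid port in upstream" error
$signedUrl = preg_replace('#^https?://#', '', $signedUrl);

header('X-Accel-Redirect: /internal_redirect/' . $signedUrl);
exit;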

Source: https://habr.com/ru/post/1659614/

