Will I need to sign each segment file URL separately?
If a player requests directly from S3, then yes. So this is probably not the ideal approach.
One option is to put CloudFront in front of the bucket. CloudFront can be configured with an Origin Access Identity (OAI), which allows it to sign its own requests to S3 and fetch private objects on behalf of an authorized user, and CloudFront supports both signed URLs (using a different algorithm than S3, with two important differences that I will explain below) and signed cookies. Signed URLs and signed cookies in CloudFront work very similarly to each other, the important difference being that cookies can be set once and then sent automatically by the browser with each subsequent request, avoiding the need to sign individual URLs (aha!).
With both signed URLs and signed cookies in CloudFront, a custom policy gives you two additional features that are not easy to achieve with S3:
A CloudFront custom policy can include a wildcard in the resource path, so a single signature can allow access to any file under, say, /media/video1/* until the signature expires. S3 presigned URLs do not support wildcards in any form; an S3 presigned URL is only ever valid for a single object.
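To make the wildcard idea concrete, here is a minimal sketch of building a CloudFront custom policy and applying CloudFront's cookie/URL-safe base64 encoding, using only the standard library. The distribution domain is a placeholder, and the final RSA signing step (done with the CloudFront key pair's private key, e.g. via the `cryptography` package) is only described in a comment, not implemented:

```python
import base64
import json
import time

def cloudfront_safe_b64(data: bytes) -> str:
    """CloudFront's URL/cookie-safe base64: '+' -> '-', '=' -> '_', '/' -> '~'."""
    return (base64.b64encode(data)
            .decode("ascii")
            .replace("+", "-")
            .replace("=", "_")
            .replace("/", "~"))

def build_custom_policy(resource: str, expires_epoch: int) -> str:
    # A CloudFront custom policy; note the Resource may contain a wildcard,
    # which an S3 presigned URL cannot do.
    policy = {
        "Statement": [{
            "Resource": resource,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }]
    }
    # CloudFront expects the policy serialized without extra whitespace.
    return json.dumps(policy, separators=(",", ":"))

# Hypothetical distribution domain; one policy covers every segment file.
expires = int(time.time()) + 3600
policy = build_custom_policy(
    "https://d1234example.cloudfront.net/media/video1/*", expires)
encoded = cloudfront_safe_b64(policy.encode("utf-8"))
# The signature itself would be an RSA signature of `policy`, made with the
# CloudFront key pair's private key and passed through the same
# cloudfront_safe_b64() encoding before being placed in the URL or cookie.
```

The same encoded policy and signature can then be reused for every file matching the wildcard.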
As long as the CloudFront distribution is configured for IPv4 only, you can tie the signature to a specific client IP address, allowing access with that signature from only one address (CloudFront now supports IPv6 as an optional feature, but it is not currently compatible with this option). This is fairly aggressive and probably undesirable with a mobile user base, whose source addresses will change whenever devices switch between the carrier's network and Wi-Fi.
Signed URLs still need to be generated for every content link, but with a wildcard policy you only have to generate and sign once; you can then reuse the signature, simply rewriting the URL for each file. That makes this option computationally cheaper than per-object S3 signing, but still clunky. Signed cookies, on the other hand, should "just work" for any matching object.
Of course, adding CloudFront should also improve performance, through caching and a shorter Internet path: the request hops onto the AWS managed network closer to the browser than is typical for requests going directly to S3. When using CloudFront, the browser's requests are routed to whichever of the 60+ global "edge locations" is deemed closest to the browser making the request. CloudFront can serve the same cached object to different users with different signed URLs or cookies, as long as each signature or cookie is valid.
In order to use CloudFront signed cookies, at least part of your application (the part that sets the cookies) must be behind the same CloudFront distribution that points at the bucket. You do this by declaring your application as an additional origin for the distribution and creating a cache behavior for a specific path pattern; requests matching that pattern are routed by CloudFront to your application, which can then respond with the appropriate Set-Cookie: headers.
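As a sketch of what that application endpoint would return: CloudFront looks for three specific cookies when validating a custom-policy signed cookie (CloudFront-Policy, CloudFront-Signature, CloudFront-Key-Pair-Id). The helper below just assembles the Set-Cookie headers; the path prefix and all the values are placeholders, assumed to come from your own signing step:

```python
def signed_cookie_headers(policy_b64: str, signature_b64: str,
                          key_pair_id: str, path: str = "/media/"):
    """Build the three Set-Cookie headers that CloudFront checks when a
    custom-policy signed cookie is used. The policy and signature values
    are assumed to already be in CloudFront's cookie-safe base64 form."""
    attrs = f"Path={path}; Secure; HttpOnly"
    return [
        ("Set-Cookie", f"CloudFront-Policy={policy_b64}; {attrs}"),
        ("Set-Cookie", f"CloudFront-Signature={signature_b64}; {attrs}"),
        ("Set-Cookie", f"CloudFront-Key-Pair-Id={key_pair_id}; {attrs}"),
    ]

# Hypothetical values; in practice these come from your signing step.
headers = signed_cookie_headers("encodedPolicy", "encodedSignature",
                                "APKAEXAMPLEKEY")
```

Because the endpoint is behind the same distribution as the bucket, the browser will automatically present these cookies on every subsequent segment request.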
I am not affiliated with AWS, so don't mistake the following for a sales pitch; I'm just anticipating the next question: CloudFront + S3 is priced such that the cost difference compared to using S3 alone is usually negligible. S3 charges nothing for bandwidth when objects are requested through CloudFront, and CloudFront's bandwidth charge is in some cases slightly lower than the direct S3 charge. Counterintuitive as that may seem, it makes sense that AWS would structure pricing to encourage requests to be spread across its network rather than concentrated on a single S3 region.
Please note that none of the mechanisms above or below can be fully secure against unauthorized "sharing", since the authentication information is necessarily available to the browser and therefore, to a sufficiently determined user, extractable ... but both approaches seem more than sufficient to keep honest users honest, which is all you can ever hope to do. Since the signatures on signed URLs and cookies expire, the window for sharing is limited, and you can spot such patterns by analyzing your CloudFront logs and responding accordingly. Whatever approach you take, keep in mind the importance of staying on top of your logs.
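One simple heuristic for spotting sharing in those logs: count how many distinct client IPs present the same signature. The sketch below assumes CloudFront's tab-separated standard log format with a #Fields: header line; the sample rows are heavily simplified (real logs carry many more fields), and the field names used are placeholders that should be matched against your actual log header:

```python
import io
from collections import defaultdict

# Simplified sample; real CloudFront standard logs have many more fields.
SAMPLE_LOG = """\
#Version: 1.0
#Fields: date time c-ip cs-uri-stem cs-uri-query
2024-05-01\t12:00:00\t203.0.113.5\t/media/video1/seg1.ts\tExpires=1714600000&Signature=abc
2024-05-01\t12:00:01\t198.51.100.9\t/media/video1/seg1.ts\tExpires=1714600000&Signature=abc
2024-05-01\t12:00:02\t203.0.113.5\t/media/video1/seg2.ts\tExpires=1714600000&Signature=abc
"""

def ips_per_signature(log):
    """Map each Signature value to the number of distinct client IPs
    that presented it. One signature seen from many IPs is a hint that
    a signed URL is being shared."""
    fields = None
    seen = defaultdict(set)
    for line in log:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
            continue
        if line.startswith("#") or not line:
            continue
        row = dict(zip(fields, line.split("\t")))
        sig = row["cs-uri-query"].split("Signature=")[-1]
        seen[sig].add(row["c-ip"])
    return {sig: len(ips) for sig, ips in seen.items()}

result = ips_per_signature(io.StringIO(SAMPLE_LOG))
```

Here the signature "abc" shows up from two distinct IPs, which may be benign (carrier-to-Wi-Fi switching) or may indicate sharing; a threshold plus the expiry window keeps false positives manageable.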
A reverse proxy is also a good idea, is probably easy to implement, and should perform perfectly well with no additional data transfer charges or bandwidth problems, as long as the EC2 machines running the proxy are in the same AWS region as the bucket and the proxy is built on solid, efficient code like that found in Nginx or HAProxy.
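A hedged sketch of what such a proxy might look like in Nginx, assuming the auth_request module is available; the bucket name, region, hostnames, and the internal auth endpoint are all placeholders:

```nginx
# Hypothetical reverse proxy for a private bucket in the same region.
server {
    listen 443 ssl;
    server_name media.example.com;

    location /media/ {
        # Delegate the authorization decision to the application.
        auth_request /auth;

        # Anonymous request to S3: the bucket policy trusts this
        # proxy's source IP, so no request signing is required.
        proxy_pass https://example-bucket.s3.us-east-1.amazonaws.com/media/;
        proxy_set_header Host example-bucket.s3.us-east-1.amazonaws.com;
    }

    location = /auth {
        internal;
        proxy_pass http://127.0.0.1:8080/check;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
}
```

The auth_request subrequest pattern keeps the authorization logic in your application while Nginx handles the byte-pushing.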
You do not need to sign anything in this setup, because you can configure the bucket to allow the reverse proxy access to private objects based on its fixed IP address.
In the bucket policy, you do this by granting the s3:GetObject permission to "anonymous" users only when the source IPv4 address matches the IP address of one of your proxies. The proxy then requests objects anonymously (no signing needed) from S3 on behalf of the authorized users. This requires that you not use an S3 VPC endpoint; instead, give the proxy an Elastic IP address, or place it behind a NAT gateway or NAT instance and have S3 trust the NAT's source IP address. If you do use an S3 VPC endpoint, it should be possible to have S3 trust the request simply because it traversed the endpoint, although I have not tested this. (S3 VPC endpoints are optional; if you haven't explicitly configured one, you don't have one, and you probably don't need one.)
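That source-IP grant looks roughly like the following bucket policy statement; the bucket name and proxy address are placeholders, while the aws:SourceIp condition key is the standard mechanism for this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetFromProxyIpOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/media/*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.10/32" }
      }
    }
  ]
}
```

With this in place the bucket stays private to everyone except requests arriving from the proxy's Elastic IP (or the NAT's address).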
Your third option seems weak, if I understand it correctly: an authorized but malicious user receives a key that can then be shared all day long.