Has anyone ever maxed out reads or writes on an Amazon S3 bucket?

Are there known scaling limits for S3? Has anyone ever had so many simultaneous reads or writes that the bucket started returning errors? I'm a bit more interested in writes than reads, since S3 is presumably optimized for reads.

1 answer

Eric's comment already sums it up at a conceptual level, as described in the S3 FAQ under "What happens if traffic from my application suddenly spikes?":

Amazon S3 was designed from the ground up to handle traffic for any Internet application. [...] Amazon S3's massive scale enables us to spread load evenly, so that no individual application is impacted by traffic spikes.

Of course, you still need to account for potential issues and configure [your] application to retry on SlowDown errors (see Amazon S3 Error Best Practices):

As with any distributed system, S3 has protection mechanisms which detect intentional or unintentional over-consumption of resources and react accordingly. SlowDown errors can occur when a high request rate triggers one of these mechanisms. Reducing your request rate will decrease or eliminate errors of this type. Generally speaking, most users will not experience these errors regularly; however, if you would like more information, or are experiencing high or unexpected SlowDown errors, please post to our Amazon S3 developer forum http://developer.amazonwebservices.com/connect/forum.jspa?forumID=24 or sign up for AWS Premium Support http://aws.amazon.com/premiumsupport/. [emphasis mine]
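As a minimal sketch of what "configure your application to retry" can look like in practice, here is one way to let the SDK handle it; using boto3 and its built-in retry modes is my assumption, not something prescribed by the original answer, and the bucket/key names are placeholders:

```python
# Sketch: letting the AWS SDK retry SlowDown (HTTP 503) responses itself.
# Assumes boto3/botocore; bucket, key and body are placeholders.
import boto3
from botocore.config import Config

# The "standard" and "adaptive" retry modes back off automatically on
# throttling responses such as SlowDown; max_attempts caps the attempts.
s3 = boto3.client(
    "s3",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

s3.put_object(Bucket="my-bucket", Key="some/key", Body=b"payload")
```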

Although rare, these slowdowns do of course happen; here is a response from the AWS team illustrating the issue (quite dated by now):

Amazon S3 will return this error when the request rate is high enough that servicing the requests would degrade service for other customers. This error is very rarely triggered. If you do get it, you should back off exponentially. If this error occurs, system resources will be reactively rebalanced/repartitioned to better support the higher request rate. As a result, the period of time during which this error would be thrown should be relatively short. [emphasis mine]
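If you roll your own retry loop instead of relying on the SDK, the "back off exponentially" advice might look roughly like the sketch below; the function name, error-code list and timing constants are my own illustration, not part of the AWS response:

```python
# Illustrative exponential backoff with full jitter for SlowDown errors.
import random
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def put_with_backoff(bucket, key, body, max_attempts=8):
    for attempt in range(max_attempts):
        try:
            return s3.put_object(Bucket=bucket, Key=key, Body=body)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code not in ("SlowDown", "ServiceUnavailable", "503"):
                raise  # not a throttling error, do not retry
            # Exponential backoff with full jitter: 0..(2^attempt * 100 ms)
            time.sleep(random.uniform(0, (2 ** attempt) * 0.1))
    raise RuntimeError(f"still throttled after {max_attempts} attempts")
```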

Your assumption about read vs. write optimization is also confirmed there:

The threshold at which this error is triggered varies and will depend, in part, on the request type and pattern. In general, you'll be able to achieve higher rps with GETs versus PUTs, and with lots of GETs on a small number of keys versus lots of GETs on a large number of keys. When GETting or PUTting a large number of keys, you'll be able to achieve higher rps if the keys are in alphanumeric order versus random/hashed order.
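As a purely illustrative sketch of that key-layout point, here is a toy comparison of lexicographically ordered keys versus random/hashed keys; all paths and helper names are invented for the example, and the performance claim itself is the dated one quoted above:

```python
# Toy illustration: sequential (alphanumeric) keys vs. random/hashed keys.
import hashlib

def sequential_key(i):
    # Zero-padded counter keeps keys in alphanumeric order.
    return f"logs/2024-01-01/{i:010d}.json"

def hashed_key(i):
    # Hash prefix scatters keys in random/hashed order.
    prefix = hashlib.md5(str(i).encode()).hexdigest()[:8]
    return f"logs/{prefix}/2024-01-01/{i:010d}.json"

print(sequential_key(42))  # logs/2024-01-01/0000000042.json
print(hashed_key(42))      # e.g. logs/a1d0c6e8/2024-01-01/0000000042.json
```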


Source: https://habr.com/ru/post/907200/

