As you can see in the Bucket Restrictions and Limitations documentation, it says:

"There is no limit to the number of objects that can be stored in a bucket."
In my experience, a very large number of objects in a single bucket does not affect the performance of retrieving one object by its key (i.e., a GET appears to be constant-time regardless of bucket size).
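As a concrete illustration, here is a minimal sketch using boto3 (the AWS SDK for Python); the bucket and key names are made-up placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Fetching a single object by key is a direct lookup; its latency does not
# depend on how many other objects the bucket holds.
response = s3.get_object(Bucket="my-huge-bucket", Key="some/object/key.txt")
body = response["Body"].read()
```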
Having a very large number of objects also does not affect the speed of listing a given number of objects. The documentation confirms this:

"List performance is not substantially affected by the total number of keys in your bucket."
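Here is a sketch of fetching one page of keys with boto3's list_objects_v2 (again, the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# ListObjectsV2 returns results in pages of up to 1,000 keys; retrieving one
# page takes roughly the same time whether the bucket holds a thousand
# objects or a billion.
page = s3.list_objects_v2(Bucket="my-huge-bucket", MaxKeys=100)
for obj in page.get("Contents", []):
    print(obj["Key"], obj["Size"])
```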
However, I must warn you that most of the S3 management tools I have used (for example, S3Fox) will choke and die a terrible, slow death when trying to access a bucket with a very large number of objects. One tool that does seem to cope well with very large object counts is S3 Browser (it has a free version and a Pro version; I am not affiliated with them in any way).
Using "folders" or prefixes does not change any of these points (receiving and listing a given number of objects is still constant, most of the tools still fall on themselves and freeze).