DynamoDB provisioned throughput is based on a specific unit size and a number of items read or written per second:
In DynamoDB, you specify throughput requirements in terms of capacity units. Use the following guidelines to determine your provisioned throughput:
- One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for items up to 4 KB in size. If you need to read an item larger than 4 KB, DynamoDB consumes additional read capacity units. The total number of read capacity units required depends on the item size and on whether you need eventually consistent or strongly consistent reads.
- One write capacity unit represents one write per second for items up to 1 KB in size. If you need to write an item larger than 1 KB, DynamoDB consumes additional write capacity units. The total number of write capacity units required depends on the item size.
Therefore, when determining your required capacity, you need to know how many items you want to read and write per second, and how large those items are.
Instead of targeting a specific GB/s, you should target a certain number of items to read/write per second. Those are the figures your application needs to meet its performance requirements.
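As a rough illustration, here is a minimal sketch of that capacity math in Python. The 4 KB read and 1 KB write unit sizes come from the guidelines above; the item size and request rates are made-up example numbers.

```python
import math

READ_UNIT_KB = 4    # one strongly consistent read/sec per unit covers 4 KB
WRITE_UNIT_KB = 1   # one write/sec per unit covers 1 KB

def read_capacity_units(item_kb, reads_per_sec, eventually_consistent=True):
    """Units needed to read `reads_per_sec` items of `item_kb` KB each second."""
    units_per_read = math.ceil(item_kb / READ_UNIT_KB)
    if eventually_consistent:
        units_per_read /= 2   # eventually consistent reads cost half
    return reads_per_sec * units_per_read

def write_capacity_units(item_kb, writes_per_sec):
    """Units needed to write `writes_per_sec` items of `item_kb` KB each second."""
    return writes_per_sec * math.ceil(item_kb / WRITE_UNIT_KB)

# Hypothetical workload: 3 KB items, 500 reads/sec, 200 writes/sec.
print(read_capacity_units(3, 500))   # 250.0 read capacity units
print(write_capacity_units(3, 200))  # 600 write capacity units
```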
There are also some DynamoDB limits that apply, but they can be raised upon request:
- US East (N. Virginia) Region:
  - Per table: 40,000 read capacity units and 40,000 write capacity units.
  - Per account: 80,000 read capacity units and 80,000 write capacity units.
- All other regions:
  - Per table: 10,000 read capacity units and 10,000 write capacity units.
  - Per account: 20,000 read capacity units and 20,000 write capacity units.
At 40,000 read capacity units x 4 KB x 2 (eventually consistent) = 320 MB/s.
If my calculations are correct, your requirement is about 100 times higher than this, so it would seem that DynamoDB is not a suitable solution for such high throughput.
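To make that comparison explicit, here is the same arithmetic in a few lines; the ~32 GB/s figure is the requirement implied by the question being answered, not anything fixed by DynamoDB:

```python
max_read_units = 40_000                       # largest per-table limit above
max_mb_per_s = max_read_units * 4 * 2 / 1000  # 4 KB items, x2 for eventually consistent
print(max_mb_per_s)                           # 320.0 MB/s

required_mb_per_s = 32_000                    # ~32 GB/s, the requirement from the question
print(required_mb_per_s / max_mb_per_s)       # 100.0x over the per-table limit
```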
A reality check on these speeds:
The question then becomes: how do you generate that much data per second? Full-duplex 10GFC fibre runs at 2,550 MB/s, so you would need multiple fibre connections to move this data if it is entering or leaving the AWS cloud.
Even 10 Gb Ethernet provides only 10 Gb/s, so transferring 32 GB takes about 26 seconds, and that is just to move one second's worth of data!
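The back-of-the-envelope transfer-time arithmetic, ignoring protocol overhead:

```python
data_gbit = 32 * 8                  # 32 GB of data is 256 gigabits
link_gbit_per_s = 10                # 10 Gb Ethernet line rate
print(data_gbit / link_gbit_per_s)  # 25.6 s, so roughly 26 s before overhead
```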
Bottom line: your data requirements are extremely high. Are you sure they are realistic?