Is AWS RDS Provisioned IOPS really worth it?

As I understand it, RDS Provisioned IOPS is quite expensive compared to standard I/O.

In the Tokyo region, the P-IOPS tier costs $0.15/GB plus $0.12 per provisioned IOPS for a Single-AZ deployment. (Double that for a Multi-AZ deployment...)

For P-IOPS, the minimum storage is 100 GB and the minimum is 1000 IOPS. Thus, the starting cost of P-IOPS is $135 per month, excluding the cost of replicas.
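A quick sanity check of that arithmetic, using the per-month Tokyo prices quoted above (the constant and function names below are mine, not AWS terms):

```python
# Per-month Tokyo-region prices quoted in the question; names are my own
GB_PRICE = 0.15       # $ per GB-month of P-IOPS storage
IOPS_PRICE = 0.12     # $ per provisioned IOPS-month
MIN_GB, MIN_IOPS = 100, 1000   # P-IOPS minimums

def piops_storage_cost(gb=MIN_GB, iops=MIN_IOPS, multi_az=False):
    """Monthly P-IOPS storage bill (storage only, instance not included)."""
    cost = gb * GB_PRICE + iops * IOPS_PRICE
    return cost * 2 if multi_az else cost   # Multi-AZ doubles the price

piops_storage_cost()   # about $135/month at the minimums
```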

In my case, using P-IOPS would cost about 100x more than standard I/O.

This may be a very subjective question, but please share your opinions.

Is the performance of P-IOPS worth the cost, even for the most optimized database on RDS?

or

The AWS website provides some insight into how P-IOPS can benefit performance. Is there a real benchmark?

SELF ANSWER

In addition to what zeroSkillz wrote, I did some more research. Please note, however, that I am not a database specialist. Also, the benchmark and this answer are based on EBS.

According to an article written by Rodrigo Campos, performance really does improve significantly.

Going from 1000 IOPS to 2000 IOPS, read/write performance (including random read/write) roughly doubles. From what zeroSkillz said, a standard EBS volume delivers about 100 IOPS. Imagine the performance improvement when going from 100 IOPS to 1000 IOPS (the minimum for a P-IOPS deployment).

Conclusion

According to the benchmark, the performance/price ratio seems reasonable. For performance-critical situations, I think some people or companies should choose P-IOPS, even at that price.

However, if I were a financial consultant for a small or medium-sized business, I would simply scale up my RDS instances (CPU, memory) gradually until the price/performance no longer beats P-IOPS.

+45
performance amazon-web-services amazon-rds
Sep 13 '13 at 1:57
2 answers

OK. This is a hard question to answer because it does not mention the allocated storage size or any other configuration details. We use RDS, and it has its pros and cons. First, you cannot use ephemeral storage with RDS. You cannot even access the storage device directly when using the RDS service.

Presumably, the storage medium for RDS is a variant of Amazon EBS. Performance for standard I/O depends on the volume size, and many sources say that above 100 GB of allocated storage, Amazon begins to "stripe" the EBS volumes. This gives better average-case access for both reads and writes.

We are currently running around 300 GB of storage and can get about 2K write IOPS and 1K read IOPS roughly 85% of the time over a period of several hours. We use Datadog to log this so we can check. We have seen bursts of up to 4K write IOPS, but nothing sustained at that level.

The main symptom we see on the application side is lock contention when there are not enough IOPS for writes. The number and frequency of these contention events in your application logs will tell you when you are exhausting standard RDS IOPS. You can also use a service like Datadog to monitor IOPS.
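One way to quantify a figure like "85% of the time" from your own monitoring export is a simple threshold count. This is a sketch with made-up data; the function name and the sample values are mine:

```python
def fraction_above(samples, threshold):
    """Fraction of monitoring samples at or above a threshold.

    `samples` could be per-minute WriteIOPS values exported from
    Datadog or CloudWatch; the data below is illustrative only.
    """
    return sum(1 for s in samples if s >= threshold) / len(samples)

# Toy data: 20 one-minute WriteIOPS samples
write_iops = [2100, 1900, 2500, 800, 2200] * 4
busy_share = fraction_above(write_iops, 2000)  # share of minutes at 2K+ IOPS
```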

The problem with provisioned IOPS is that it assumes steady-state read/write volumes will make it cost-effective. That is almost never a realistic use case, and it is the reason Amazon later introduced services to address it. The only guarantee you get with P-IOPS is peak throughput: if you do not use it, you still pay for it.
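The "pay for it anyway" point can be made concrete: divide the provisioned-IOPS bill by the IOPS you actually consume on average. Illustrative numbers and a helper of my own naming, using the Tokyo price from the question:

```python
IOPS_PRICE = 0.12   # $ per provisioned IOPS-month (Tokyo figure from the question)

def effective_iops_price(provisioned, avg_used):
    """Effective $ per IOPS actually consumed; helper name is mine."""
    return provisioned * IOPS_PRICE / avg_used

# Provision 1000 IOPS but average only 250 used:
effective_iops_price(1000, 250)   # about $0.48, i.e. 4x the list price per used IOPS
```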

If you can tolerate working with replicas, we recommend running a read replica as a NON-RDS instance, i.e. on a regular EC2 instance. You can get better read IOPS at a much lower cost by managing the replica yourself. We even set up replicas outside AWS using stunnel, with SSDs as the primary block device, and we get ridiculous read speeds for our reporting systems - literally 100 times faster than what we get from RDS.
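For reference, the stunnel side of such a setup might look roughly like this on the off-AWS replica host. This is only a sketch: the service name, hostname, and ports are invented for illustration, and comments sit on their own lines per stunnel's config syntax:

```ini
; stunnel client config on the external replica host (illustrative)
client = yes

[mysql-replication]
; the local replica's mysqld connects to this local port
accept = 127.0.0.1:3307
; TLS tunnel endpoint in front of the master (hostname is made up)
connect = master.example.com:443
```

The replica is then pointed at 127.0.0.1 port 3307 instead of the master's real address, so all replication traffic flows through the encrypted tunnel.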

I hope this gives some real-world detail. In short, in my opinion: unless you need to guarantee a certain level of throughput on an ongoing basis (or your application will not work without it), there are better alternatives to provisioned IOPS, including separating reads from writes with read replicas, memcache, etc.

+19
Mar 17 '14 at 12:36

So, I just got off a call with an Amazon systems engineer, and he had some interesting information related to this question. (In other words, this is second-hand knowledge.)

EBS volumes can handle bursts of traffic well, but will eventually throttle down to about 100 IOPS. The engineer suggested several alternatives:

  • some customers use several small EBS volumes and stripe them. This improves IOPS and is the most cost-effective option. You do not need to worry about mirroring, because EBS is mirrored behind the scenes.

  • some customers use ephemeral storage on an EC2 instance (or an RDS instance) and keep several slaves to provide "durability". Ephemeral storage is local storage and much faster than EBS. You can even use SSD-backed EC2 instances.

  • some customers configure the master to use provisioned IOPS or ephemeral SSD storage, and then use standard EBS storage for the slave(s). Expected performance is good, but failover performance is degraded (though still available).
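For the first option (striping several EBS volumes), a rough sketch with Linux md RAID 0 might look like this. The device names and volume count are illustrative, this requires root on a plain EC2 instance (you cannot do it on RDS itself), and you should verify the steps against current AWS documentation:

```
# Stripe four attached EBS volumes into one RAID 0 array (illustrative)
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi

mkfs.ext4 /dev/md0                  # filesystem on the striped array
mkdir -p /data && mount /dev/md0 /data
```

RAID 0 (striping, no parity) is the sensible level here precisely because, as the engineer noted, EBS already mirrors each volume behind the scenes.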

In any case, if you decide to pursue any of these strategies, I would double-check with Amazon to make sure you have not overlooked any important steps. As I said, this is second-hand knowledge.

+26
Oct 08 '13 at 19:30
