As I understand it, RDS Provisioned IOPS is quite expensive compared to standard storage.
In the Tokyo region, the P-IOPS rate is $0.15/GB and $0.12/IOPS for a standard (Single-AZ) deployment (double the price for a Multi-AZ deployment...).
For P-IOPS, the minimum storage is 100 GB and the minimum provisioned rate is 1,000 IOPS. Thus, the starting cost of P-IOPS is $135 per month, excluding replica costs.
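To make the arithmetic explicit, here is a quick cost sketch using the rates quoted above (rough numbers only; adjust them for your own region and deployment):

```python
# Rough monthly cost of the minimum P-IOPS configuration, using the
# Tokyo-region rates quoted above (Single-AZ; roughly double for Multi-AZ).
storage_gb = 100          # minimum storage required for P-IOPS
provisioned_iops = 1000   # minimum provisioned IOPS
price_per_gb = 0.15       # $/GB-month
price_per_iops = 0.12     # $/IOPS-month

monthly_cost = storage_gb * price_per_gb + provisioned_iops * price_per_iops
print(monthly_cost)       # 100*0.15 + 1000*0.12 = 15 + 120 = 135.0
```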
In my case, using P-IOPS would cost about 100x more than using standard storage.
This may be a very subjective question, but please share your opinion:
For a well-optimized database on RDS, is P-IOPS performance worth the cost?
or
The AWS website provides some insight into how P-IOPS can benefit performance. Are there any real-world benchmarks?
SELF ANSWER
In addition to what zeroSkillz wrote, I did some more research. Please note, however, that I am not a database expert, and that the benchmark and this answer are based on EBS.
According to an article written by Rodrigo Campos, performance really does improve significantly.
Going from 1,000 IOPS to 2,000 IOPS, read/write throughput (including random reads/writes) roughly doubles. According to zeroSkillz, a standard EBS volume delivers about 100 IOPS. Imagine the improvement when 100 IOPS becomes 1,000 IOPS (the minimum for a P-IOPS deployment).
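As a rough back-of-the-envelope comparison (the ~100 IOPS baseline and the 1,000 IOPS minimum are the figures mentioned above; the assumption that throughput scales roughly linearly with IOPS is mine, loosely supported by the 1,000 to 2,000 IOPS doubling in the benchmark):

```python
# Very rough estimate of the I/O headroom gained by moving from a
# standard EBS volume (~100 IOPS) to the minimum P-IOPS configuration
# (1,000 IOPS), assuming throughput scales roughly linearly with IOPS.
standard_iops = 100       # approximate standard EBS volume (per zeroSkillz)
minimum_piops = 1000      # minimum P-IOPS configuration

speedup = minimum_piops / standard_iops
print(f"Roughly {speedup:.0f}x more I/O operations per second")  # ~10x
```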
Conclusion
According to the benchmark, the performance/price ratio seems reasonable. For performance-critical situations, I think some people or companies should choose P-IOPS, even though it costs roughly 100x more.
However, if I were advising a small or medium-sized business on costs, I would simply scale up my RDS instances gradually (CPU, memory) until the performance/price trade-off no longer beats that of P-IOPS.