REST service and race conditions

Imagine the problem: I have a REST service implemented with Java, MySQL, and Spring, speaking HTTP/JSON. The REST clients are mobile applications, so it is possible that someone will decompile the app and recover the REST API. (Yes, the code is obfuscated, etc., but still.)

Problem: There is a POST method for sending money to another user of the application. I am worried that someone might recover the API, write a bot, and fire this POST request 500 or 5,000 or even 50,000 times per second. As a result, they could send more money than they actually have: if 1,000 requests are processed concurrently, the balance check may succeed for all 1,000 of them, even though the account only holds enough money for, say, 50.
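The failure mode described above is a classic check-then-act race. A minimal single-JVM sketch (account and amounts are made up for illustration) shows how closing the window between "check balance" and "debit" with an atomic compare-and-set keeps the account from going negative, no matter how many concurrent requests arrive:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class TransferRace {
    // Balance in cents; enough for exactly 50 transfers of 1.00.
    static final AtomicLong balance = new AtomicLong(50_00);

    // Check-then-debit in one atomic step: retry the CAS until it either
    // succeeds or the balance is insufficient. No window for a race.
    static boolean debit(long amount) {
        while (true) {
            long current = balance.get();
            if (current < amount) return false;           // insufficient funds
            if (balance.compareAndSet(current, current - amount)) return true;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(32);
        AtomicInteger ok = new AtomicInteger();
        for (int i = 0; i < 1000; i++) {                  // 1000 concurrent "bot" requests
            pool.submit(() -> { if (debit(1_00)) ok.incrementAndGet(); });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("successful debits: " + ok.get());
        System.out.println("final balance: " + balance.get());
    }
}
```

Exactly 50 of the 1,000 debits succeed and the balance ends at 0. Of course, an `AtomicLong` only works inside one JVM, which is exactly why the multi-server case below needs something stronger.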

So basically this is the standard race condition, just with multiple threads. The extra problem is that I have several servers, and they are in no way connected to each other. Thus, 300 requests can go to server A, 300 to server B, and the rest to server C.

The best idea I have is to use something like "SELECT ... FOR UPDATE" and serialize at the database level. However, I would like to consider other solutions.
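Since the database is the one thing all the servers share, a row lock there does serialize concurrent transfers regardless of which app server handles the request. A hedged JDBC sketch of that idea (table and column names `accounts`, `id`, `balance` are assumptions, not the asker's actual schema; `main` only prints the locking statement since there is no database here):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MoneyTransfer {
    static final String LOCK_SQL   = "SELECT balance FROM accounts WHERE id = ? FOR UPDATE";
    static final String DEBIT_SQL  = "UPDATE accounts SET balance = balance - ? WHERE id = ?";
    static final String CREDIT_SQL = "UPDATE accounts SET balance = balance + ? WHERE id = ?";

    // Transfers amount from one account to another inside a single transaction.
    // FOR UPDATE takes an exclusive row lock on the sender's row, so every other
    // transaction touching that account blocks until we commit or roll back --
    // the balance check and the debit become one atomic unit across all servers.
    static boolean transfer(Connection con, long from, long to, long amount) throws SQLException {
        con.setAutoCommit(false);
        try {
            try (PreparedStatement lock = con.prepareStatement(LOCK_SQL)) {
                lock.setLong(1, from);
                try (ResultSet rs = lock.executeQuery()) {
                    if (!rs.next() || rs.getLong(1) < amount) { con.rollback(); return false; }
                }
            }
            try (PreparedStatement debit = con.prepareStatement(DEBIT_SQL);
                 PreparedStatement credit = con.prepareStatement(CREDIT_SQL)) {
                debit.setLong(1, amount);  debit.setLong(2, from);  debit.executeUpdate();
                credit.setLong(1, amount); credit.setLong(2, to);   credit.executeUpdate();
            }
            con.commit();
            return true;
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }

    public static void main(String[] args) {
        System.out.println(LOCK_SQL); // no DB connected here; just show the statement
    }
}
```

One caveat worth knowing: if a transfer can lock two rows (both sender and receiver), always acquire the locks in a consistent order (e.g. lower account id first) to avoid deadlocks.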

Any ideas or suggestions?

+6
2 answers

You have several options:

  • Rely on the ACID guarantees of your database (MySQL in your case). Assuming you are using the InnoDB engine, you need to select the correct transaction isolation level ( SET TRANSACTION syntax ) in combination with the right locking mechanism ( SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE locking reads). You need to understand these concepts well in order to make the right choice. It is possible that the right isolation level alone will already prevent the race condition, even without explicit locking. The downside is that you trade some scalability for consistency and tie your application to the RDBMS, so it will be harder to switch to NoSQL later, for example.

  • Split your backend into a web tier and a service tier (an option suggested by atk in the comments). This lets you scale web-tier instances independently while keeping a single service-tier instance. Having a single service-tier instance allows you to use Java synchronization mechanisms such as synchronized blocks or ReadWriteLock . Although this solution will work, I would not recommend it, as it removes the scalability of your service tier.

  • This is an improvement on the previous option: use a Distributed Lock Manager instead of the in-process Java synchronization mechanisms. That allows you to scale both the web tier and the service tier independently.
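To make the last option concrete, here is a sketch of the contract a distributed lock manager gives you. The in-memory implementation is only a stand-in so the example runs without infrastructure; a real deployment would back the same interface with ZooKeeper, etcd, or Redis. The fencing-token detail (release requires the token returned by acquire) is what stops a stale holder from unlocking someone else:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Minimal DLM contract: acquire returns a token, and release requires that
// same token, so only the current holder can unlock the resource.
interface LockManager {
    String tryAcquire(String resource);          // null if the lock is already held
    boolean release(String resource, String token);
}

// In-memory stand-in for a real distributed store (ZooKeeper, etcd, Redis).
// putIfAbsent/remove(key, value) mirror the atomic ops a real backend provides.
class InMemoryLockManager implements LockManager {
    private final ConcurrentHashMap<String, String> held = new ConcurrentHashMap<>();

    public String tryAcquire(String resource) {
        String token = UUID.randomUUID().toString();
        return held.putIfAbsent(resource, token) == null ? token : null;
    }

    public boolean release(String resource, String token) {
        return held.remove(resource, token);     // only succeeds for the token holder
    }
}

public class DlmDemo {
    public static void main(String[] args) {
        LockManager lm = new InMemoryLockManager();
        String token = lm.tryAcquire("account:42");
        System.out.println("acquired: " + (token != null));
        System.out.println("second acquire blocked: " + (lm.tryAcquire("account:42") == null));
        System.out.println("released: " + lm.release("account:42", token));
    }
}
```

With a real backend you would also attach a TTL/lease to each lock so a crashed server cannot hold a lock forever.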

+2

For mission-critical applications, it is best to have several layers of locking mechanisms.

"SELECT ... FOR UPDATE" is a good way to do this, but they are quite expensive, and when you try to trick it with Charles, you will see that your top API stack will suffer and that a simple mechanism will greatly distort your infrastructure, like DDoS event.

First, rate-limit at the load balancer / proxy level: cap the number of requests allowed per interval from a single IP address.
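If the proxy is nginx, that first layer can be a couple of lines with the `limit_req` module. This is an illustrative sketch: the zone name, rate, path, and upstream name are all made-up values to show the shape, not a tuned config:

```nginx
# Track clients by IP; allow at most 5 transfer requests per second each.
limit_req_zone $binary_remote_addr zone=transfer:10m rate=5r/s;

server {
    location /api/transfer {
        # Absorb a small burst, reject the rest with 503 immediately.
        limit_req zone=transfer burst=10 nodelay;
        proxy_pass http://backend;
    }
}
```

This never even lets a 50,000-requests-per-second bot reach your application servers, which is the whole point of layering the defenses.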

Then apply a shared cache-level lock, where all of your boxes lock on specific keys for whichever critical transaction you want to protect. For example, you can use the Redis GETSET or INCR commands to atomically set a flag before entering the critical code path. Reject everything else immediately, so bad actors cannot tie up your CPU / memory.
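The INCR-as-flag pattern looks like this in Java. The map-backed counter here is only an emulation so the example runs standalone; in production each `tryEnter`/`exit` would be a Redis `INCR` / `DEL` (with a key TTL as a safety net), which is atomic server-side and therefore shared by all app servers:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Per-key gate emulating the Redis INCR pattern from the answer: the first
// caller whose increment returns 1 enters the critical path; everyone else is
// rejected immediately instead of queueing and burning CPU/memory.
public class PerKeyGate {
    private final ConcurrentHashMap<String, AtomicLong> counters = new ConcurrentHashMap<>();

    boolean tryEnter(String key) {
        long n = counters.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
        return n == 1;                 // only the first caller for this key proceeds
    }

    void exit(String key) {
        counters.remove(key);          // stands in for Redis DEL / key expiry
    }

    public static void main(String[] args) {
        PerKeyGate gate = new PerKeyGate();
        String key = "transfer:user42";           // hypothetical per-user transfer key
        System.out.println("first enters: " + gate.tryEnter(key));
        System.out.println("duplicate rejected: " + !gate.tryEnter(key));
        gate.exit(key);
        System.out.println("after exit, next enters: " + gate.tryEnter(key));
    }
}
```

Keying the gate per user (or per account) means one abusive client is throttled without affecting anyone else's transfers.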

You can also implement something like an APC-style local cache (checked before ever hitting the Redis / Memcached cluster) to take a similar lock on each box. This is faster because there are no network round-trips.

All of the above layers sit in front of the final "SELECT ... FOR UPDATE" at the database.

0

Source: https://habr.com/ru/post/987317/
