Taking the question to be about what isolation level will make the sample code work in its current form, rather than about the best way to solve the problem the sample code addresses: you need guarantees of at least REPEATABLE READ.
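To make the discussion concrete, here is a minimal sketch of the check-then-claim pattern I'm assuming the sample code follows; the original code isn't reproduced here, so the shows table and its capacity/seats_sold columns are invented:

    -- Hypothetical check-then-claim pattern; table and column names are invented.
    BEGIN;

    -- Step 1: check availability.
    SELECT capacity - seats_sold AS seats_available
      FROM shows
     WHERE show_id = 1;

    -- Step 2: if the application saw seats_available > 0, claim a seat.
    UPDATE shows
       SET seats_sold = seats_sold + 1
     WHERE show_id = 1;

    COMMIT;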
In databases that use strict two-phase locking (S2PL) for concurrency control, READ COMMITTED transactions release shared locks at the end of each statement, or even sooner. So between the time transaction A checks availability and the time it claims the seats, transaction B can slip in and perform the same check without either transaction failing. Transaction A may briefly block transaction B, but both will perform their updates, and you can oversell.
In databases that use multiversion concurrency control (MVCC), reads don't block writes and writes don't block reads. Under READ COMMITTED, each statement uses a fresh snapshot of the database based on what has committed, and in at least some of them (I know this to be true of PostgreSQL) concurrent writes are allowed to proceed without error. So even if transaction A were in the middle of updating the count of seats sold, or had finished the update without yet committing, transaction B would see the old count and proceed with its own update. When it attempted the update it might block waiting for the earlier update, but once that committed it would find the new version of the row, check whether it still satisfies its selection criteria, update it if it does (or skip it if not), and commit without error. So again, you oversell.
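Using the invented schema from the sketch above, here is roughly how that plays out in PostgreSQL at READ COMMITTED, with two concurrent bookers and only one seat left:

    -- Two concurrent sessions, A and B, both at READ COMMITTED in PostgreSQL.
    -- Assume exactly one seat remains for show_id = 1.

    -- Session A:
    BEGIN;
    SELECT capacity - seats_sold FROM shows WHERE show_id = 1;   -- returns 1

    -- Session B, meanwhile:
    BEGIN;
    SELECT capacity - seats_sold FROM shows WHERE show_id = 1;   -- also returns 1

    -- Session A claims the seat:
    UPDATE shows SET seats_sold = seats_sold + 1 WHERE show_id = 1;

    -- Session B tries the same and blocks behind A's uncommitted update:
    UPDATE shows SET seats_sold = seats_sold + 1 WHERE show_id = 1;

    -- Session A:
    COMMIT;   -- B wakes up, re-checks its WHERE clause against the new row
              -- version, which still matches, so its update is applied

    -- Session B:
    COMMIT;   -- no error; seats_sold now exceeds capacity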
So I assume the answer is Q2, if you decide to rely on transaction isolation. The problem can be solved at a lower isolation level by modifying the sample code to take explicit locks, but that generally results in more blocking than using an isolation level strict enough to handle it automatically.
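For comparison, a sketch of the explicit-lock variant just mentioned (same invented schema as above): SELECT ... FOR UPDATE at READ COMMITTED forces concurrent bookers of the same show to queue, so the availability check and the claim both see the current row.

    -- Explicit-lock variant, still at READ COMMITTED (invented schema as before).
    BEGIN;

    SELECT capacity - seats_sold AS seats_available
      FROM shows
     WHERE show_id = 1
       FOR UPDATE;              -- row lock held until commit

    -- Only if seats_available > 0:
    UPDATE shows
       SET seats_sold = seats_sold + 1
     WHERE show_id = 1;

    COMMIT;

By contrast, running the unmodified check-then-claim code at REPEATABLE READ or SERIALIZABLE in PostgreSQL would make the second transaction fail with a serialization error instead of overselling, leaving the application to retry it.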