If you look at MongoDB's readers-writer lock, you will find that it is a completely different type of animal from the database locking that MySQL refers to when it uses the phrase "row-level locking."
A readers-writer lock protects access to shared memory and is therefore extremely short-lived (on the order of microseconds). Since MongoDB operations are atomic only at the document level, these locks (in traditional databases they are sometimes referred to as latches and are used to protect access to indexes) are held only for as long as it takes to update a single document in memory.
A typical "database lock," by contrast, exists until the running transaction has committed or rolled back. Since RDBMS transactions can span multiple operations across many tables, those locks are usually much longer-lived and therefore have to be much more granular to allow other work to proceed concurrently.
Doesn't that mean that, theoretically, MongoDB is slower than MySQL for concurrent access?
Not necessarily: depending on your exact workload it can be much faster, slightly faster, or slower. It all depends on the types of operations you perform, your available physical resources, the structure of your data, and the needs of your application.
Applications that write a lot of data to MongoDB are typically limited primarily by the available disk I/O bandwidth. Only when the available disk bandwidth exceeds the rate at which the application writes to the database will you see lock concurrency become a factor with MongoDB. With relational databases, because locks live much longer, concurrency can become a factor much earlier, even with a relatively small volume of writes.
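If you want to see which limit you are actually hitting, one rough check is to compare MongoDB's lock-queue counters against the disk utilisation reported by your OS tools. The sketch below uses pymongo against a local server; the exact serverStatus field names vary by MongoDB version and storage engine, so treat the keys as assumptions to verify against your own output:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
status = client.admin.command("serverStatus")

# globalLock.currentQueue shows readers/writers waiting on the lock; if these
# stay near zero while disk utilisation is high, the workload is I/O bound
# rather than lock bound.
global_lock = status.get("globalLock", {})
print("clients waiting on lock:", global_lock.get("currentQueue"))
print("active clients:", global_lock.get("activeClients"))
```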