Is there a connection between object size and lock performance in Java?

The famous Java Concurrency in Practice, Section 2.4, states that the built-in lock approach, unlike explicit locks, was a poor design decision because it is confusing, and because "... it forces JVM implementors to make tradeoffs between object size and locking performance". Can someone explain how this locking design affects object size and performance?
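To make the comparison concrete, here is a minimal sketch (the class and method names are mine, not from the book) of the two styles being contrasted: the intrinsic lock that every Java object carries, versus an explicit java.util.concurrent.locks.ReentrantLock:

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private long count = 0;

    // Intrinsic lock: any object can serve as a monitor, which is why
    // every object header must be able to hold lock state.
    public synchronized void incrementIntrinsic() {
        count++;
    }

    // Explicit lock: the lock state lives in a dedicated lock object,
    // so plain objects would not need to reserve header space for it.
    private final ReentrantLock lock = new ReentrantLock();

    public void incrementExplicit() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }
}
```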

2 answers

Well, since every object can be locked, each object must have enough space to store all the information needed for locking.

This is pretty unattractive, because the vast, vast majority of objects will never be locked, so all that space would be wasted. In practice, HotSpot solves this by using 2 bits in the object header to record the object's lock state and reusing the rest of the header word depending on those two bits.
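If you want to see this header reuse for yourself, OpenJDK's JOL tool can print an object's layout, including the mark word that holds the lock bits. A small sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath:

```java
import org.openjdk.jol.info.ClassLayout;

public class HeaderDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // Layout before locking: the mark word is in its unlocked state.
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
        synchronized (o) {
            // While the monitor is held, the mark word's low bits change
            // to reflect the lock state.
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}
```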

Then there is all the biased/unbiased locking material; you can start reading about it here. The HotSpot documentation is not what I would call extensive, but locking and object headers are covered better than most topics. When in doubt: read the source code.

PS: We have a similar problem with each object's identity hash code. "Just use the memory address" doesn't work if your GC moves objects around. (But unlike locking, there is no real alternative if we want this functionality.)
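A small illustration of that point: the identity hash is computed lazily and then cached (in HotSpot, in the object header), so it must stay stable even if a moving collector relocates the object. The class name here is just for the example:

```java
public class IdentityHashDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // First call computes the identity hash and stores it in the header.
        int before = System.identityHashCode(o);
        System.gc(); // a moving collector may relocate the object
        int after = System.identityHashCode(o);
        // The value stays stable, which is why "just use the address"
        // does not work once the GC moves objects around.
        System.out.println(before == after); // always true
    }
}
```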


The most efficient locks use a native word size, e.g. a 32-bit field. However, you don't want to add 4 bytes to every object, so instead, AFAIK, only 1 bit is used; the trade-off is that setting that bit is more expensive than setting a whole word.


Source: https://habr.com/ru/post/904676/

