Java, volatiles and memory barriers on the x86 architecture

This is rather a theoretical question. I'm not sure whether all the concepts, compiler behavior, etc. are up to date and still in use, but I would like confirmation that I correctly understand some of the concepts I am trying to learn.

Language is Java.

From what I have understood so far, on the x86 architecture, StoreLoad barriers (regardless of the exact processor instructions used to implement them) are placed after volatile writes to make them visible to subsequent volatile reads in other threads (since x86 does not otherwise guarantee that newer reads always see older writes) (link: http://shipilev.net/blog/2014/on-the-fence-with-dependencies/ )
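
To make the scenario concrete, here is a minimal sketch of my own (not taken from the linked post) of the pattern these barriers support: a plain field published through a volatile flag, where the store to the volatile flag is the one followed by the StoreLoad on x86:

    class VisibilityExample {
        static volatile boolean ready; // volatile flag: its store is followed by the barrier
        static int payload;            // plain field published via the flag

        static void writer() {
            payload = 42;   // plain store
            ready = true;   // volatile store; the StoreLoad would be emitted after this
        }

        static void reader() {
            if (ready) {                     // volatile load in another thread
                System.out.println(payload); // guaranteed to print 42 once ready is seen as true
            }
        }
    }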

Now from here ( http://jpbempel.blogspot.it/2013/05/volatile-and-memory-barriers.html ) I see that:

    public class TestJIT {

        private volatile static int field1;
        private static int field2;
        private static int field3;
        private static int field4;
        private static int field5;
        private volatile static int field6;

        private static void assign(int i) {
            field1 = i << 1; // volatile
            field2 = i << 2;
            field3 = i << 3;
            field4 = i << 4;
            field5 = i << 5;
            field6 = i << 6; // volatile
        }

        public static void main(String[] args) throws Exception {
            for (int i = 0; i < 10000; i++) {
                assign(i);
            }
            Thread.sleep(1000);
        }
    }

the resulting assembly has a StoreLoad only after the assignment to field6, and not after the assignment to field1, which is, however, also volatile.

My questions:

1) Does what I have written so far make sense? Or am I completely misinterpreting something?

2) Why does the compiler omit the StoreLoad after the assignment to field1? Is this an optimization? But does it have any drawbacks? For example, could another thread that starts after the assignment to field1 still read the old value of field1, even though it has actually been changed?

+5
3 answers

1) Does what I have written so far make sense? Or am I completely misinterpreting something?

I think you have got everything right.

2) Why does the compiler omit the StoreLoad after the assignment to field1? Is this an optimization? But does it have any drawbacks?

Yes, this is an optimization, but it is quite difficult to get right.

The JMM Cookbook by Doug Lea actually shows an example of the recommended barriers for the case of two consecutive volatile stores: instead of a StoreLoad after each of them, there is a StoreStore (a no-op on x86) between the two stores and a StoreLoad only after the second one. However, the Cookbook notes that the related analysis can be fairly involved.
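
Applied to the assign method from the question, the placement described above would look roughly like this (the barrier comments are my own annotation of the question's code):

    private static void assign(int i) {
        field1 = i << 1; // volatile store
        field2 = i << 2; // plain stores
        field3 = i << 3;
        field4 = i << 4;
        field5 = i << 5;
        // StoreStore barrier (a no-op on x86) between the two volatile stores
        field6 = i << 6; // volatile store
        // StoreLoad barrier (a real instruction on x86) only after the last volatile store
    }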

The compiler should be able to prove that a volatile read cannot happen in synchronization order between the write to field1 and the write to field6. I am not sure whether this would be feasible (for the current HotSpot JIT) if TestJIT were slightly modified so that a comparable number of volatile loads were executed in another thread at the same time.
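
A sketch of the kind of modification I mean (the reader thread below is my own illustration, not part of the original TestJIT): with a second thread doing volatile loads of field1 concurrently, the JIT would have to prove that none of those loads can fall between the two volatile writes in synchronization order.

    // Hypothetical variation of TestJIT's main with a concurrent volatile reader.
    public static void main(String[] args) throws Exception {
        Thread reader = new Thread(() -> {
            long sum = 0;
            for (int i = 0; i < 10_000; i++) {
                sum += field1; // volatile loads racing with the stores in assign()
            }
            System.out.println(sum); // keep the loads from being optimised away
        });
        reader.start();
        for (int i = 0; i < 10000; i++) {
            assign(i);
        }
        reader.join();
    }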

For example, could another thread that starts after the assignment to field1 still read the old value of field1, even though it has actually been changed?

A volatile load that follows the volatile store in synchronization order should not be allowed to return the old value. So, as mentioned above, I think the JIT only gets away with this because it does not see any volatile loads.

Update

Changed the description of the JMM Cookbook example, since kRs pointed out that I had mistaken StoreStore for StoreLoad. The essence of the answer has not changed.

+4

Why does the compiler omit StoreLoad after field1?

For the volatile guarantees, only the first load and the last store are required to act as barriers.

Is this an optimization?

If it does this, that is the most likely reason.

But does it have any drawbacks?

Only if you rely on there being two barriers, i.e. if you need to see that field1 changed before field6 changed more reliably than just by chance.

can still read the old value of field1, even though it has actually been changed?

Yes, although you will not have a way to detect that this happened unless you want to see the new value even though the other fields have not been set yet.
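
As I read it, the reliance in question is on reliably catching the intermediate state in which field1 already holds a new value while field6 still holds an old one; a hypothetical watcher of that kind (my own illustration, not from the question) could look like this:

    // Hypothetical reader trying to observe field1 updated while field6 is still old.
    static void watcher() {
        int f1 = field1; // volatile load
        int f6 = field6; // volatile load
        if (f1 != 0 && f6 == 0) {
            // field1's new value is visible while field6 still holds its initial value.
            // Without a barrier after the store to field1, how often this window is
            // observable is essentially a matter of luck.
            System.out.println("saw field1 change before field6");
        }
    }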

0

To answer question (1): you are right in everything you said about memory barriers etc., although the explanation is incomplete (a memory barrier orders ALL loads/stores that come before it, not just the volatile ones). The code example, however, is a bit iffy.

Volatile operations are meant to order the other operations of your thread around them. Using a volatile operation at the very beginning of your code like this is redundant, because it gives no meaningful guarantee about ordering (I mean, it does give guarantees, they are just very fragile).

Consider this example:

    public void thread1() {
        // no guarantees about ordering
        counter1 = someVal; // some non-volatile store
        counter2 = someVal; // some non-volatile store
    }

    public void thread2() {
        flag += 1; // some volatile operation
        System.out.println(counter1);
        System.out.println(counter2);
    }

No matter what we do in thread2, there is absolutely no guarantee about what happens in thread1: it is free to do almost anything it wants. Even if you use volatile operations in thread1 like that, the ordering will not be visible to thread2.

To fix this, we need to order the writes in thread1 with a memory barrier (i.e. a volatile operation):

    public void thread1() {
        counter1 = someVal; // some non-volatile store
        counter2 = someVal; // some non-volatile store
        // now we use a volatile write,
        // which ensures the ordering of our writes
        flag = true; // volatile operation
    }

    public void thread2() {
        // thread1 has already ordered the memory operations (behind the flag),
        // therefore we don't actually need another memory barrier here
        if (flag) {
            // both counters have the right value now
        }
    }

In this example the ordering is handled by thread1, but it hinges on the state of flag. So we only need to check the state of flag, and we do not need another memory barrier for that read (we still have to read the volatile field, it just does not need its own barrier).

So, to answer your question (2): the JVM expects you to use a volatile operation to order the preceding operations of a given thread. The reason your first volatile write does not get a memory barrier is that it makes no difference to whether your code works (there may be situations where it could, but I cannot think of any, let alone one where it would be good style).
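
For completeness, here is a self-contained version of that flag pattern with the field declarations spelled out (my own sketch; the names match the fragments above, and the stored values are arbitrary):

    class FlagPublication {
        static int counter1;
        static int counter2;
        static volatile boolean flag; // the volatile field that orders the publication

        static void thread1() {
            counter1 = 1; // non-volatile stores
            counter2 = 2;
            flag = true;  // volatile store: the writes above are published behind it
        }

        static void thread2() {
            if (flag) {   // volatile load
                // both counters are guaranteed to hold their written values here
                System.out.println(counter1 + " " + counter2);
            }
        }
    }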

0

Source: https://habr.com/ru/post/1247777/

