Detailed semantics of volatile regarding timeliness of visibility

Consider a volatile int sharedVar. We know that the JLS gives us the following guarantees:

  • every action of the writing thread w that precedes (in program order) its write of the value i to sharedVar happens-before that write action;
  • the write of i to sharedVar by w happens-before the successful read of i from sharedVar by the reading thread r;
  • the successful read of i from sharedVar by r happens-before all subsequent actions of r in program order.

However, there is still no wall-clock guarantee as to when the reading thread will observe the value i. An implementation that simply never lets the reading thread see that value still satisfies this contract.
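For concreteness, here is a minimal sketch of the kind of writer/reader pair these guarantees describe (the class, method names, and payload field are mine, not from the question); the comments mark where the happens-before chain applies and where no timing is promised:

 // Hypothetical illustration of the three guarantees listed above.
 class Publication {
     static volatile int sharedVar;
     static int payload;                   // deliberately non-volatile

     static void writer() {                // thread w
         payload = 42;                     // (1) precedes the volatile write in program order
         sharedVar = 1;                    // (2) the volatile write of i == 1
     }

     static void reader() {                // thread r
         if (sharedVar == 1) {             // (3) a read that successfully observes i == 1...
             System.out.println(payload);  // ...is guaranteed to print 42, by the chain (1)-(2)-(3)
         }
         // Nothing above says WHEN, in wall-clock terms, the read in (3) must
         // start observing 1 rather than the initial value 0.
     }
 }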

I have thought about this for a while and I cannot see any loophole, though I suppose there must be one. Please point out the loophole in my reasoning.

+21
java volatile java-memory-model
Aug 01 '12
6 answers

It turns out that the answers and subsequent discussions only strengthened my initial reasoning. I now have something in the way of proof:

  • take a run in which the reading thread completes entirely before the writing thread starts to write;
  • note the synchronization order established by that particular run;
  • now shift the threads in wall-clock time so that they run in parallel, but preserve the same synchronization order.

Since the Java memory model makes no reference to wall-clock time, nothing stands in the way of this. You now have two threads running in parallel while the reading thread observes none of the actions performed by the writing thread. Q.E.D.

Example 1: one writing thread, one reading thread

To make this finding as sharp and concrete as possible, consider the following program:

 // needs: import java.util.Arrays;
 static volatile int sharedVar;

 public static void main(String[] args) throws Exception {
     final long startTime = System.currentTimeMillis();
     final long[] aTimes = new long[5], bTimes = new long[5];
     final Thread a = new Thread() {
         public void run() {
             for (int i = 0; i < 5; i++) {
                 sharedVar = 1;
                 aTimes[i] = System.currentTimeMillis() - startTime;
                 briefPause();
             }
         }
     },
     b = new Thread() {
         public void run() {
             for (int i = 0; i < 5; i++) {
                 bTimes[i] = sharedVar == 0 ? System.currentTimeMillis() - startTime : -1;
                 briefPause();
             }
         }
     };
     a.start(); b.start();
     a.join(); b.join();
     System.out.println("Thread A wrote 1 at: " + Arrays.toString(aTimes));
     System.out.println("Thread B read 0 at: " + Arrays.toString(bTimes));
 }

 static void briefPause() {
     try { Thread.sleep(3); }
     catch (InterruptedException e) { throw new RuntimeException(e); }
 }

As far as the JLS is concerned, this is a legal output:

 Thread A wrote 1 at: [0, 2, 5, 7, 9]
 Thread B read 0 at: [0, 2, 5, 7, 9]

Note that I am not relying on any misbehaving currentTimeMillis readings. The reported times are real. The implementation did, however, choose to make all the actions of the writing thread visible only after all the actions of the reading thread had completed.

Example 2: two threads, both reading and writing

Now @StephenC reasons, and many will agree with him, that happens-before, even though it does not say so explicitly, still implies an ordering in time. Therefore I present my second program, which demonstrates to exactly what extent that can be the case.

 // uses sharedVar and briefPause() from Example 1
 public static void main(String[] args) throws Exception {
     final long startTime = System.currentTimeMillis();
     final long[] aTimes = new long[5], bTimes = new long[5];
     final int[] aVals = new int[5], bVals = new int[5];
     final Thread a = new Thread() {
         public void run() {
             for (int i = 0; i < 5; i++) {
                 aVals[i] = sharedVar++;
                 aTimes[i] = System.currentTimeMillis() - startTime;
                 briefPause();
             }
         }
     },
     b = new Thread() {
         public void run() {
             for (int i = 0; i < 5; i++) {
                 bVals[i] = sharedVar++;
                 bTimes[i] = System.currentTimeMillis() - startTime;
                 briefPause();
             }
         }
     };
     a.start(); b.start();
     a.join(); b.join();
     System.out.format("Thread A read %s at %s\n", Arrays.toString(aVals), Arrays.toString(aTimes));
     System.out.format("Thread B read %s at %s\n", Arrays.toString(bVals), Arrays.toString(bTimes));
 }

To help make sense of the code, this would be a typical, real-world result:

 Thread A read [0, 2, 3, 6, 8] at [1, 4, 8, 11, 14]
 Thread B read [1, 2, 4, 5, 7] at [1, 4, 8, 11, 14]

On the other hand, you would never expect to see anything like the following, yet it is still legal under the JMM:

 Thread A read [0, 1, 2, 3, 4] at [1, 4, 8, 11, 14]
 Thread B read [5, 6, 7, 8, 9] at [1, 4, 8, 11, 14]

The JVM would actually have to predict what Thread A will write at time 14 in order to know what to let Thread B read at time 1. The plausibility, and even feasibility, of this is quite dubious.

From this we can delineate the following realistic liberty that a JVM implementation can take:

The visibility of any contiguous sequence of release actions by a thread can safely be postponed until an action occurs that interrupts it.

The terms release and acquire are defined in JLS §17.4.4.

A corollary of this rule is that the actions of a thread which only writes and never reads anything can be delayed indefinitely without violating the happens-before relationship.
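As a hedged illustration of this liberty (the class and method names below are mine, not from the post or the JLS): the first loop performs only release actions, so under the rule above their visibility could be postponed indefinitely; the volatile read in the second loop is an acquire action that interrupts the sequence.

 // Hypothetical sketch of the stated rule; not a claim about any particular JVM.
 class PostponementSketch {
     static volatile int sharedVar;

     // Only release actions (volatile writes): per the rule above, a conforming
     // implementation could postpone making any of them visible, indefinitely.
     static void writeOnly() {
         for (int i = 0; i < 1_000_000; i++) {
             sharedVar = i;
         }
     }

     // Each volatile read is an acquire action, interrupting the contiguous
     // sequence of release actions that precedes it.
     static void writeThenRead() {
         for (int i = 0; i < 1_000_000; i++) {
             sharedVar = i;             // release
             int observed = sharedVar;  // acquire
         }
     }
 }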

Cleaning up the concept of volatile

volatile actually stands for two distinct concepts:

  • a strict guarantee that actions involving it will respect the happens-before ordering;
  • a soft promise that, on a best-effort basis, the written value will be published in a timely fashion.

Note that point 2 is not specified by the JLS in any way; it merely arises from a general expectation. An implementation that breaks the promise is obviously still conformant. With time, as we move towards massively parallel architectures, that promise may indeed prove to be quite flexible. Therefore I expect that in the future the conflation of the guarantee with the promise will prove insufficient: depending on the requirement, we will need one without the other, or one with a different flavor of the other, or any number of other combinations.

+9
Aug 01 '12 at 18:52

You are partly right. My understanding is that this would be legal, provided that thread r is not involved in any other operations that establish a happens-before relationship with respect to thread w.

So there is no guarantee of when in wall-clock terms; but there is a guarantee in terms of other synchronization points in the program.
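A minimal sketch of what a guarantee "in terms of other synchronization points" can look like (the class, methods, and the flag field are mine, for illustration only): if the reading thread observes the later synchronization point, it is also guaranteed to see the earlier write.

 // Hypothetical illustration of visibility guaranteed via another synchronization point.
 class OtherSyncPoints {
     static volatile int sharedVar;
     static volatile boolean flag;

     static void w() {
         sharedVar = 1;         // the earlier volatile write
         flag = true;           // a later synchronization point
     }

     static void r() {
         if (flag) {            // if this read sees w's write of flag == true...
             int v = sharedVar; // ...then v is guaranteed to be 1, not the initial 0
         }
         // If r never observes flag == true, no wall-clock promise is made about
         // when it will see sharedVar == 1 either.
     }
 }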

(If this bothers you, consider that in a more fundamental sense there is no guarantee that the JVM will ever actually execute any bytecode in a timely manner. A JVM that simply stalled forever would almost certainly be legal, since it is essentially impossible to provide reliable guarantees about execution time.)

+4
Aug 01 '12 at 14:50

See this section (§17.4.4). You have twisted the spec a little, which is confusing. The read/write specification for volatile variables says nothing about specific values; in particular:

  • A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order).

UPDATE:

As @AndrzejDoyle mentions, thread r could read the stale value, provided that nothing else that thread does after that point establishes a synchronization point with thread w at some later point in the execution (since then you would be breaking the spec). So yes, there is wiggle room, but thread r would be very limited in what it could do (writing to System.out, for example, would likely establish a later synchronization point, since most of those streams are synchronized internally).
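A minimal sketch of that wiggle room (the class and loop are mine): as long as the reading thread establishes no later synchronization point with the writing thread, every read below may legally keep returning the stale value.

 // Hypothetical illustration of a reader that is allowed to keep seeing a stale value.
 class StaleReader {
     static volatile int sharedVar;     // some other thread w eventually writes 1 here

     static void r() {
         long spins = 0;
         while (sharedVar == 0) {       // each of these reads may legally return the stale 0,
             spins++;                   // since this loop establishes no synchronization point
         }                              // with the writing thread's actions
         System.out.println(spins);    // reached only once a read finally observes the write
     }
 }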

+3
Aug 01 '12

(I no longer believe any of what follows. It all hinges on the meaning of "subsequent", which is undefined except for two mentions in §17.4.4, where it is tautologically "defined according to the synchronization order".)

All we really need is section 17.4.3:

Sequential consistency is a very strong guarantee that is made about visibility and ordering in an execution of a program. Within a sequentially consistent execution, there is a total order over all individual actions (such as reads and writes) which is consistent with the order of the program, and each individual action is atomic and is immediately visible to every thread. (emphasis mine)

I think there is such a real-time guarantee, but you have to piece it together from several sections of JLS Chapter 17.

  • According to §17.4.5, the happens-before relationship determines when data races occur. This does not seem to be stated explicitly, but I assume it means that if an action a happens-before another action a', then there is no data race between them.
  • According to §17.4.3: "A set of actions is sequentially consistent if ... each read r of a variable v sees the value written by the write w to v such that w comes before r in the execution order. ... If a program has no data races, then all executions of the program will appear to be sequentially consistent."

If you write to a volatile variable v and subsequently read it in another thread, the write happens-before the read. That means there is no data race between the write and the read, which means the two must be sequentially consistent. That means the read r must see the value written by the write w (or a later write).

+2
Aug 17 '12 at 21:22

There should not be any loopholes. Indeed, a JVM implementation that did this would be theoretically "legal". In the same way, it is theoretically "legal" to never schedule a thread whose name starts with "X", or to implement a JVM that never runs the GC.

But in practice, JVM implementations that behaved in these ways would not gain acceptance.




That is actually not the case; see the part of the spec that I quote in my answer.

Oh yes it is!

An implementation that blocked a reading thread forever would be technically compliant with JLS 17.4.4: the "subsequent read" simply never completes.

+1
Aug 01 '12

I think the guarantee that volatile gives in Java is expressed in terms of "if you see A, you will also see B".

To be more explicit: Java promises that when a thread reads a volatile variable foo and sees the value A, it has certain guarantees about what it will see when it reads other variables later in that same thread. If the thread that wrote A to foo had also written B to bar (before writing A to foo), you are guaranteed to see at least B in bar.

Of course, if you never get to see A, you are also not guaranteed to see B. And if you see B in bar, that says nothing about the visibility of A in foo. Furthermore, the time that elapses between one thread writing A to foo and another thread seeing A in foo is not bounded by any guarantee.

+1
Dec 19 '14 at 21:49