It turns out that the answers and the ensuing discussions only strengthened my initial reasoning. I now have something in the way of a proof:
- take the case where the reading thread executes completely before the writing thread starts to write;
- note the synchronization order that this particular execution entails;
- now shift the threads in wall-clock time so that they run in parallel, but keep the same synchronization order.

Since the Java Memory Model makes no reference to wall-clock time, nothing stands in the way of this. You now have two threads running in parallel, with the reading thread observing none of the actions performed by the writing thread. Q.E.D.
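To make step 1 concrete before moving on, here is a minimal sketch of the baseline execution the argument starts from (the class and variable names here are mine, not part of the experiments below): the reading thread runs to completion before the writing thread even starts, so every read trivially returns 0.

public class StepOneSketch {
    static volatile int sharedVar;

    public static void main(String[] args) throws Exception {
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 5; i++)
                System.out.println("read " + sharedVar);  // prints 0 every time
        });
        Thread writer = new Thread(() -> sharedVar = 1);

        reader.start();
        reader.join();   // the reader completes first...
        writer.start();  // ...and only then does the writer run
        writer.join();
        // The synchronization order of this run places all five volatile reads
        // before the volatile write. Steps 2 and 3 of the argument: the JMM would
        // permit exactly the same observed values even if the two threads had
        // overlapped in wall-clock time, because the synchronization order makes
        // no reference to wall-clock time.
    }
}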
Example 1: one writing thread, one reading thread
To make this finding as concrete and real as possible, consider the following program:
static volatile int sharedVar;

public static void main(String[] args) throws Exception {
    final long startTime = System.currentTimeMillis();
    final long[] aTimes = new long[5], bTimes = new long[5];
    final Thread
        a = new Thread() { public void run() {
            for (int i = 0; i < 5; i++) {
                sharedVar = 1;
                aTimes[i] = System.currentTimeMillis() - startTime;
                briefPause();
            }
        }},
        b = new Thread() { public void run() {
            for (int i = 0; i < 5; i++) {
                bTimes[i] = sharedVar == 0 ? System.currentTimeMillis() - startTime : -1;
                briefPause();
            }
        }};
    a.start(); b.start();
    a.join(); b.join();
    System.out.println("Thread A wrote 1 at: " + Arrays.toString(aTimes));
    System.out.println("Thread B read 0 at: " + Arrays.toString(bTimes));
}

static void briefPause() {
    try { Thread.sleep(3); }
    catch (InterruptedException e) { throw new RuntimeException(e); }
}
As far as the JLS is concerned, this is a legal output:
Thread A wrote 1 at: [0, 2, 5, 7, 9]
Thread B read 0 at: [0, 2, 5, 7, 9]
Note that I am not relying on any faulty currentTimeMillis readings; the times reported are real. The implementation did, however, choose to make all the actions of the writing thread visible only after all the actions of the reading thread.
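For contrast, the only way to force the reading thread to observe the write by a given point is to introduce blocking synchronization, which pins down the synchronization order itself. A minimal sketch using a CountDownLatch (my own addition, not part of the original experiment):

import java.util.concurrent.CountDownLatch;

public class ForcedVisibility {
    static volatile int sharedVar;

    public static void main(String[] args) throws Exception {
        CountDownLatch written = new CountDownLatch(1);

        Thread a = new Thread(() -> {
            sharedVar = 1;
            written.countDown();           // happens-before a successful await()
        });
        Thread b = new Thread(() -> {
            try {
                written.await();           // blocks until the write has happened
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            System.out.println(sharedVar); // guaranteed to print 1
        });

        a.start(); b.start();
        a.join(); b.join();
    }
}

The price, of course, is that Thread B now waits; the JMM by itself gives no wall-clock deadline for the write to become visible.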
Example 2: two threads, both reading and writing
Now @StephenC argues, and many would agree with him, that happens-before, although it does not mention it explicitly, still implies an ordering in time. Therefore I present my second program, which demonstrates the exact extent to which this may be the case.
public static void main(String[] args) throws Exception {
    final long startTime = System.currentTimeMillis();
    final long[] aTimes = new long[5], bTimes = new long[5];
    final int[] aVals = new int[5], bVals = new int[5];
    final Thread
        a = new Thread() { public void run() {
            for (int i = 0; i < 5; i++) {
                aVals[i] = sharedVar++;
                aTimes[i] = System.currentTimeMillis() - startTime;
                briefPause();
            }
        }},
        b = new Thread() { public void run() {
            for (int i = 0; i < 5; i++) {
                bVals[i] = sharedVar++;
                bTimes[i] = System.currentTimeMillis() - startTime;
                briefPause();
            }
        }};
    a.start(); b.start();
    a.join(); b.join();
    System.out.format("Thread A read %s at %s\n", Arrays.toString(aVals), Arrays.toString(aTimes));
    System.out.format("Thread B read %s at %s\n", Arrays.toString(bVals), Arrays.toString(bTimes));
}
To help in understanding the code, this would be a typical, real-world result:
Thread A read [0, 2, 3, 6, 8] at [1, 4, 8, 11, 14]
Thread B read [1, 2, 4, 5, 7] at [1, 4, 8, 11, 14]
On the other hand, you would never expect to see anything like the following, yet it is still legal by the standards of the JMM:
Thread A read [0, 1, 2, 3, 4] at [1, 4, 8, 11, 14]
Thread B read [5, 6, 7, 8, 9] at [1, 4, 8, 11, 14]
The JVM would actually have to predict what Thread A will write at t = 14 in order to know what to let Thread B read at t = 1. The plausibility, and even feasibility, of this is quite dubious.
From this we can define the following realistic liberty that a JVM implementation could take:
The visibility of any uninterrupted sequence of release actions by a thread can safely be postponed until the action that interrupts the sequence occurs.
The terms release and acquire are defined in JLS §17.4.4.
A corollary to this rule is that the actions of a thread which only writes and never reads anything can be postponed indefinitely without violating the happens-before relationship.
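A minimal sketch of the pattern this corollary describes (assuming my reading of it is correct): the writer below performs a single release action and never an acquire, so the happens-before rules alone put no wall-clock bound on when the spinning reader must observe it. In practice, every implementation I know of publishes the write promptly.

public class WriteOnlySketch {
    static volatile boolean stopRequested;

    public static void main(String[] args) throws Exception {
        Thread reader = new Thread(() -> {
            // Spins until it observes the write. Nothing in the happens-before
            // rules alone puts a wall-clock bound on how long this may take.
            while (!stopRequested) { }
            System.out.println("observed the write");
        });
        // The writer only writes and never reads any shared state, so by the
        // corollary above the visibility of its single release action could in
        // principle be postponed indefinitely (in practice it is prompt).
        Thread writer = new Thread(() -> stopRequested = true);

        reader.start();
        writer.start();
        reader.join();
        writer.join();
    }
}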
Cleaning up the volatile concept
The volatile modifier actually embodies two distinct concepts:
1. the hard guarantee that actions on it will respect the happens-before ordering;
2. the soft promise that the runtime will make a best effort to publish its writes in a timely manner.
Note that point 2 is not specified by the JLS in any way; it simply arises from general expectation. Obviously, an implementation that breaks the promise is still compliant. Over time, as we move to massively parallel architectures, that promise may indeed prove to be quite flexible. Therefore I expect that in the future the conflation of the guarantee with the promise will prove insufficient: depending on the requirements, we will need one without the other, the other with a different flavor of the first, or any number of other combinations.
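For what it is worth, later versions of the platform do expose finer-grained combinations: since Java 9, VarHandle access modes let you choose release/acquire, opaque, or full volatile semantics per access. This is only adjacent to the guarantee-versus-promise split described above, but it illustrates the kind of decoupling I have in mind; the class and field names in this sketch are mine.

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class AccessModes {
    static int sharedVar;                   // deliberately not declared volatile
    static final VarHandle SHARED;

    static {
        try {
            SHARED = MethodHandles.lookup()
                    .findStaticVarHandle(AccessModes.class, "sharedVar", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) {
        SHARED.setRelease(42);              // release semantics only
        int a = (int) SHARED.getAcquire();  // acquire semantics, pairs with a release write
        SHARED.setOpaque(7);                // opaque: per-variable coherence, minimal ordering
        int b = (int) SHARED.getVolatile(); // full volatile semantics when you do want them
        System.out.println(a + " " + b);
    }
}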