Why does this program not go into an infinite loop when the condition Boolean is not declared volatile?

I wanted to understand exactly when I need to declare a variable as volatile. To check this, I wrote a small program and expected it to go into an infinite loop because the condition variable is not volatile. It did not enter an infinite loop and worked perfectly without the volatile keyword.

Two questions:

  • What should I change in the code listing below so that it absolutely requires the use of volatile?

  • Is the C# compiler smart enough to treat a variable as volatile if it sees that the variable is being accessed from another thread?

The above raised more questions for me :)

a. Is volatile only a hint?

b. When should a variable be declared volatile in the context of multithreading?

c. Should all member variables of a class that is used by a thread be declared volatile? Isn't that too much?

Code listing (the question is about volatility, not thread safety):

 class Program
 {
     static void Main(string[] args)
     {
         VolatileDemo demo = new VolatileDemo();
         demo.Start();
         Console.WriteLine("Completed");
         Console.Read();
     }
 }

 public class VolatileDemo
 {
     public VolatileDemo()
     {
     }

     public void Start()
     {
         var thread = new Thread(() =>
         {
             Thread.Sleep(5000);
             stop = true;
         });
         thread.Start();

         while (stop == false)
             Console.WriteLine("Waiting For Stop Event");
     }

     private bool stop = false;
 }

Thanks.

+4
5 answers

Try rewriting it like this:

 public void Start()
 {
     var thread = new Thread(() =>
     {
         Thread.Sleep(5000);
         stop = true;
     });
     thread.Start();

     bool unused = false;
     while (stop == false)
         unused = !unused; // fake work to prevent optimization
 }

And make sure you are running in Release mode, not Debug mode. In Release mode the optimizations are applied that actually make the code fail in the absence of volatile.
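For reference, here is a minimal sketch (my own, not part of the original answer) of the same class with the field declared volatile, which is the fix the question is asking about. With the field volatile, the JIT is not allowed to hoist the read of stop out of the loop, so the rewritten busy-wait above terminates even in Release mode:

 using System;
 using System.Threading;

 public class VolatileDemo
 {
     // volatile: every read of 'stop' goes back to the field,
     // so the loop cannot keep spinning on a stale cached value.
     private volatile bool stop = false;

     public void Start()
     {
         var thread = new Thread(() =>
         {
             Thread.Sleep(5000);
             stop = true;
         });
         thread.Start();

         bool unused = false;
         while (stop == false)
             unused = !unused; // fake work, as in the rewrite above
     }
 }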

Edit: A little about volatile:

We all know that over the lifetime of a program there are two different actors that can apply optimizations in the form of caching variables and/or reordering instructions: the compiler and the CPU.

This means there can be a big difference between how you wrote your code and how it actually executes, because instructions can be reordered relative to each other, or reads can be cached in what the compiler perceives as a "speed improvement".

This is fine in most cases, but sometimes (especially in the context of multithreading) it causes problems, as seen in this example. To let the programmer manually prevent these optimizations, memory fences were introduced: special instructions whose role is to prevent the reordering of instructions (reads only, writes only, or both) across the fence itself, and to invalidate values in CPU caches so that they have to be re-read every time (which is what we want in the scenario above).

Although you can place a full fence affecting all variables with Thread.MemoryBarrier(), that is almost always overkill if only one variable needs to be affected. So, to ensure that a single variable is always up to date across threads, you can use volatile to introduce read and write fences for that variable only.
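To make the fence-versus-volatile distinction concrete, here is a rough sketch of the two options (my own illustration; the StopFlag class and member names are invented). Thread.MemoryBarrier() is a full fence affecting all memory, while Volatile.Read/Volatile.Write (available since .NET 4.5) apply acquire/release semantics to a single field, which is what the volatile keyword does implicitly:

 using System.Threading;

 public class StopFlag
 {
     private bool stop; // plain field; visibility is handled explicitly below

     // Option 1: a full fence around the access. Heavyweight: it constrains
     // reordering and caching for all memory, not just this field.
     public bool ReadWithFullFence()
     {
         Thread.MemoryBarrier();
         bool value = stop;
         Thread.MemoryBarrier();
         return value;
     }

     // Option 2: per-variable fences. An acquire read and a release write
     // affect only this field, like the volatile keyword would.
     public bool ReadStop() { return Volatile.Read(ref stop); }
     public void SetStop() { Volatile.Write(ref stop, true); }
 }

In the question's code, simply declaring private volatile bool stop = false; achieves the same effect without any explicit calls.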

+2

First, Joe Duffy says "volatile is evil" - that is good enough for me.

If you want to reason about volatile, you have to think in terms of memory fences and optimizations, applied by the compiler, the jitter and the CPU.

On x86, writes are releasing writes, which means your background thread will flush the true value to memory.

So, what you are looking for is the caching of false in the loop predicate. The compiler or the jitter may optimize the predicate and evaluate it only once, but I think it does not do this for a read of a class field. The CPU will not keep false cached because you call Console.WriteLine, which involves a fence.

This code does require volatile and will never complete without Volatile.Read:

 static void Run()
 {
     bool stop = false;

     Task.Factory.StartNew(() =>
     {
         Thread.Sleep(1000);
         stop = true;
     });

     while (!stop)
         ;
 }
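A hedged sketch of the fix this answer points to: stop here is a local captured by the lambda, so it cannot be marked volatile, but the reads and writes can go through Volatile.Read/Volatile.Write (this variant is my illustration, not the answerer's code, and assumes .NET 4.5 or later):

 using System.Threading;
 using System.Threading.Tasks;

 static void Run()
 {
     bool stop = false;

     Task.Factory.StartNew(() =>
     {
         Thread.Sleep(1000);
         Volatile.Write(ref stop, true); // release write: publish the new value
     });

     // Acquire read: the JIT may not hoist this out of the loop,
     // so the loop is guaranteed to eventually observe the write above.
     while (!Volatile.Read(ref stop))
         ;
 }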
+4

I am not an expert in C# concurrency, but AFAIK your expectation is wrong. Writing to a non-volatile variable from another thread does not mean the change will never become visible to other threads; it just means there is no guarantee of when (or whether) that will happen. In your case it did happen (by the way, how many times did you run the program?), perhaps because the finishing thread flushed its changes, as @Russell's comment suggests. But in a real-life setting, with a more complex program flow, more variables and more threads, the update might happen after 5 seconds, or maybe once in a thousand runs, or it might not happen at all.

Thus, running your program once, or even a million times, without seeing any problem provides only statistical, not absolute, evidence. "Absence of evidence is not evidence of absence."

+3

The volatile keyword is an instruction to the compiler not to apply single-threaded optimizations to this variable. It means that the variable can be modified by multiple threads, and it makes every read of the variable return the most recent value.

The piece of code you pasted here is a good example of where the volatile keyword is called for. It is not surprising that the code happens to work without it; however, it can behave far more unpredictably once you start more threads and perform more complex operations on the flag value.

You declare volatile only for variables that can be modified by multiple threads. I do not know exactly how this works in C#, but I believe you cannot rely on volatile for variables that are changed with read-modify-write operations (for example, increments). Volatile does not take a lock when the value is changed. So setting a flag as volatile (as above) is fine, but incrementing a volatile variable is not; for that you should use a synchronization/locking mechanism.
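To illustrate that last point, a small sketch (my own, with invented names): volatile only guarantees the visibility of individual reads and writes, so incrementing a volatile counter from several threads can still lose updates, because the increment is a read followed by a write. Interlocked.Increment (or a lock) makes the whole operation atomic:

 using System.Threading;

 class Counters
 {
     private volatile int racyCount; // visible across threads, but not atomic to update
     private int safeCount;

     public void IncrementBoth()
     {
         // Two threads can both read the same old value and write back
         // the same result, losing one increment.
         racyCount = racyCount + 1;

         // Atomic read-modify-write: no increment is ever lost.
         Interlocked.Increment(ref safeCount);
     }
 }

So a simple stop flag that is only ever set to true is a good fit for volatile, whereas counters and other compound updates need Interlocked or a lock.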

+1

When the background thread stores true into the member variable, there is a fence: the value is written out to memory, and the other processors' caches are updated or invalidated for that address.

The call to Console.WriteLine is a full memory fence, and its semantics possibly require (absent compiler optimization) that stop not be cached.

However, if you remove the Console.WriteLine call, I find that the loop still stops.

I believe that, in the absence of optimizations, the compiler does not cache anything computed from global memory. The volatile keyword is then a message to the compiler/JIT not to even think about caching any expression involving that variable.

This code still stops (at least for me, I use Mono):

 public void Start()
 {
     stop = false;

     var thread = new Thread(() =>
     {
         while (true)
         {
             Thread.Sleep(50);
             stop = !stop;
         }
     });
     thread.Start();

     while (!(stop ^ stop))
         ;
 }

This shows that it is not the while statement itself that prevents caching: the variable is not cached even within a single expression, since stop ^ stop could only become true if the two reads hit memory and observe different values.

This kind of optimization is sensitive to the memory model, which is platform dependent, so it would have to be done in the JIT compiler; and the JIT would not have the time (or the intelligence) to see that the variable is used in another thread and suppress the caching for that reason.

Perhaps Microsoft does not trust programmers to know when to use volatile and decided to take that responsibility away from them, and Mono followed suit.

+1

Source: https://habr.com/ru/post/1389035/

