What are Monitor.Pulse and Monitor.Wait benefits?

I am new to parallel programming and am trying to understand the benefits of using Monitor.Pulse and Monitor.Wait.

MSDN example:

class MonitorSample
{
    const int MAX_LOOP_TIME = 1000;
    Queue m_smplQueue;

    public MonitorSample()
    {
        m_smplQueue = new Queue();
    }

    public void FirstThread()
    {
        int counter = 0;
        lock (m_smplQueue)
        {
            while (counter < MAX_LOOP_TIME)
            {
                // Wait, if the queue is busy.
                Monitor.Wait(m_smplQueue);
                // Push one element.
                m_smplQueue.Enqueue(counter);
                // Release the waiting thread.
                Monitor.Pulse(m_smplQueue);
                counter++;
            }
        }
    }

    public void SecondThread()
    {
        lock (m_smplQueue)
        {
            // Release the waiting thread.
            Monitor.Pulse(m_smplQueue);
            // Wait in the loop while the queue is busy.
            // Exit on the time-out when the first thread stops.
            while (Monitor.Wait(m_smplQueue, 1000))
            {
                // Pop the first element.
                int counter = (int)m_smplQueue.Dequeue();
                // Print the first element.
                Console.WriteLine(counter.ToString());
                // Release the waiting thread.
                Monitor.Pulse(m_smplQueue);
            }
        }
    }

    // Return the number of queue elements.
    public int GetQueueCount()
    {
        return m_smplQueue.Count;
    }

    static void Main(string[] args)
    {
        // Create the MonitorSample object.
        MonitorSample test = new MonitorSample();
        // Create the first thread.
        Thread tFirst = new Thread(new ThreadStart(test.FirstThread));
        // Create the second thread.
        Thread tSecond = new Thread(new ThreadStart(test.SecondThread));
        // Start threads.
        tFirst.Start();
        tSecond.Start();
        // Wait for both threads to end.
        tFirst.Join();
        tSecond.Join();
        // Print the number of queue elements.
        Console.WriteLine("Queue Count = " + test.GetQueueCount().ToString());
    }
}

and I don't see the benefit of using Wait and Pulse instead of this:

public void FirstThreadTwo()
{
    int counter = 0;
    while (counter < MAX_LOOP_TIME)
    {
        lock (m_smplQueue)
        {
            m_smplQueue.Enqueue(counter);
            counter++;
        }
    }
}

public void SecondThreadTwo()
{
    while (true)
    {
        lock (m_smplQueue)
        {
            int counter = (int)m_smplQueue.Dequeue();
            Console.WriteLine(counter.ToString());
        }
    }
}

Any help is most appreciated. Thanks.

+6
4 answers

To talk about the "benefits", the key question is "compared to what?". If you mean compared to a hot loop, the CPU saving is obvious. If you mean compared to a sleep/retry loop, you get a much faster response (Pulse doesn't have to wait out a sleep interval) and lower CPU use (you haven't woken up 2000 times unnecessarily).

Usually, though, people mean "compared to Mutex, etc.".

I tend to use them a lot, even in preference to mutexes, reset events, etc.; reasons:

  • they are simple, and cover most of the scenarios I need
  • they are relatively cheap, since they don't have to go all the way down to OS handles (unlike Mutex etc., which are OS objects)
  • I'm usually using lock to handle synchronization anyway, so there's a good chance I already hold the lock when I need to wait for something
  • they achieve my usual goal: allowing two threads to signal each other in a controlled way
  • I rarely need the other features of Mutex etc. (such as cross-process synchronization)
+12

There is a serious flaw in your snippet: SecondThreadTwo() will throw an InvalidOperationException when it calls Dequeue() on an empty queue. You probably got away with it because FirstThreadTwo() happened to run a split second before the consumer thread, perhaps because you started it first. That is an accident, and it will stop working after you've run these threads for a while, or run them on a different machine. Code like this can accidentally run without failure for a long time, which makes the occasional failure very hard to diagnose.

There is no way to make the consumer block until the queue becomes non-empty using only the lock statement. A busy loop that constantly enters and exits the lock works after a fashion, but it is a very poor substitute.
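For what it's worth, here is one conventional way to fix the snippet so the consumer blocks instead of throwing. This is a sketch, not drop-in code; the names m_smplQueue and MAX_LOOP_TIME are borrowed from the question, and the Producer/Consumer split is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class BlockingQueueSketch
{
    // Sketch: the consumer blocks on Monitor.Wait until the producer
    // signals that the queue is non-empty, instead of spinning or throwing.
    static readonly Queue<int> m_smplQueue = new Queue<int>();
    const int MAX_LOOP_TIME = 1000;

    static void Producer()
    {
        for (int counter = 0; counter < MAX_LOOP_TIME; counter++)
        {
            lock (m_smplQueue)
            {
                m_smplQueue.Enqueue(counter);
                Monitor.Pulse(m_smplQueue);   // wake a waiting consumer
            }
        }
    }

    static void Consumer()
    {
        int received = 0;
        while (received < MAX_LOOP_TIME)
        {
            lock (m_smplQueue)
            {
                // Always re-check the condition in a loop: Wait can return
                // while the queue is still empty.
                while (m_smplQueue.Count == 0)
                    Monitor.Wait(m_smplQueue);
                Console.WriteLine(m_smplQueue.Dequeue());
                received++;
            }
        }
    }

    public static void Main()
    {
        var p = new Thread(Producer);
        var c = new Thread(Consumer);
        p.Start(); c.Start();
        p.Join(); c.Join();
        Console.WriteLine("Queue Count = " + m_smplQueue.Count);
    }
}
```

A lost Pulse is harmless here because the consumer checks `Count` before waiting; a spurious wake-up is harmless because the check is in a `while`, not an `if`.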

Writing code like this is best left to the threading gurus; it is very hard to prove that it works in all cases. Not just the absence of failures like this one, or of threading races, but the general fitness of the algorithm: avoiding deadlock, livelock, and thread convoys. In the .NET world, the gurus are Jeffrey Richter and Joe Duffy. They eat locks for breakfast, in their books as well as their blogs and magazine articles. Stealing their code is expected and accepted. And some of it made it into the .NET Framework, with the additions in the System.Collections.Concurrent namespace.

+4

What you gain with Monitor.Pulse/Wait is efficiency, as you might have guessed. Acquiring a lock is a relatively expensive operation, and the busy-loop version acquires it over and over. With Monitor.Wait, your thread sleeps until some other thread wakes it with Monitor.Pulse.

You will see the difference in Task Manager: with the busy-loop version, one processor core is pegged even when there is nothing in the queue.

+3

The advantage of Pulse and Wait is that they can be used as building blocks for all other synchronization mechanisms, including mutexes, events, barriers, etc. There are things you can do with Pulse and Wait that cannot be done with any other synchronization mechanism in the BCL.
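As an illustration of that building-block claim, a bare-bones manual-reset event can be sketched out of nothing but lock, Wait, and Pulse. The class name MonitorEvent is made up for this sketch; the real ManualResetEvent wraps an OS handle, and this is not a replacement for it:

```csharp
using System.Threading;

// Illustrative only: a manual-reset-event lookalike built from Monitor alone,
// showing that Wait/Pulse are sufficient building blocks.
class MonitorEvent
{
    private readonly object _gate = new object();
    private bool _signaled;

    public void Set()
    {
        lock (_gate)
        {
            _signaled = true;
            Monitor.PulseAll(_gate);  // wake every waiter
        }
    }

    public void Reset()
    {
        lock (_gate) { _signaled = false; }
    }

    public void WaitOne()
    {
        lock (_gate)
        {
            // Wait releases _gate while blocked and reacquires it on wake-up.
            while (!_signaled)
                Monitor.Wait(_gate);
        }
    }
}
```

A waiter that arrives after Set returns immediately; one that arrives before Set blocks until PulseAll runs.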

All the interesting stuff happens inside the Wait method. Wait exits the critical section and puts the thread into the WaitSleepJoin state by placing it in the waiting queue. When Pulse is called, the next thread in the waiting queue moves to the ready queue. Once the thread switches to the Running state, it re-enters the critical section. This is important, so it bears repeating a different way: Wait releases the lock and reacquires it atomically. No other synchronization mechanism has this feature.

The best way to see why this matters is to try to replicate the behavior with a different strategy and then watch what goes wrong. Let's try the exercise with ManualResetEvent, since Set and WaitOne look superficially similar to Pulse and Wait. Our first attempt might look like this.

void FirstThread()
{
    lock (mre)
    {
        // Do stuff.
        mre.Set();
        // Do stuff.
    }
}

void SecondThread()
{
    lock (mre)
    {
        // Do stuff.
        while (!CheckSomeCondition())
        {
            mre.WaitOne();
        }
        // Do stuff.
    }
}

It should be easy to see that this code will deadlock: SecondThread blocks on WaitOne while still holding the lock, so FirstThread can never enter the lock to call Set. So what happens if we try this naive fix?

void FirstThread()
{
    lock (mre)
    {
        // Do stuff.
        mre.Set();
        // Do stuff.
    }
}

void SecondThread()
{
    lock (mre)
    {
        // Do stuff.
    }
    while (!CheckSomeCondition())
    {
        mre.WaitOne();
    }
    lock (mre)
    {
        // Do stuff.
    }
}

Do you see what can go wrong here? Since the check of the wait condition and the reacquisition of the lock are not atomic, another thread can sneak in and invalidate the condition. In other words, another thread might do something that makes CheckSomeCondition return false again before the lock is reacquired. That can cause all kinds of strange problems if your second block of code requires the condition to be true.
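For contrast, here is roughly how the same shape looks with Monitor, where the condition check, the wait, and the lock reacquisition are all covered by one lock. This is a sketch; the _condition flag stands in for whatever CheckSomeCondition tests:

```csharp
using System.Threading;

class PulseWaitSketch
{
    // Sketch: because Wait releases and reacquires the lock atomically,
    // no other thread can invalidate the condition between the check,
    // the wait, and the code that runs after the wait.
    private readonly object _gate = new object();
    private bool _condition;   // stand-in for whatever CheckSomeCondition tests

    public void FirstThread()
    {
        lock (_gate)
        {
            // Do stuff.
            _condition = true;
            // Pulsing while holding the lock is fine: the waiter
            // only resumes after we release it.
            Monitor.Pulse(_gate);
            // Do stuff.
        }
    }

    public void SecondThread()
    {
        lock (_gate)
        {
            // Do stuff.
            while (!_condition)       // re-check after every wake-up
                Monitor.Wait(_gate);  // releases _gate, reacquires it atomically
            // Do stuff: _condition is guaranteed true here, under the lock.
        }
    }
}
```

Unlike the ManualResetEvent attempts, there is no window between the wake-up and the lock reacquisition in which another thread could flip the condition back.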

0

Source: https://habr.com/ru/post/891983/
