Why lock instead of looping?

What are some reasons why writing the following code snippet is considered bad practice?

 while (someList.isEmpty()) {
     try { Thread.currentThread().sleep(100); } catch (Exception e) {}
 }
 // Do something to the list as soon as some thread adds an element to it.

To me, choosing an arbitrary value for the sleep is not good practice, and I would use a BlockingQueue in this situation, but I would like to know whether there is more than one reason why such code should not be written.

+6
6 answers

It imposes an average delay of 50 milliseconds before the event takes effect, and it wakes up 10 times per second when there is no event to handle. If none of these things matter, then it is simply inelegant.
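For comparison, here is a minimal sketch (the class and method names are invented for illustration) of the same consumer built on wait()/notifyAll(): the waiting thread sleeps without burning CPU and reacts as soon as a producer adds an element, rather than after an average 50 ms delay:

 import java.util.ArrayList;
 import java.util.List;

 class SharedList<E> {
     private final List<E> items = new ArrayList<E>();

     public synchronized void add(E e) {
         items.add(e);
         notifyAll();                 // wake any consumer blocked in takeFirst()
     }

     public synchronized E takeFirst() throws InterruptedException {
         while (items.isEmpty()) {
             wait();                  // releases the lock and sleeps until notified
         }
         return items.remove(0);
     }
 }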

+6

There are many reasons not to do this. First, as you noted, there can be a long delay between the moment the event the thread should respond to occurs and the moment the thread actually responds, because the thread may be asleep. Second, since any system has only so many processors, repeatedly waking an idle thread just so it can decide to go back to sleep keeps kicking useful threads off the CPU; this reduces the total amount of useful work the system performs and increases its energy consumption (which matters on systems such as phones or embedded devices).

+1

That loop is a great example of what not to do. ;)


 Thread.currentThread().sleep(100); 

There is no need to call currentThread(), since sleep() is a static method. This is the same as

 Thread.sleep(100); 

 catch (Exception e) {} 

This is very bad practice - so bad that I would suggest not even putting it in examples, since someone might copy the code. A good portion of the questions on this forum would be resolved by logging and actually reading this exception.
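As a minimal sketch (the method and field names are invented, and how you report the interrupt is a matter of taste), the catch block could at least preserve the interrupt instead of discarding it:

 private final java.util.List<Object> someList = new java.util.ArrayList<Object>(); // the shared list from the question

 void waitForElement() {
     while (someList.isEmpty()) {
         try {
             Thread.sleep(100);
         } catch (InterruptedException e) {
             // Don't swallow this: restore the interrupt status so callers can
             // see the thread was asked to stop, and bail out of the wait.
             Thread.currentThread().interrupt();
             return;
         }
     }
     // ... do something with the list ...
 }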


You don't need to busy-wait here, especially when you expect to be waiting a comparatively long time. Busy waiting can make sense when you expect to wait only a very, very short time, e.g.

 // From AtomicInteger
 public final int getAndSet(int newValue) {
     for (;;) {
         int current = get();
         if (compareAndSet(current, newValue))
             return current;
     }
 }

As you can see, it should be quite rare for this loop to have to go around more than once, and it is exponentially less likely to go around many times (in a real application, as opposed to a micro-benchmark). One pass of this loop can be as short as about 10 ns, which is not a long delay.


It could also wait 99 ms unnecessarily: say a producer adds an entry 1 ms after the thread goes to sleep - it then waits a long time for nothing.

The blocking solution is simpler and easier to understand:

 BlockingQueue<E> queue = ...; // some concrete BlockingQueue implementation shared with the producer
 E e = queue.take();           // blocks until an element is ready

The list/queue is only ever changed from another thread, and a simpler model for managing the threads and the queue is to use an ExecutorService:

 ExecutorService es = ...; // some thread pool, e.g. from Executors
 final E e = ...;
 es.submit(new Runnable() {
     public void run() {
         doSomethingWith(e);
     }
 });

As you can see, you do not need to work directly with queues or threads. You just need to say what you want the thread pool to do.
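As a runnable sketch of that idea (the class name, element type and single-thread pool are assumptions for illustration), the producer simply submits the work instead of putting elements on a shared list:

 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;

 public class SubmitInsteadOfQueue {
     public static void main(String[] args) {
         ExecutorService es = Executors.newSingleThreadExecutor();

         // Wherever the producer used to call someList.add(e), it submits a task instead:
         final String e = "some element";
         es.submit(new Runnable() {
             public void run() {
                 System.out.println("processing " + e); // stands in for doSomethingWith(e)
             }
         });

         es.shutdown(); // no more work; the pool exits once queued tasks finish
     }
 }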

+1

You also introduce race conditions into your class. If you used a blocking queue instead of a regular list, the thread would block until a new entry appears in the queue. With your code, a second thread could add and then remove an item from the list while your worker thread is asleep, and you would never even notice.

0

To add to the other answers, you also have a race condition if you have several threads removing items from the queue:

  • the queue is empty
  • thread A queues an item
  • thread B checks if the queue is empty; is not
  • thread C checks if the queue is empty; is not
  • thread B takes from the queue; success
  • thread C takes from the queue; failure

You can handle this atomically (inside a synchronized block) by checking whether the queue is empty and, only if it is not, taking an element from it; but now your loop just looks even uglier:

 T item;
 while ((item = tryTake(someList)) == null) {
     try {
         Thread.currentThread().sleep(100);
     } catch (InterruptedException e) {
         // it is almost never a good idea to ignore these; handle it somehow
     }
 }
 // Do something with the item

 synchronized private T tryTake(List<? extends T> from) {
     if (from.isEmpty()) return null;
     T result = from.remove(0);
     assert result != null : "list may not contain nulls, which is unfortunate";
     return result;
 }

or you could just use a BlockingQueue.
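For completeness, a minimal sketch of what the consumer loop collapses to with a BlockingQueue (the LinkedBlockingQueue choice, the element type and the class name are assumptions):

 import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.LinkedBlockingQueue;

 class QueueConsumer implements Runnable {
     private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

     public void run() {
         try {
             while (true) {
                 String item = queue.take(); // blocks until a producer put()s something
                 // Do something with the item - no polling, no tryTake(), no race on isEmpty()
                 System.out.println(item);
             }
         } catch (InterruptedException e) {
             Thread.currentThread().interrupt(); // preserve the interrupt and stop consuming
         }
     }
 }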

0

I cannot add much directly to the fine answers from David, templatetypedef, etc. - if you want to avoid inter-thread communication delays and wasted resources, do not do inter-thread comms with sleep() loops.

Preemptive scheduling/dispatching:

At the CPU level, interrupts are key. The OS does nothing until an interrupt occurs; that is the only reason its code is ever entered. Note that, in OS terms, interrupts come in two flavours: "real" hardware interrupts, which cause drivers to run, and "software interrupts" - system calls from already-running threads that can potentially change the set of running threads. Key presses, mouse movements, network cards, disks and page faults generate hardware interrupts. The wait, signal and sleep() calls fall into the second category. When a hardware interrupt causes a driver to run, the driver performs whatever hardware management it is designed to do. If the driver needs to signal the OS that some thread should be made ready to run (perhaps a disk buffer is now full and needs to be processed), the OS provides an entry point the driver can call instead of performing the interrupt-return directly itself (important!).

Interrupts like the examples above can make ready threads that were waiting, and/or can cause a running thread to enter a wait state. After the interrupt code has been processed, the OS applies its scheduling algorithm(s) to decide whether the set of threads that were running before the interrupt is the same as the set that should be running now. If so, the OS simply interrupt-returns; if not, the OS must preempt one or more running threads. If the OS needs to preempt a thread running on a processor core other than the one handling the interrupt, it has to take control of that core. It does this with a "real" hardware interrupt: the OS inter-processor driver raises a hardware signal that hard-interrupts the core whose thread must be preempted.

When the thread to be preempted enters OS code, the OS can save the full context for that thread. Some registers will already have been pushed onto the thread's stack by the interrupt entry, so saving the thread's stack pointer effectively "saves" all of those registers, but the OS will usually need to do more: caches may need to be flushed, FPU state may need to be saved, and if the new thread to be run belongs to a different process than the one being preempted, the memory-protection registers must be swapped as well. Usually, the OS switches from the interrupted thread's stack onto a private OS stack as soon as possible, to avoid inflicting OS stack requirements on every thread stack.

Once the context(s) have been saved, the OS can "swap in" the extended context(s) of the new thread(s) that are to run. Finally, the OS can load the stack pointer for each new thread and perform the interrupt-returns that set the newly ready threads running.

Then the OS does nothing at all. The running threads execute until another interrupt (hard or soft) occurs.

Important points:

1) The OS kernel is best thought of as one large interrupt handler that can decide to interrupt-return to a different set of threads than the set that was interrupted.

2) The OS can take control of, and stop if necessary, any thread in any process, regardless of what state it is in or which core it may be running on.

3) Preemptive scheduling and dispatching generate all the synchronization problems etc. that get posted on these forums. The great payoff is fast, thread-level response to hardware interrupts. Without it, all those high-performance applications you run on your PC - streaming video, fast networking, etc. - would be almost impossible.

4) The OS timer is just one of a large set of interrupts that can change the set of running threads. "Time-slicing" (ugh - I hate that term) between ready threads happens only when the machine is overloaded, i.e. when the set of ready threads is larger than the number of CPU cores available to run them. If any text explaining OS scheduling mentions "time-slicing" before "interrupts", it is likely to cause more confusion than it explains. The timer interrupt is only "special" in that many system calls have timeouts that back up their primary function (OK, for sleep() the timeout IS the primary function :).

0
source

Source: https://habr.com/ru/post/905798/

