Does a “blocking” queue defeat the very purpose of multithreading?

ArrayBlockingQueue blocks the producer thread if the queue is full and blocks the consumer thread if the queue is empty.

Doesn't this concept of blocking contradict the very idea of multithreading? Say I have a "main" thread, and I want to delegate all logging to another thread. In the main thread I create a Runnable that writes the log output, and I put that Runnable into an ArrayBlockingQueue. The whole point is to return control to the "main" thread immediately, without wasting time on an expensive logging operation.

But if the queue is full, the main thread will block and wait until space becomes available. So how does this help us?

+4
source
8 answers

I think this is a design decision. If the caller chooses blocking mode, ArrayBlockingQueue provides the put method for it. If the caller does not want blocking, ArrayBlockingQueue has an offer method that returns false when the queue is full, but then the caller has to decide what to do with the rejected logging event.
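A minimal sketch of those two choices (the class and capacity here are illustrative, not from the answer): put() blocks on a full queue, while offer() returns false immediately:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PutVsOffer {
    // Non-blocking attempt: offer() returns false right away on a full queue.
    static boolean tryOneMore(BlockingQueue<String> q) {
        return q.offer("extra");
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(2);
        q.put("a"); // put() would block here if the queue were already full
        q.put("b");
        System.out.println(tryOneMore(q)); // false: queue is at capacity
    }
}
```

The caller of offer() then owns the decision: drop the event, log it elsewhere, or fall back to blocking.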

+5
source

The queue does not block out of malice; it blocks in order to add a quality to the system: in this case, the prevention of starvation.

Picture a set of threads, one of which is a really fast producer. If the queue were allowed to grow without bound, that fast producer's backlog could potentially monopolize the system's entire capacity. Sometimes preventing such side effects is more important than keeping every thread unblocked.

+7
source

Blocking is a necessary feature of multithreading. You must block to synchronize access to shared data. This does not defeat the purpose of multithreading.

I suggest throwing an exception when the producer tries to add an item to a full queue. There are also methods to check in advance whether the queue is full.

This will allow the calling code to decide how it wants to handle the full queue.
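A hedged sketch of this fail-fast option (names and capacities are illustrative): BlockingQueue.add throws IllegalStateException on a full queue, and remainingCapacity() allows a pre-check:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FailFastEnqueue {
    // Enqueue or report overflow: the calling code decides what "dropped" means.
    static String enqueue(BlockingQueue<String> q, String item) {
        if (q.remainingCapacity() == 0) {   // optional pre-check
            return "queue full, dropped " + item;
        }
        q.add(item);                        // would throw IllegalStateException if full
        return "accepted " + item;
    }

    public static void main(String[] args) {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(1);
        System.out.println(enqueue(q, "log line 1")); // accepted
        System.out.println(enqueue(q, "log line 2")); // dropped
    }
}
```

Note that the check-then-act pair above is racy with multiple producers; the atomic offer() is the safer building block in that case.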

If the order in which queued items are processed does not matter, I recommend using a thread pool (an ExecutorService in Java).

+2
source

In your example, I would consider blocking a feature: it prevents an OutOfMemoryError.

Generally speaking, one of your threads is simply not fast enough to handle the assigned load. Therefore, the others must somehow be slowed down so as not to endanger the entire application.

On the other hand, if the load is balanced, the queue will not be blocked.
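One middle ground between blocking forever and failing immediately is the timed offer(e, timeout, unit): the producer slows down briefly under load but is never stalled indefinitely. A sketch (names and sizes are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimedBackpressure {
    // Wait at most `millis` for space; report whether the item got in.
    static boolean submitWithTimeout(BlockingQueue<String> q, String item, long millis) {
        try {
            return q.offer(item, millis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            return false;
        }
    }

    public static void main(String[] args) {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(1);
        System.out.println(submitWithTimeout(q, "first", 10));  // true: space available
        System.out.println(submitWithTimeout(q, "second", 10)); // false: timed out, still full
    }
}
```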

+2
source

It depends on your multi-threading philosophy. For those of us who prefer Communicating Sequential Processes (CSP), a blocking queue is almost perfect. In fact, the ideal would be a queue where a message cannot be enqueued at all unless the receiver is ready to receive it.

No, I don’t think a blocking queue runs counter to the very purpose of multithreading. In fact, the scenario you describe (the main thread eventually stalling) is a good illustration of a serious problem with the actor model of multithreading: you cannot know whether it will block, and you cannot exhaustively test for it either.

In contrast, imagine a blocking queue that holds zero messages. For the system to work at all, you would then have to find a way to guarantee that the logger is always ready to receive a message from the main thread. That is CSP. It might mean that your hypothetical logger thread needs application-defined buffering (rather than a framework designer's best guess at how deep the FIFO should be), checks that the I/O subsystem is keeping up, and so on. In short, there is no getting away from it: you are forced to address every aspect of your system's performance.
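Java actually ships such a zero-length queue: SynchronousQueue has no capacity, so a hand-off succeeds only when a consumer is ready. An illustrative sketch:

```java
import java.util.concurrent.SynchronousQueue;

public class ZeroLengthHandoff {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> handoff = new SynchronousQueue<>();

        // With no consumer waiting, a non-blocking offer cannot succeed.
        System.out.println(handoff.offer("msg")); // false

        // Start a consumer first; now the rendezvous completes.
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("received: " + handoff.take());
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();
        handoff.put("msg"); // blocks until the consumer takes it
        consumer.join();
    }
}
```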

This, of course, is harder, but this way you get a system that is definitely OK, rather than the dubious “maybe” you have when your blocking queues hold an unknown number of messages.

+1
source

It sounds like you already have the general idea of why you would use something like an ArrayBlockingQueue to communicate between threads.

A blocking queue gives you the option to do something sensible when your worker threads fall behind, rather than blindly adding ever more requests to the queue. If there is space in the queue, no blocking occurs.

In your specific use case, I would use an ExecutorService instead of directly reading / writing queues, which creates a pool of background worker threads:

http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ExecutorService.html

```java
ExecutorService pool = Executors.newFixedThreadPool(poolSize);
pool.submit(myRunnable);
```
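Beyond that snippet, one possible way (not from the answer; names and sizes are illustrative) to apply this to the logging scenario is a single-threaded ThreadPoolExecutor with a bounded queue and an explicit overflow policy:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedLogPool {
    // Single background logger thread; at most 100 queued log tasks.
    // DiscardOldestPolicy drops the stalest entry instead of blocking the caller.
    static ThreadPoolExecutor newLogPool() {
        return new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(100),
                new ThreadPoolExecutor.DiscardOldestPolicy());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newLogPool();
        pool.submit(() -> System.out.println("expensive logging happens off the main thread"));
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

The RejectedExecutionHandler makes the full-queue decision explicit instead of leaving it to the default (which throws RejectedExecutionException).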
0
source

You must choose what to do when the queue is full. In the case of an ArrayBlockingQueue, that choice is to wait.

Another option is simply to discard new items once the queue has filled up; you can achieve that with offer, which returns false instead of blocking.

Either way, you have to make a trade-off.

0
source

A multi-threaded program is non-deterministic: you cannot say in advance that exactly n producer actions will have completed by the time the consumer's actions finish. Therefore, synchronization between n producers and m consumers is necessary in every case.

You want to choose a queue size that keeps the largest number of producers and consumers active in the common case. But the Java thread model does not guarantee that any particular consumer will run unless it is the only unblocked thread. (In practice, of course, on multi-core processors it is very likely that a consumer will run.)
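The non-determinism is easy to see in a tiny producer/consumer pair: the interleaving varies from run to run, yet the bounded queue keeps the two sides synchronized, so the result is always the same. A sketch with illustrative sizes:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    // Producer pushes 1..n; consumer sums them. The scheduling order is
    // non-deterministic, but the blocking queue keeps both sides in step.
    static int produceAndConsume(int n) {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(4); // small buffer on purpose
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) q.put(i); // blocks when the buffer is full
            } catch (InterruptedException ignored) {
            }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 0; i < n; i++) sum += q.take(); // blocks when the buffer is empty
            producer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(produceAndConsume(100)); // 5050 regardless of scheduling
    }
}
```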

0
source

Source: https://habr.com/ru/post/1489953/