Why use different queues when creating FixedThreadPool and CachedThreadPool?

Executors#newFixedThreadPool:

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

Executors#newCachedThreadPool:

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

Why do these two thread pools use different queues? I looked at the javadoc for LinkedBlockingQueue and SynchronousQueue, but I still don't understand why each is used here. Is it for performance, or for some other reason?

1 answer

The answer is given in the documentation of the ThreadPoolExecutor class:

  Queuing

  Any BlockingQueue may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing:

    • If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
    • If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
    • If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case the task will be rejected.

  There are three general strategies for queuing:

    • Direct handoffs. A good default choice for a work queue is a SynchronousQueue, which hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.
    • Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of maximumPoolSize therefore doesn't have any effect.) This may be appropriate when each task is completely independent of others, so tasks cannot affect each other's execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.
    • Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example, if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.
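To make the third, "bounded queue" strategy concrete, here is a minimal sketch of building such a pool by hand (the sizes 2, 4, and 10 are arbitrary illustration values, not anything prescribed by the docs):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueuePool {
    // Bounded-queue strategy: up to 2 core threads, growing to at most
    // 4 threads, with at most 10 tasks waiting. When the queue is full
    // AND all 4 threads are busy, further submissions are rejected
    // (RejectedExecutionException by default).
    static ThreadPoolExecutor newBoundedPool() {
        return new ThreadPoolExecutor(
                2, 4,                          // corePoolSize, maximumPoolSize
                60L, TimeUnit.SECONDS,         // idle timeout for non-core threads
                new ArrayBlockingQueue<>(10)); // bounded work queue
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newBoundedPool();
        System.out.println("queue capacity: " + pool.getQueue().remainingCapacity());
        pool.shutdown();
    }
}
```

Unlike the two Executors factory methods above, this configuration caps both thread count and queue length, trading possible rejections for predictable resource usage.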

In practice, the SynchronousQueue in the cached pool hands each new Runnable directly to a thread, creating a new one if none is free; the LinkedBlockingQueue in the fixed pool holds the task until one of the nThreads workers becomes available.
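You can see the mechanism at the queue level: ThreadPoolExecutor submits work via offer(), and the two queues answer differently. A sketch (plain queue calls, no executor involved):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueHandoffDemo {
    public static void main(String[] args) {
        // A SynchronousQueue has no capacity: offer() fails unless another
        // thread is already blocked in take(). That failure is what makes
        // the cached pool spawn a new thread instead of queueing.
        SynchronousQueue<Runnable> sync = new SynchronousQueue<>();
        System.out.println("SynchronousQueue.offer(): " + sync.offer(() -> {}));  // false

        // A LinkedBlockingQueue without a capacity argument is effectively
        // unbounded: offer() always succeeds, so the fixed pool never needs
        // to grow past corePoolSize.
        LinkedBlockingQueue<Runnable> linked = new LinkedBlockingQueue<>();
        System.out.println("LinkedBlockingQueue.offer(): " + linked.offer(() -> {}));  // true
    }
}
```

So the queue choice is not about raw performance; it encodes each pool's growth policy.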


Source: https://habr.com/ru/post/1240793/

