Does multithreading improve performance if there are many requests at the same time?

I am working on a Java web application using Spring 3 hosted on Tomcat 7, which has to handle more than 2,500 requests per second. I have a RequestProcesseor class that processes every HTTP request using this method:

    @Service
    public class RequestProcesseor {
        public void processSomething(int value1, int value2) {
            // process something...
            // create and deep copy some object
            // some BigDecimal calculations
            // maybe some webservice calls
        }
    }

At peak, more than 2,500 requests arrive at the same time, and processSomething is invoked for each of them. If I make this class multithreaded, will performance improve? If so, why? And how can I prove it?

The server has a 4-core processor.

+4
5 answers

Even if you are not doing any multithreading explicitly, your application server already dispatches each request to its own thread, so you already have a parallel load on the CPU.

Parallelizing your code will help only if your request processing is CPU-bound, which is quite rare. Usually the bottleneck is the database or, more generally, the interface to other backend subsystems.

If each request does its processing by crunching a large amount of data in memory, and if the request load per second is low, then it can pay off to carefully divide the work across several threads, but no more than the actual number of processor cores.
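A minimal sketch of that idea, assuming a CPU-bound BigDecimal computation (class and method names here are illustrative, not from the question): the work is split into chunks across a pool sized to the core count.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelCrunch {

    // Splits a CPU-bound BigDecimal sum of 1..n across at most #cores
    // worker threads; more threads than cores would only add
    // context-switch overhead for pure CPU work.
    public static BigDecimal parallelSum(int n) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        try {
            int chunk = (n + cores - 1) / cores;
            List<Future<BigDecimal>> parts = new ArrayList<>();
            for (int start = 1; start <= n; start += chunk) {
                final int lo = start;
                final int hi = Math.min(start + chunk - 1, n);
                parts.add(pool.submit(() -> {
                    BigDecimal sum = BigDecimal.ZERO;
                    for (int i = lo; i <= hi; i++) {
                        sum = sum.add(BigDecimal.valueOf(i));
                    }
                    return sum;
                }));
            }
            BigDecimal total = BigDecimal.ZERO;
            for (Future<BigDecimal> part : parts) {
                total = total.add(part.get()); // blocks until that chunk is done
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSum(1000)); // 1 + 2 + ... + 1000 = 500500
    }
}
```

Note the pool is sized with availableProcessors(), matching the advice above.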

Therefore, given that your server is already under quite heavy concurrent load, you will almost certainly not improve its performance by dispatching work to multiple threads. Note also that it is quite easy to hurt performance with careless multithreading.

+7

Note that Tomcat is already multithreaded.

It is not always useful to do multithreading yourself inside an application server or web container.

An application server or web container is already multithreaded in requests.

Read the documentation and/or source code for Tomcat.
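For reference, Tomcat's own request-handling pool is tuned on the connector in conf/server.xml rather than in application code. A sketch with illustrative values (maxThreads and acceptCount are attributes documented in the Tomcat 7 connector reference):

```xml
<!-- conf/server.xml: size Tomcat's request-handling thread pool -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           acceptCount="100"
           connectionTimeout="20000" />
```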

+5

Will performance improve?

Maybe. You have 2,500 requests per second. If each request took 1 s of processor time and you had one processor, then no. If you had two processors, then yes. If each request talked to pooled remote network resources, then yes. If each request talked to the same single network resource (i.e., not pooled), then no.

In short, you need to provide more information about what you are doing, and (most usefully) run the test yourself in your specific environment.

+3

Short answer: Yes.

Each time a request arrives, if you use only one thread to execute this method, the next request in the queue must wait until the previous one has been processed.

If you hand each incoming request to its own thread instead, you will process requests as fast as your system's resources allow. The threads are cleaned up once they complete anyway.
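A caveat worth adding: at 2,500 requests per second, creating a raw new Thread per request is expensive; the usual pattern is a bounded, reusable pool. A minimal sketch under that assumption (RequestPool and the task body are illustrative, not the asker's code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RequestPool {

    // Instead of "new Thread(...)" per request, hand each request to a
    // bounded pool that reuses its threads and caps concurrency.
    public static int processAll(int nRequests) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        AtomicInteger processed = new AtomicInteger();
        for (int i = 0; i < nRequests; i++) {
            pool.execute(processed::incrementAndGet); // stand-in for the real request work
        }
        pool.shutdown();                              // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // let queued work drain
        return processed.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(processAll(100)); // all 100 tasks complete: 100
    }
}
```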

+2

Short answer: you have to measure it yourself.

Longer answer: if you hand your work off to a single background thread while the HTTP request thread waits for that background thread to finish (what else would it do?), you have only added overhead to your application.
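A sketch of exactly that anti-pattern (names and the doubled value are illustrative): the request thread submits the work and then immediately blocks, so the latency is unchanged and the hand-off is pure overhead.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HandoffOverhead {

    // The caller submits the work and then blocks on get(), so the
    // latency equals doing the work inline, plus the cost of the queue
    // hand-off and the worker thread wake-up.
    public static int processViaBackgroundThread(int value) throws Exception {
        ExecutorService background = Executors.newSingleThreadExecutor();
        try {
            Future<Integer> result = background.submit(() -> value * 2); // stand-in for the real work
            return result.get(); // the request thread just waits here
        } finally {
            background.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(processViaBackgroundThread(21)); // 42
    }
}
```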

You are free to

  • measure whether it has any effect
  • if the effect is positive: weigh the improvement you measured against the added complexity, and decide whether the performance gain justifies it.

Or more generally:

  • Write code optimized for maintainability
  • Stress test the application until it breaks
  • If it breaks earlier than you expect it to break:
    • identify the bottleneck (memory, CPU, I/O, other)
    • fix your identified issue #1
    • continue at step 2
  • Congratulations. Your application serves the load you expect, and the code is in the most maintainable state that still meets its performance requirements.
+1

Source: https://habr.com/ru/post/1489906/

