Tomcat Performance Tuning

I am tuning the performance of Tomcat 7. The server has 24 cores and 32 GB of memory. My test endpoint is a RESTful service that does no processing (it responds immediately), and the Connector configuration in server.xml looks like this:

 <Connector port="8088" protocol="HTTP/1.1"
            connectionTimeout="20000"
            redirectPort="8443"
            enableLookups="false"
            compression="off"
            maxConnections="8192"
            maxThreads="1000"
            tcpNoDelay="true"/>

and the JVM settings are:

 -Xms8192M -Xmx16384M
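
For completeness, these flags are applied through CATALINA_OPTS in bin/setenv.sh, which catalina.sh sources at startup; a minimal sketch, assuming a stock Tomcat layout:

 # bin/setenv.sh -- sourced by catalina.sh when Tomcat starts
 # CATALINA_OPTS is applied to the server JVM only
 export CATALINA_OPTS="-Xms8192M -Xmx16384M"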

JMeter runs on a separate machine with the same specification as the server, and the JMeter heap is set to -Xms12218m -Xmx24426m.

My JMeter test plan sends 240 requests to the RESTful interface at the same time. I noticed that the average response time for the first 100 requests stays under 50 ms, but it rises to about 1 second for the next 100 and up to 3 seconds for the rest.

I'm curious about this behavior. Is there anything wrong with my configuration or settings?

Thanks in advance.

+6
2 answers

You can configure:

 acceptCount="2048" 

and

 maxConnections="1024" 

maxConnections works together with maxThreads, and you should set maxThreads to match your workload and the number of processor cores, for example 8x or 16x the core count. acceptCount is the length of the queue for incoming connections that are still waiting to be accepted.

Note that raising maxConnections and maxThreads cannot push performance beyond what your server hardware can actually deliver.
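
Putting those suggestions together, a sketch of the adjusted Connector (the maxThreads value of 192 is just 24 cores x 8 from the rule of thumb above, not a tested number):

 <!-- Illustrative values: connections beyond maxConnections wait in the accept queue, up to acceptCount -->
 <Connector port="8088" protocol="HTTP/1.1"
            connectionTimeout="20000"
            redirectPort="8443"
            enableLookups="false"
            compression="off"
            maxThreads="192"
            maxConnections="1024"
            acceptCount="2048"
            tcpNoDelay="true"/>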

+6

The more requests your server receives, the longer it takes to service each one. This is normal behavior.

How are you starting your threads at the same time? With a ramp-up time of 0 or 1?
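
For reference, the ramp-up is set on the JMeter Thread Group; in the saved .jmx file the relevant properties look roughly like this (property names follow JMeter's JMX format; the test name and the 30-second ramp-up are only illustrations):

 <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="REST load">
   <!-- 240 threads started gradually over 30 s instead of all at once -->
   <stringProp name="ThreadGroup.num_threads">240</stringProp>
   <stringProp name="ThreadGroup.ramp_time">30</stringProp>
   <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
     <stringProp name="LoopController.loops">1</stringProp>
   </elementProp>
 </ThreadGroup>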

As you keep adding load-generating threads, your client takes longer to issue its requests, and your server takes longer to respond.

At the start, the server can respond to requests quickly, since it has nothing else to do, until it reaches a threshold. Each of those early requests completes fast, and the same client thread immediately sends another one. Meanwhile the server is still answering the previous wave of threads while more and more requests queue up. Now it must manage the queues while still responding to requests, and so another threshold is crossed.

In general, spinning up many threads and firing off all the requests at the same instant is not a very realistic usage pattern for a server, except in a few cases. When it is relevant, this is the behavior you should expect.

0
