PHP generates pages, but doesn't immediately return them to the user

I am currently load-testing a server configuration that I am building. PHP 5.x runs on an Apache 2 server, which connects to a master database on a separate machine and then reads from one of two slave servers.

My test page takes 0.2 seconds to generate when I request it myself. I wrote a PHP script on another server that makes 65 simultaneous requests to the test page. The test page has microtime() checkpoints throughout so I can see how long each section takes. As expected (at least to me; if anyone has opinions or suggestions about this, please comment), the SQL portion of the page runs quickly for the first couple of requests it receives, then gets progressively worse as the remaining requests pile up and have to wait. I thought it might be a disk I/O problem, but the same behavior occurred when testing on a solid-state drive.
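
For reference, a minimal sketch of such a load script using PHP's curl_multi API (the URL is a placeholder and this is not the asker's actual script):

 <?php
 // Load-test sketch: fire 65 simultaneous requests with curl_multi.
 $url   = 'http://testserver.example/test.php'; // placeholder target
 $count = 65;                                   // as in the test described above

 $mh = curl_multi_init();
 $handles = array();
 for ($i = 0; $i < $count; $i++) {
     $ch = curl_init($url);
     curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // buffer the body instead of printing it
     curl_multi_add_handle($mh, $ch);
     $handles[] = $ch;
 }

 $start = microtime(true);
 do {
     curl_multi_exec($mh, $running);  // drive all transfers
     curl_multi_select($mh);          // wait for socket activity rather than busy-looping
 } while ($running > 0);

 foreach ($handles as $ch) {
     // CURLINFO_TOTAL_TIME is the full request duration as seen by the client
     printf("request finished in %.3f s\n", curl_getinfo($ch, CURLINFO_TOTAL_TIME));
     curl_multi_remove_handle($mh, $ch);
     curl_close($ch);
 }
 curl_multi_close($mh);
 printf("all %d requests done in %.3f s\n", $count, microtime(true) - $start);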

My problem is that about 30 of the 65 pages requested by my test script are generated and delivered as I expected: my benchmark showed the page was generated in 3 seconds, and my test script reported receiving it completely in 3.1 seconds. Not much of a difference. The problem is that for the other requests, my timing shows the pages were generated in 3 seconds, but the test script did not receive them completely until the 6-second mark: a full 3 seconds between the page being generated by the Apache server and it being sent back to the test script that requested it. To make sure this was not a problem with the test script, I loaded the page in a local browser while the test was running and confirmed the same delay through the timeline panel in Chrome.

I have tried all sorts of Apache configurations but cannot find what causes this delay; my latest attempt is below. The machine is a 2.8 GHz quad-core AMD with 2 GB of RAM. Any help with tuning, or other suggestions on what to look at, would be appreciated. Sorry for the long question.

I should mention that I monitored resource usage while the script was running: CPU peaked at about 9% load and there was always at least 1 GB of free memory.

I should also mention that the same thing happens when all I request is a static HTML page. The first few requests take 0.x seconds, and then the times climb steadily to 3 seconds.

 LockFile ${APACHE_LOCK_DIR}/accept.lock
 PidFile ${APACHE_PID_FILE}
 Timeout 120
 KeepAlive On
 KeepAliveTimeout 4
 MaxKeepAliveRequests 150
 Header always append X-Frame-Options SAMEORIGIN
 StartServers 50
 MinSpareServers 25
 MaxSpareServers 50
 MaxClients 150
 MaxRequestsPerChild 0
 User ${APACHE_RUN_USER}
 Group ${APACHE_RUN_GROUP}
 AccessFileName .httpdoverride
 Order allow,deny
 DefaultType text/plain
 HostnameLookups Off
 ErrorLog ${APACHE_LOG_DIR}/error.log
 LogLevel warn
 Include mods-enabled/*.load
 Include mods-enabled/*.conf
 Include httpd.conf
 Include ports.conf
 LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
 LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
 LogFormat "%h %l %u %t \"%r\" %>s %O" common
 LogFormat "%{Referer}i -> %U" referer
 LogFormat "%{User-agent}i" agent
 Include conf.d/
 Include sites-enabled/
 AddType application/x-httpd-php .php
 AddType application/x-httpd-php-source .phps
 SecRuleEngine On
 SecRequestBodyAccess On
 SecResponseBodyAccess Off
 SecUploadKeepFiles Off
 SecDebugLog /var/log/apache2/modsec_debug.log
 SecDebugLogLevel 0
 SecAuditEngine RelevantOnly
 SecAuditLogRelevantStatus ^5
 SecAuditLogParts ABIFHZ
 SecAuditLogType Serial
 SecAuditLog /var/log/apache2/modsec_audit.log
 SecRequestBodyLimit 131072000
 SecRequestBodyInMemoryLimit 131072
 SecResponseBodyLimit 524288000
 ServerTokens Full
 SecServerSignature "Microsoft-IIS/5.0"

UPDATE: A lot of answers focus on SQL being the culprit, so let me state again that the same behavior occurs with a static HTML page. Benchmark results are shown below.

  Concurrency Level: 10
 Time taken for tests: 5.453 seconds
 Complete requests: 1000
 Failed requests: 899
    (Connect: 0, Receive: 0, Length: 899, Exceptions: 0)
 Write errors: 0
 Total transferred: 290877 bytes
 HTML transferred: 55877 bytes
 Requests per second: 183.38 [#/sec] (mean)
 Time per request: 54.531 [ms] (mean)
 Time per request: 5.453 [ms] (mean, across all concurrent requests)
 Transfer rate: 52.09 [Kbytes/sec] received

 Connection Times (ms)
               min mean [+/- sd] median max
 Connect: 0 21 250.7 0 3005
 Processing: 16 33 17.8 27 138
 Waiting: 16 33 17.8 27 138
 Total: 16 54 253.0 27 3078

 Percentage of the requests served within a certain time (ms)
   50% 27
   66% 36
   75% 42
   80% 46
   90% 58
   95% 71
   98% 90
   99% 130
  100% 3078 (longest request)
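
(For reference, output in this format comes from ApacheBench; a run like the following, with a placeholder URL, would produce it.)

 ab -n 1000 -c 10 http://testserver.example/static.html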

I also want to state that I have determined, using PHP and microtime(), that the lag occurs after the page is generated. I established this from the difference between the timestamp at which the page finished generating and the time my test script received it: that gap accounts for the delay, and it stays consistent regardless of how long the request as a whole takes.
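
One way to take that measurement, as a sketch rather than the asker's actual code (it assumes the two machines' clocks are reasonably in sync): the page emits the timestamp at which generation finished, and the test script subtracts that from its own receive time.

 <?php
 // test.php - sketch: stamp the moment page generation finishes.
 $start = microtime(true);
 $body  = "page body here\n"; // ... in reality: run the queries, render the HTML ...
 $generated = microtime(true);
 header('X-Generated-At: ' . $generated);                // server clock, seconds.microseconds
 header('X-Generation-Time: ' . ($generated - $start));  // how long generation itself took
 echo $body;

 // In the test script, once the response arrives:
 //   $deliveryLag = microtime(true) - (float) $responseHeaders['X-Generated-At'];
 // A constant ~3 s value here, independent of generation time, matches the behavior described.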

Thanks to everyone who responded. All good points, but I can't say that any one of them solved the problem.

2 answers

There are many other factors, but my best guess is that you are rapidly spawning 30-40 processes of 30 MB or so each, exhausting your machine's limited memory, and Apache then keeps creating new processes and killing them off, which slows everything down.

With 2 GB of RAM, MaxClients at 150, and MaxRequestsPerChild at 0, server resources are likely being exhausted under load, even with your database on a separate physical server.

Basically, for web server performance you never want to swap. Run your tests, then immediately check memory on the web server with:

free -m 

This shows memory and swap usage in MB. Ideally, swap usage should be zero or close to it. If you see anything more than zilch or very low swap usage, the problem is that you are running out of memory and the server is thrashing, burning CPU on paging, which leads to slow response times.
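
For reference, free -m output from a machine of that era looks roughly like this (the numbers are purely illustrative); the "used" column of the Swap line is the figure to watch:

              total       used       free     shared    buffers     cached
 Mem:          2010       1980         30          0         12         90
 -/+ buffers/cache:       1878        132
 Swap:         1903        512       1391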

You need some real numbers to be sure. First run top and press Shift-M while it is running to sort by memory. The next time you run your tests, eyeball the %MEM reported for each httpd process. It will vary, so use the higher values as a worst-case guide. I run WordPress, Drupal, and a custom website on the same server; each httpd process typically allocates 20 MB from the start and grows over time, and I have seen them approach 100 MB each.
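
A quick way to total the resident memory of all Apache processes (the process name may be apache2 or httpd depending on the distribution):

 ps -C apache2 -o rss= | awk '{sum += $1} END {printf "%.0f MB\n", sum / 1024}'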

Pulling some numbers out of my butt: say I had 2 GB, and Linux, core services, and MySQL were using 800 MB; I would then plan on having less than 1 GB available for Apache. If my Apache processes averaged 20 MB each on the high side, I could only afford a MaxClients of 50. And that is not a conservative number; in real life I would drop MaxClients to 40 or so to be safe. Do not try to squeeze memory... if you are serving enough traffic to have 40 simultaneous connections, spend the $100 to upgrade to 4 GB before raising MaxClients. This is one of those things where, once you cross the line, everything goes down the toilet, so stay safely under your memory limits!
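
The sizing rule in that paragraph, written out (all figures are the ballpark examples above):

 2048 MB total - 800 MB (OS + MySQL) leaves roughly 1 GB for Apache
 1000 MB / 20 MB per Apache process = 50, so MaxClients 50 at most
 drop to MaxClients 40 or so for headroom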

Also, with PHP I like to keep MaxRequestsPerChild at around 100... you are not serving CPU-bound web pages, so do not worry about saving a few milliseconds on spawning a new child process. Setting it to 0 means unlimited requests per child, so children are never recycled unless the idle count exceeds MaxSpareServers. That is usually VERY BAD with PHP under Apache's prefork workers, because the processes just keep growing until a bad situation happens (for example, having to power-cycle the server because you cannot even log in: Apache has eaten all the memory and ssh times out).
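
A sketch of prefork settings along the lines this answer suggests for a 2 GB box (the spare-server values are illustrative, not a drop-in tuning):

 <IfModule mpm_prefork_module>
     # MaxClients sized from available RAM / per-process footprint (see above)
     # MaxRequestsPerChild recycles children so PHP memory growth cannot accumulate
     StartServers          10
     MinSpareServers       10
     MaxSpareServers       20
     MaxClients            40
     MaxRequestsPerChild  100
 </IfModule>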

Good luck


At exactly how many pages does the slowdown kick in? You mentioned that you create 65 simultaneous requests from a single external script. Do you have a module such as mod_limitipconn enabled that would throttle things after N connections from a single IP, or anything like that? Is it always 30 (or some other fixed number of) connections and then the delay?


Source: https://habr.com/ru/post/889234/

