Understanding Requests Per Second with Apache Bench (ab)

I am trying to understand the performance benefits of gevent and wanted to quantify them. Based on this SO answer, I created a simple Django application whose view sleeps for 3 seconds.

When I start gunicorn with 1 synchronous worker and test from a single Firefox window, I get a predictable response time of ~3 seconds. When I open more than one browser window, each request takes longer (roughly 3 * the number of browsers I open). The front server is nginx.

My view:

    # views.py -- imports added for completeness
    import time
    from django.http import HttpResponse

    def home(request, epoch):
        sleeptime = 3
        time.sleep(sleeptime)
        # epoch arrives in milliseconds from the client, so divide by 1000
        html = "<html><body>%s </body></html>" % str(float(time.time()) - float(epoch) / 1000)
        return HttpResponse(html)
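
For context, the URL pattern that feeds epoch into the view looks roughly like this (a sketch only; the exact regex and import path are assumptions, not my literal configuration):

    # urls.py -- rough sketch of the route that captures epoch
    from django.conf.urls import url
    from testt.views import home

    urlpatterns = [
        url(r'^server/(?P<epoch>\d+)/$', home),
    ]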

However, I then run the following ab command:

    ab -r -n 100 -c 100 http://ec2-compute-1.amazonaws.com/

against gunicorn started as follows (the config file it references is sketched right after the command):

 gunicorn --pythonpath=/home/ubuntu/webapps/testt -c /home/ubuntu/webapps/testt/gunicorn.conf.py testt.wsgi 
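
For reference, gunicorn.conf.py is essentially a minimal config along these lines (the bind address and worker count shown here are assumptions):

    # gunicorn.conf.py -- minimal sketch; the bind address is an assumption
    # and must match whatever nginx proxies to
    bind = "127.0.0.1:8000"
    workers = 1  # a single synchronous worker for this first test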

I get the following results:

    Concurrency Level:      100
    Time taken for tests:   7.012 seconds
    Complete requests:      100
    Requests per second:    14.26 [#/sec] (mean)
    Time per request:       7011.750 [ms] (mean)
    Time per request:       70.118 [ms] (mean, across all concurrent requests)
    Transfer rate:          40.53 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:       37  837 1487.4     98    3869
    Processing:     0 3475  913.2   3777    3890
    Waiting:        0 3467  911.2   3777    3889
    Total:       3141 4312 1150.1   3870    7011

How can I reconcile the behavior I see in the browser (~3 seconds * number of browsers) with the results above?

If there is only one worker and 100 clients connect, the 100th connection should take 100 * 3 = 300 seconds, because it has to wait for all the other connections to complete. Indeed, if I open 3 browsers and connect simultaneously, the worst case is about 9 seconds, which makes sense to me :). So my first question is: how can the mean time per request in the ab results above be only ~7 seconds? I would expect it to be much larger, since the first 2 connections alone take about 6 seconds, and the nth parallel connection should return only after n * 3 seconds.
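
To make my expectation concrete, this is the back-of-the-envelope calculation I have in mind (a sketch that assumes the single sync worker serves the 100 simultaneous requests strictly one after another):

    # Naive expectation for 1 synchronous worker, a 3-second view, and
    # 100 requests that all arrive at once and are served in order.
    sleeptime = 3
    concurrency = 100

    completion_times = [sleeptime * (i + 1) for i in range(concurrency)]
    print(completion_times[0])                   # 3 s   -> first request finishes
    print(completion_times[-1])                  # 300 s -> last request finishes
    print(sum(completion_times) / concurrency)   # 151.5 s -> mean completion time

Either of those numbers is far larger than the ~7 seconds that ab reports.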

The application is unchanged; I just start gunicorn with the gevent worker instead:

 gunicorn -k gevent --pythonpath=/home/ubuntu/webapps/testt -c /home/ubuntu/webapps/testt/gunicorn.conf.py testt.wsgi 
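
(If I understand the gunicorn settings correctly, the same thing could be expressed in the config file instead of on the command line; this is an assumed equivalent, not the exact file I use:)

    # gunicorn.conf.py -- assumed equivalent of passing -k gevent
    worker_class = "gevent"
    workers = 1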

Then I run the same ab command and get the following (!):

    Concurrency Level:      100
    Time taken for tests:   0.558 seconds
    Complete requests:      100
    Failed requests:        0
    Write errors:           0
    Total transferred:      255300 bytes
    HTML transferred:       241600 bytes
    Requests per second:    179.32 [#/sec] (mean)
    Time per request:       557.675 [ms] (mean)
    Time per request:       5.577 [ms] (mean, across all concurrent requests)
    Transfer rate:          447.06 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:       33   44   6.2     47      58
    Processing:    47  133  74.0    122     506
    Waiting:       46  132  74.2    122     506
    Total:         80  177  76.2    162     555

So the minimum time any single request can take is 3 seconds, yet the results above say the whole test completed in under a second, which makes no sense to me.

What am I missing?

For completeness, here is the client page that polls the view (it is similar to, but not exactly the same as, the one mentioned in the related SO question):

    <html>
    <head>
        <title>BargePoller</title>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"
                type="text/javascript" charset="utf-8"></script>

        <style type="text/css" media="screen">
            body   { background:#000; color:#fff; font-size:.9em; }
            .msg   { background:#aaa; padding:.2em; border-bottom:1px #000 solid; }
            .old   { background-color:#246499; }
            .new   { background-color:#3B9957; }
            .error { background-color:#992E36; }
        </style>

        <script type="text/javascript" charset="utf-8">
            /* Simple helper to add a div.
               type is the name of a CSS class (old/new/error).
               msg is the contents of the div. */
            function addmsg(type, msg){
                $("#messages").append(
                    "<div class='msg "+ type +"'>"+ msg +"</div>"
                );
            }

            /* Requests "server/<current time in ms>/" and, when the request
               completes (or errors), schedules the next poll. */
            function waitForMsg(){
                var currentdate = new Date();
                $.ajax({
                    type: "GET",
                    url: "server/" + currentdate.getTime() + "/",
                    async: true,     /* If set to non-async, the browser shows the page as "Loading.." */
                    cache: false,
                    timeout: 50000,  /* Timeout in ms */

                    success: function(data){
                        addmsg("new", data);      /* Add response to a .msg div (with the "new" class) */
                        setTimeout(
                            waitForMsg,           /* Request the next message.. */
                            1000                  /* ..after 1 second */
                        );
                    },
                    error: function(XMLHttpRequest, textStatus, errorThrown){
                        var currentdate = new Date();
                        addmsg("error", currentdate.getTime() + textStatus +
                               " (" + errorThrown + " " + currentdate.getTime() + ")");
                        setTimeout(
                            waitForMsg,           /* Try again after.. */
                            1000);                /* ..1000 milliseconds */
                    },
                    beforeSend: function(){
                        var currentdate = new Date();
                        addmsg("body", currentdate + " <- sent request");
                    }
                });
            };

            $(document).ready(function(){
                waitForMsg();  /* Start the initial request */
            });
        </script>
    </head>
    <body>
        <div id="messages">
            <div class="msg old">
                BargePoll message requester!
            </div>
        </div>
    </body>
    </html>
