How to gracefully shut down the Mongrel web server

My Ruby on Rails application is set up with the usual pack of Mongrels behind an Apache configuration. We have noticed that the memory usage of our Mongrel servers can grow significantly during certain operations, and we would really like to be able to perform a graceful restart of selected Mongrel processes dynamically, at any time.

However, for reasons I won't go into here, it is sometimes very important that we do not interrupt a Mongrel while it is serving a request, so I assume that simply killing the process is not the answer.

Ideally, I want to send Mongrel a signal that says "finish everything you are doing, and then exit without accepting any more connections."

Is there a standard methodology or best practice for this?

+4
6 answers

I did a bit more digging in the Mongrel source, and it turns out that Mongrel installs a signal handler that catches the standard kill signal (TERM) and performs a graceful shutdown, so I don't need a special procedure after all.

You can see this at work in the log output you get when you kill Mongrel while it is processing a request. For instance:

 ** TERM signal received.
 Thu Aug 28 00:52:35 +0000 2008: Reaping 2 threads for slow workers because of 'shutdown'
 Waiting for 2 requests to finish, could take 60 seconds.
 Thu Aug 28 00:52:41 +0000 2008: Reaping 2 threads for slow workers because of 'shutdown'
 Waiting for 2 requests to finish, could take 60 seconds.
 Thu Aug 28 00:52:43 +0000 2008 (13051) Rendering layoutfalsecontent_typetext/htmlactionindex within layouts/application
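
So a plain TERM against the pid is enough. As a minimal Ruby sketch of driving that graceful shutdown (the pid-file path below is an assumption based on a typical mongrel_cluster layout, not something from this thread):

 # Read the pid of the mongrel we want to restart; path is illustrative.
 pid = File.read("/var/www/apps/fooapp/current/tmp/pids/mongrel.8000.pid").to_i

 # TERM triggers Mongrel's built-in handler: it stops accepting new
 # connections, waits for in-flight requests, then exits.
 Process.kill("TERM", pid)

 # Poll until the process is really gone before starting a replacement.
 begin
   loop do
     Process.kill(0, pid)  # signal 0 only checks that the process exists
     sleep 1
   end
 rescue Errno::ESRCH
   # the process has exited; safe to start a new mongrel on that port
 end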
+12

Take a look at monit. You can restart a Mongrel dynamically based on memory or CPU usage. Here is a stanza from a configuration file I wrote for a client:

 check process mongrel-8000 with pidfile /var/www/apps/fooapp/current/tmp/pids/mongrel.8000.pid
   start program = "/usr/local/bin/mongrel_rails cluster::start --only 8000"
   stop program = "/usr/local/bin/mongrel_rails cluster::stop --only 8000"
   if totalmem is greater than 150.0 MB for 5 cycles then restart   # eating up memory?
   if cpu is greater than 50% for 8 cycles then alert               # send an email to admin
   if cpu is greater than 80% for 5 cycles then restart             # hung process?
   if loadavg(5min) greater than 10 for 3 cycles then restart       # bad, bad, bad
   if 3 restarts within 5 cycles then timeout                       # something is wrong, call the sys-admin
   if failed host 192.168.106.53 port 8000 protocol http request /monit_stub
     with timeout 10 seconds then restart
   group mongrel

You then repeat this configuration for all of your mongrel cluster instances. The monit_stub line refers to an empty file that monit tries to fetch over HTTP; if it cannot, monit restarts the instance as well.
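
Since only the port changes between stanzas, one option is to generate the configuration instead of copying it by hand. A hypothetical Ruby sketch, where the ports, paths, host address, and output file are all assumptions to adapt:

 # Generate one monit stanza per cluster instance (abbreviated checks).
 ports    = [8000, 8001, 8002]
 app_root = "/var/www/apps/fooapp/current"

 stanzas = ports.map do |port|
   <<-MONIT
 check process mongrel-#{port} with pidfile #{app_root}/tmp/pids/mongrel.#{port}.pid
   start program = "/usr/local/bin/mongrel_rails cluster::start --only #{port}"
   stop program = "/usr/local/bin/mongrel_rails cluster::stop --only #{port}"
   if totalmem is greater than 150.0 MB for 5 cycles then restart
   if failed host 192.168.106.53 port #{port} protocol http request /monit_stub
     with timeout 10 seconds then restart
   group mongrel
   MONIT
 end

 # Write all stanzas to a single config file for monit to include.
 File.open("/etc/monit.d/mongrel-cluster.conf", "w") { |f| f.puts stanzas }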

Note: resource monitoring does not seem to work on OS X with the Darwin kernel.

+5

A better question is how to keep your application from consuming so much memory that it forces you to restart mongrels from time to time.

www.modrails.com reduced our memory footprint significantly.

+1

Boggy:

If you have a single Mongrel running, it will shut down gracefully (serving out whatever requests are in its queue, which should be only 1 if you are using proper load balancing). The problem is that you cannot start the new server until the old one dies, so your users will queue up in the load balancer.

What I have found successful is a cascade, or rolling restart, of the mongrels. Instead of stopping them all and starting them all (and therefore queuing requests until one mongrel has finished, stopped, restarted, and started accepting connections again), you stop and then start each mongrel sequentially, blocking the call that restarts the next mongrel until the previous one is back up (use a real HTTP check against a status controller). While your mongrels roll, only one is down at a time and you are serving across two code bases; if you cannot do that, you should throw up a maintenance page for a minute. You should be able to automate all of this with Capistrano or whatever your deployment tool is.
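
A minimal Ruby sketch of that rolling restart might look like the following; the port list, the /status check path, and the commands are illustrative assumptions, not the author's actual recipe:

 # Rolling restart: cycle one mongrel at a time. A production version
 # would also wait for the old process to exit before binding its
 # replacement on the same port.
 require 'net/http'

 PORTS = [8000, 8001, 8002]

 def up?(port)
   res = Net::HTTP.get_response('localhost', '/status', port)
   res.is_a?(Net::HTTPSuccess)
 rescue SystemCallError
   false  # connection refused while the mongrel is still coming up
 end

 PORTS.each do |port|
   # Assuming stop delivers TERM, the mongrel finishes in-flight requests.
   system("mongrel_rails cluster::stop --only #{port}")
   system("mongrel_rails cluster::start --only #{port}")

   # Block until this instance answers a real HTTP check before moving
   # on, so at most one mongrel is ever down at a time.
   sleep 1 until up?(port)
 end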

So, I have three tasks:

cap deploy - the traditional restart-everything-at-once method, with a hook that puts up a maintenance page and then takes it down after an HTTP check.

cap deploy:roll - does the cascade described above across each machine (I pull from iClassify to find out how many mongrels are on a given machine), without a maintenance page.

cap deploy:migrations - puts up the maintenance page and runs the migrations, since it is usually a bad idea to run migrations live.

+1

Try using:

 mongrel_cluster_ctl stop 

You can also use:

 mongrel_cluster_ctl restart 
0

I have a question:

What happens when /usr/local/bin/mongrel_rails cluster::start --only 8000 runs?

Are all the requests being served by that particular process allowed to finish, or are they interrupted?

I am curious whether all of this can be stopped and restarted without affecting end users...

0
