How to catch a memory quota exception from a Heroku worker

I use delayed_job to process my background jobs on Heroku. Sometimes I exceed the memory allocation and get things like:

2011-11-16T02:41:25+00:00 heroku[worker.1]: Error R14 (Memory quota exceeded)
2011-11-16T02:41:45+00:00 heroku[worker.1]: Process running mem=542M(106.0%)

I would like to handle this elegantly. Is there any way to know when I am about to exceed the memory limit?

Something like rack-timeout would be awesome

Thanks!

+6
4 answers

I think I found a good solution for this by stealing some code from Oink. In particular this file: memory_snapshot.rb, which you should read. It shows 4 different ways of measuring memory usage.

There is no way to get this at the Rack level; you need to add a memory check inside the process that is causing the memory problem (in my case, building a CSV file).

So, inside the loop that builds the CSV, it looked something like this:

def build_string_io(collection)
  csv_io = StringIO.new
  csv_io << collection.first.to_comma_headers.join(',') + "\n"
  collection.each do |imp|
    csv_io << imp.to_comma.join(',') + "\n"
    check_memory!
  end
  csv_io.rewind
  csv_io
end

def check_memory!
  raise 'AboutToRunOutOfMemory' if memory > 400.megabytes # or whatever size you're worried about
end

# Taken from Oink
def memory
  pages = File.read("/proc/self/statm")
  pages.to_i * self.class.statm_page_size
end

def self.statm_page_size
  @statm_page_size ||= begin
    sys_call = SystemCall.execute("getconf PAGESIZE")
    if sys_call.success?
      sys_call.stdout.strip.to_i / 1024
    else
      4
    end
  end
end
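
To actually handle this elegantly (the original question), you can then rescue that error inside the job itself. A minimal, hypothetical sketch of a delayed_job custom job doing that, assuming the methods above are defined on the job class and that this runs inside a Rails app; the job class, the Impression model and the upload step are only placeholders:

# Hypothetical example: rescue the 'AboutToRunOutOfMemory' RuntimeError raised
# by check_memory! above and bail out cleanly instead of blowing past R14.
class ExportCsvJob < Struct.new(:impression_ids)
  def perform
    csv_io = build_string_io(Impression.where(id: impression_ids))
    upload(csv_io)
  rescue RuntimeError => e
    raise unless e.message == 'AboutToRunOutOfMemory'
    # Handle it gracefully: log it, notify someone, or re-enqueue a smaller batch.
    Rails.logger.warn('CSV export aborted: worker close to the memory quota')
  end

  def upload(io)
    # placeholder for whatever you do with the finished CSV
  end
end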
+6

The problem is that the data you need is only available in the logs.

The best approach here would be to use a syslog drain to send your logs to a service like Papertrailapp.com or Loggly. With these services you can set up a search for your R14 errors and then receive notifications; Papertrail supports webhooks, messages, emails, etc., through which you can handle the error.

We do this exact process, posting to a Sinatra app (also hosted on Heroku) where we watch the Heroku router log entries for queue= sizes or for too many errors, and then automatically scale our apps as demand requires. Because syslog is near real-time, our apps are, in effect, self-aware.
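
For illustration only (this is not the poster's actual app), a minimal drain receiver along those lines could be a single Sinatra route; the '/logs' path, the raw line handling, and the reaction to R14 are all assumptions:

require 'sinatra'

# Hypothetical endpoint for an HTTPS log drain: Heroku POSTs batches of log
# lines, and we react whenever an R14 shows up.
post '/logs' do
  request.body.read.each_line do |line|
    if line.include?('Error R14')
      # React here: page someone, scale workers via the Heroku API, etc.
      warn "Memory quota exceeded: #{line.strip}"
    end
  end
  status 200
end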

+2

The problem you are facing is that when you get this error, it has already happened outside of your Ruby process and the Heroku platform is handling it. No begin, rescue, end will help you.

From a monitoring standpoint, you could potentially look at the amount of free memory by running something like:

 memory = `free -m` 

You could then parse the result to get a meaningful memory reading. However, I'm not sure what you could do with this information.

(Remember that a dyno is just a Unix box, and you can run arbitrary system commands from Ruby by wrapping them in backticks.)
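
For what it's worth, a rough sketch of parsing that output. Treat it as illustrative only: the column layout of free varies between procps versions, and inside a dyno it tends to report host-level numbers rather than your process's quota.

# Illustrative only: grab the "free" column of the Mem: row from `free -m`.
def free_memory_mb
  mem_line = `free -m`.lines.find { |l| l.start_with?('Mem:') }
  return nil unless mem_line
  _label, _total, _used, free = mem_line.split
  free.to_i
end

puts "Free memory: #{free_memory_mb} MB"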

+1

I used this system call, taken from the memory_snapshot.rb script in the oink gem:

system("ps -o vsz= -p #{$$}")

This way, you can get some idea of how memory is growing and where in your code that growth might be happening.
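
One note: system only returns true/false and prints the number to stdout. If you want the value back in Ruby (for example, to log it or to raise before hitting the quota, as in the first answer), backticks work. A small illustrative sketch:

# Illustrative sketch: ps reports vsz in KiB for the current process ($$).
def vsz_kb
  `ps -o vsz= -p #{$$}`.strip.to_i
end

puts "worker VSZ: #{vsz_kb} KiB (~#{vsz_kb / 1024} MB)"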

+1

Source: https://habr.com/ru/post/901582/

