queue_classic puts its logging behind a method called QC.log, so that is the source of its log output. Where the output lands depends on which logger you call: in Rails, for example, any call to Rails.logger goes to the log file that matches your RAILS_ENV, while the log data coming from scrolls is written to STDOUT, so you can redirect STDOUT from your queue workers to a log file when you start them.
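To make the split concrete, here's a minimal sketch (assuming a Rails app with queue_classic loaded; the message text and values are made up):

    # Rails.logger writes to log/<RAILS_ENV>.log; QC.log goes through scrolls
    # and prints a key=value line on STDOUT.
    Rails.logger.info("queued welcome email")   # -> log/production.log when RAILS_ENV=production
    QC.log(action: "insert_job", elapsed: 16)   # -> STDOUT: something like lib=queue_classic action=insert_job elapsed=16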
You can control your queues with god.rb, giving god a configuration similar to this (I've left the number of queues, directories, etc. up to you):
    number_queues.times do |queue_num|
      God.watch do |w|
        w.name     = "QC-#{queue_num}"
        w.group    = "QC"
        w.interval = 5.minutes
        w.start    = "bundle exec rake queue:work"
        # ... the rest of your watch configuration ...
      end
    end
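If you do want the STDOUT data kept somewhere, one option (a sketch on my part; the path is a placeholder, and it relies on god redirecting the daemonized process's STDOUT/STDERR to w.log, which is how I read its docs) is to give each watch its own log file:

    number_queues.times do |queue_num|
      God.watch do |w|
        w.name  = "QC-#{queue_num}"
        w.group = "QC"
        w.start = "bundle exec rake queue:work"
        # god captures the worker's STDOUT/STDERR (i.e. the scrolls output) here
        w.log   = "/var/log/qc/qc-#{queue_num}.log"  # placeholder path, adjust for your setup
      end
    end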
FWIW, I find the STDOUT log data less useful and may end up just sending it to the bit bucket (/dev/null).
If the log data is useless, we should consider deleting it.
The DEBUG output is very nice when I'm running from the command line, i.e., when I'm trying to figure out what I've done wrong in getting everything wired up (usually dependency issues, path issues, etc.), or when I'm walking someone through what happens inline.
For me, INFO-level logging consists of the standard message (lib=queue_classic level=info action=insert_job elapsed=16) plus any STDOUT/STDERR from the forked workers or from PostgreSQL. I haven't extended any of the logging classes, since scrolls writes to STDOUT and my jobs run in an environment that already provides logging.
Of course, it could be removed. I think it REALLY depends on the environment and on what the jobs do. If I were running somewhere that didn't have Rails.logger, I would lean much more heavily on QC.log and scrolls and handle my job logging that way.
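For example, a job living outside Rails could report through QC.log / scrolls directly; the class and its fields below are hypothetical, just to show the shape:

    require "queue_classic"

    # A made-up job that logs its outcome through QC.log instead of Rails.logger.
    class EmailJob
      def self.perform(address)
        started = Time.now
        # ... deliver the email here ...
        QC.log(action: "email_sent", to: address,
               elapsed: ((Time.now - started) * 1000).round)
      end
    end

Enqueued the usual way (QC.enqueue("EmailJob.perform", "user@example.com")), the resulting key=value line shows up on the STDOUT of whichever worker picks the job up.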
As I play with this, I may keep my configuration as-is simply because of the output coming from the methods/applications that the jobs themselves call. I may decide to override QC.log to add a date/time stamp. I'm still working out what suits my needs.
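If I do go down that road, the override could be something like the sketch below (my own monkey patch, not part of queue_classic; it assumes QC.log takes a hash of key/value pairs, as the lib=queue_classic lines above suggest):

    require "queue_classic"
    require "time"

    # Wrap the existing QC.log so every entry also carries a UTC timestamp.
    module QC
      class << self
        alias_method :log_without_timestamp, :log

        def log(data)
          log_without_timestamp({ time: Time.now.utc.iso8601 }.merge(data))
        end
      end
    end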
Sorry, my last line was really focused on the example environment that I gave.