Secondly, the docs say "The logging service must be explicitly started through the scrapy.log.start() function." My question is: where should I run this scrapy.log.start()? Inside my spider?
If you start a spider using scrapy crawl my_spider, the log starts automatically if STATS_ENABLED = True.
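In that case the logging knobs live in your project's settings.py. A minimal sketch, using the standard Scrapy logging settings (the values below are placeholders, not something from the original answer):

    # settings.py
    LOG_ENABLED = True
    LOG_FILE = 'spider.log'   # defaults to None, i.e. log to standard error
    LOG_LEVEL = 'INFO'        # one of CRITICAL, ERROR, WARNING, INFO, DEBUG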
If you start the crawler process manually, you can call scrapy.log.start() before starting it:
    from scrapy.crawler import CrawlerProcess
    from scrapy.conf import settings

    settings.overrides.update({})
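That snippet is only a fragment. A fuller sketch of the manual-start sequence, assuming the old (pre-1.0) CrawlerProcess API this answer is written against, with a placeholder spider name and placeholder settings values, might look roughly like this:

    from scrapy import log
    from scrapy.conf import settings
    from scrapy.crawler import CrawlerProcess

    # Placeholder overrides; put whatever settings you actually need here
    settings.overrides.update({'LOG_FILE': 'spider.log', 'LOG_LEVEL': 'INFO'})

    log.start()                     # start the logging service explicitly

    crawler = CrawlerProcess(settings)
    crawler.install()
    crawler.configure()
    spider = crawler.spiders.create('my_spider')   # assumes a spider named 'my_spider' in your project
    crawler.crawl(spider)
    crawler.start()                 # blocks until the crawl finishes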
A note about your first question:
Since you have to start the scrapy log manually, this gives you the chance to use your own logger.
I think you can copy the scrapy/scrapy/log.py sources into your project, modify the copy, import it instead of scrapy.log, and run its start(); scrapy will then use your log. The start() function contains the line log.startLoggingWithObserver(sflo.emit, setStdout=logstdout).
Make your own observer (see http://docs.python.org/howto/logging-cookbook.html#logging-to-multiple-destinations) and use it there.
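As a sketch of that observer idea (not from the original answer): instead of writing an observer from scratch, you can use Twisted's stock PythonLoggingObserver to route Twisted/Scrapy log events into the standard logging module, and then attach whatever handlers you like, as in the "multiple destinations" cookbook recipe.

    import logging
    from twisted.python import log as txlog

    # Standard-library logging setup: records go to a file and, at INFO level, to the console
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(levelname)-8s %(message)s',
                        filename='spider.log')
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    logging.getLogger('').addHandler(console)

    # Twisted ships an observer that forwards its log events into `logging`;
    # starting it with startLoggingWithObserver mirrors what scrapy.log.start() does internally
    observer = txlog.PythonLoggingObserver(loggerName='scrapy')
    txlog.startLoggingWithObserver(observer.emit, setStdout=False)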