My scraper works fine when I run it from the command line, but when I try to run it from a Python script (using the method described here, based on Twisted), it does not produce the two CSV files that it normally writes. I have a pipeline that creates and populates these files: one of them uses CsvItemExporter() and the other uses writeCsvFile(). Here is the code:
class CsvExportPipeline(object):

    def __init__(self):
        self.files = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        nodes = open('%s_nodes.csv' % spider.name, 'w+b')
        self.files[spider] = nodes
        self.exporter1 = CsvItemExporter(nodes, fields_to_export=['url', 'name', 'screenshot'])
        self.exporter1.start_exporting()
        self.edges = []
        self.edges.append(['Source', 'Target', 'Type', 'ID', 'Label', 'Weight'])
        self.num = 1

    def spider_closed(self, spider):
        self.exporter1.finish_exporting()
        file = self.files.pop(spider)
        file.close()
        writeCsvFile(getcwd() + r'\edges.csv', self.edges)

    def process_item(self, item, spider):
        self.exporter1.export_item(item)
        for url in item['links']:
            self.edges.append([item['url'], url, 'Directed', self.num, '', 1])
            self.num += 1
        return item
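For completeness, the pipeline is enabled through ITEM_PIPELINES in the project settings, since everything works when I run scrapy crawl from the command line. The exact entry below is a reconstruction on my part (the module path is an assumption), shown only so the setup is clear:

# SiteCrawler/settings.py (relevant excerpt; the module path is an assumption)
# Older Scrapy versions use a plain list here instead of a dict.
ITEM_PIPELINES = {
    'SiteCrawler.pipelines.CsvExportPipeline': 300,
}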
Here is my file structure:
SiteCrawler/
The scraper seems to function normally in every other respect. The command-line output indicates that the expected number of pages was crawled and that the spider finished normally. I receive no error messages.
---- EDIT: ----
Inserting print statements and even deliberate syntax errors into the pipeline has no effect, so the pipeline seems to be ignored entirely. Why could this be?
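One thing I can try to rule out is a path problem, i.e. whether the pipeline module is importable at all from where the launcher script runs. A minimal check would look something like this (the SiteCrawler.pipelines path is an assumption based on my project layout):

# Run from the same directory as the launcher script; if this import fails,
# the pipeline could never have been loaded in the first place.
import SiteCrawler.pipelines
print(SiteCrawler.pipelines.CsvExportPipeline)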
Here is the code for the script that starts the scraper (runpider.py):
from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher
import logging

from SiteCrawler.spiders.sitecrawler_spider import MySpider


def stop_reactor():
    reactor.stop()

dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = MySpider()
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start(loglevel=logging.DEBUG)
log.msg('Running reactor...')
reactor.run()
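As a further debugging step, I am thinking of comparing what a bare Settings() contains against the project settings, since the script above passes Settings() straight to Crawler(). A rough sketch of that check (get_project_settings and the SiteCrawler.settings module name are assumptions on my part):

import os
from scrapy.settings import Settings
from scrapy.utils.project import get_project_settings

# Point Scrapy at the project settings module (module name is an assumption).
os.environ.setdefault('SCRAPY_SETTINGS_MODULE', 'SiteCrawler.settings')

# What the run script currently hands to Crawler(): Scrapy's defaults only.
print(Settings().get('ITEM_PIPELINES'))

# What the project itself defines; I would expect CsvExportPipeline to show up here.
print(get_project_settings().get('ITEM_PIPELINES'))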