I am running a Scrapy project inside Django on an Ubuntu server. The problem is that Scrapy crashes unexpectedly, even when only one spider is running.
Below is a snippet of the traceback. I have googled
_SIGCHLDWaker Scrapy
but could not make sense of the solutions I found for the fragment below:
--- <exception caught here> ---
File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 602, in _doReadOrWrite
why = selectable.doWrite()
exceptions.AttributeError: '_SIGCHLDWaker' object has no attribute 'doWrite'
I am not familiar with Twisted, and despite trying to understand it, it still seems very unfriendly to me.
Below is the full trace:
[2015-10-10 14:17:13,652: INFO/Worker-4] Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, RandomUserAgentMiddleware, ProxyMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
[2015-10-10 14:17:13,655: INFO/Worker-4] Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
[2015-10-10 14:17:13,656: INFO/Worker-4] Enabled item pipelines: MadePipeline
[2015-10-10 14:17:13,656: INFO/Worker-4] Spider opened
[2015-10-10 14:17:13,657: INFO/Worker-4] Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
Unhandled Error
Traceback (most recent call last):
File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/log.py", line 101, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/log.py", line 84, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
return func(*args,**kw)
File "/home/b2b/virtualenvs/venv/local/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 602, in _doReadOrWrite
why = selectable.doWrite()
exceptions.AttributeError: '_SIGCHLDWaker' object has no attribute 'doWrite'
This is how I wrote my task, following the Scrapy documentation:
from celery import shared_task
from celery.result import AsyncResult
from django.utils import timezone
from scrapy.crawler import CrawlerProcess, CrawlerRunner
from twisted.internet import reactor
from scrapy.utils.project import get_project_settings

# Project, SearchTerm, Source, Bot, ComberSpider and MadeSpider are imported
# from my project's models and spiders modules (imports omitted here).


@shared_task
def run_spider(**kwargs):
    task_id = run_spider.request.id
    status = AsyncResult(str(task_id)).status
    source = kwargs.get("source")
    pro, created = Project.objects.get_or_create(name="b2b")
    query, _ = SearchTerm.objects.get_or_create(term=kwargs['query'])
    src, _ = Source.objects.get_or_create(term=query, engine=kwargs['source'])
    b, _ = Bot.objects.get_or_create(project=pro, query=src, spiderid=str(task_id),
                                     status=status, start_time=timezone.now())

    process = CrawlerRunner(get_project_settings())
    if source == "amazon":
        d = process.crawl(ComberSpider, query=kwargs['query'], job_id=task_id)
    else:
        d = process.crawl(MadeSpider, query=kwargs['query'], job_id=task_id)
    d.addBoth(lambda _: reactor.stop())
    reactor.run()
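For context, the task is queued from my Django code roughly like this (a simplified illustration; the actual query and source come from the request):

# Simplified illustration of how the task is queued; the real call site
# passes user-supplied values for query and source.
run_spider.delay(query="some search term", source="amazon")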
I also tried something like this tutorial, but that leads to another problem, for which I could not capture a traceback.
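Assuming the tutorial follows the standard "run Scrapy from a script" pattern from the Scrapy docs, the variant I tried looks roughly like this (a sketch, not my exact code):

# Sketch of the CrawlerProcess pattern ("Run Scrapy from a script");
# an approximation of what I tried, not my exact code.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl(ComberSpider, query="some query", job_id="some-task-id")
process.start()  # blocks here until the crawl finishes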
For completeness, here is a fragment of my spider:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.signalmanager import SignalManager
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals


class ComberSpider(CrawlSpider):

    name = "amazon"
    allowed_domains = ["amazon.com"]
    rules = (
        Rule(LinkExtractor(allow=r'corporations/.+/-*50/[0-9]+\.html',
                           restrict_xpaths="//a[@class='next']"),
             callback="parse_items", follow=True),
    )

    def __init__(self, *args, **kwargs):
        super(ComberSpider, self).__init__(*args, **kwargs)
        self.query = kwargs.get('query')
        self.job_id = kwargs.get('job_id')
        SignalManager(dispatcher.Any).connect(self.closed_handler,
                                              signal=signals.spider_closed)
        self.start_urls = (
            "http://www.amazon.com/corporations/%s/------------"
            "--------50/1.html" % self.query.strip().replace(" ", "_").lower(),
        )