Scrapy Could Not Find Spider Error

I am trying to get a simple spider to run with Scrapy, but I keep getting the error:

Could not find spider for domain:stackexchange.com

when I run the command scrapy-ctl.py crawl stackexchange.com. The spider is as follows:

    from scrapy.spider import BaseSpider
    from __future__ import absolute_import

    class StackExchangeSpider(BaseSpider):
        domain_name = "stackexchange.com"
        start_urls = [
            "http://www.stackexchange.com/",
        ]

        def parse(self, response):
            filename = response.url.split("/")[-2]
            open(filename, 'wb').write(response.body)

    SPIDER = StackExchangeSpider()

Another person posted almost the same problem a few months ago but did not say how they fixed it: Scrapy spider does not work. I went through the tutorial at http://doc.scrapy.org/intro/tutorial.html and I can’t understand why it is not working.

When I run this code in Eclipse, I get an error:

    Traceback (most recent call last):
      File "D:\Python Documents\dmoz\stackexchange\stackexchange\spiders\stackexchange_spider.py", line 1, in <module>
        from scrapy.spider import BaseSpider
    ImportError: No module named scrapy.spider

I can’t understand why it cannot find the scrapy.spider module. Should my spider be saved in the scripts directory?

1 answer

Try running python yourproject/spiders/domain.py to see if there is a syntax error. I do not think you should include the absolute_import future import, since Scrapy relies on relative imports.
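
For example, a from __future__ import has to be the very first statement in a module, so placing it after the scrapy import is itself a syntax error. A minimal sketch of the spider from the question with that line simply removed (assuming the old Scrapy 0.x API with domain_name and a module-level SPIDER instance, as used in the question):

    # Sketch: the question's spider with the misplaced __future__ import removed.
    # In Python, "from __future__ import ..." must come before any other import,
    # otherwise the file fails to compile with a SyntaxError.
    from scrapy.spider import BaseSpider

    class StackExchangeSpider(BaseSpider):
        domain_name = "stackexchange.com"
        start_urls = [
            "http://www.stackexchange.com/",
        ]

        def parse(self, response):
            # Save each fetched page body to a file named after the
            # second-to-last segment of the URL.
            filename = response.url.split("/")[-2]
            open(filename, 'wb').write(response.body)

    SPIDER = StackExchangeSpider()

With that fixed, running scrapy-ctl.py crawl stackexchange.com from the project directory should find the spider, provided Scrapy itself is installed and importable (the ImportError from Eclipse suggests checking that as well).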

