I am working through the tutorial in the official Scrapy documentation. This is my current directory structure:
.
├── scrapy.cfg
└── tutorial
    ├── __init__.py
    ├── __init__.pyc
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    ├── settings.pyc
    └── spiders
        ├── __init__.py
        ├── __init__.pyc
        └── dmoz_spider
dmoz_spider.py is the same as described on the tutorial page:
import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
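As a sanity check (a minimal sketch on my part, assuming the file really is at tutorial/spiders/dmoz_spider.py), I would expect to be able to import the spider class directly from the project root:

# hypothetical check, run in a Python shell from the project root;
# assumes the module path is tutorial/spiders/dmoz_spider.py
from tutorial.spiders.dmoz_spider import DmozSpider
print(DmozSpider.name)  # should print "dmoz"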
Then I run this command from the current directory:
scrapy crawl dmoz
But I get an error message:
2015-12-17 12:23:22 [scrapy] INFO: Scrapy 1.0.3 started (bot: tutorial)
2015-12-17 12:23:22 [scrapy] INFO: Optional features available: ssl, http11
2015-12-17 12:23:22 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
...
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: dmoz'
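Since the error says the spider cannot be found, I assume the same problem would show up with scrapy list, which as far as I understand prints the name of every spider the project can see:

scrapy list
# if the spider were registered, I would expect this to print:
# dmoz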
Can anyone suggest where I went wrong? I checked a similar question on Stack Overflow and followed its solution, but I still get the error.