Scrapy: Python can't find a spider

I'm trying to follow the Scrapy lesson, but I'm stuck at one of the first steps. I think I created the spider correctly:

    from scrapy.spider import BaseSpider

    class dmoz(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
        ]

        def parse(self, response):
            filename = response.url.split("/")[-2]
            open(filename, 'wb').write(response.body)
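As I understand it, response.url.split("/")[-2] takes the second-to-last segment of the URL path, so each page should be saved to a file named after its section. A quick check in a Python shell seems to confirm that:

    >>> url = "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
    >>> url.split("/")[-2]
    'Books'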

I saved this as dmoz_spider.py from the IDLE shell (typing the .py extension myself), in the same folder that my terminal window is currently in.

However, when I type scrapy crawl dmoz, I get the following:

    2013-08-09 19:18:06+0200 [scrapy] INFO: Scrapy 0.16.5 started (bot: dmoz)
    2013-08-09 19:18:07+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
    2013-08-09 19:18:08+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
    2013-08-09 19:18:08+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
    2013-08-09 19:18:08+0200 [scrapy] DEBUG: Enabled item pipelines:
    Traceback (most recent call last):
      File "/Library/Frameworks/Python.framework/Versions/2.7/bin/scrapy", line 5, in <module>
        pkg_resources.run_script('Scrapy==0.16.5', 'scrapy')
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources.py", line 499, in run_script
        self.require(requires)[0].run_script(script_name, ns)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources.py", line 1235, in run_script
        execfile(script_filename, namespace, namespace)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/EGG-INFO/scripts/scrapy", line 4, in <module>
        execute()
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 131, in execute
        _run_print_help(parser, _run_command, cmd, args, opts)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 76, in _run_print_help
        func(*a, **kw)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 138, in _run_command
        cmd.run(args, opts)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/commands/crawl.py", line 43, in run
        spider = self.crawler.spiders.create(spname, **opts.spargs)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/spidermanager.py", line 43, in create
        raise KeyError("Spider not found: %s" % spider_name)
    KeyError: 'Spider not found: dmoz'

I can't see what's wrong, but since I'm new to programming, it may well be something simple.

+2
4 answers

You must be in the directory that contains scrapy.cfg:

    stav@maia:/srv/scrapy/tutorial$ ls
    scrapy.cfg  tutorial/

The following is a list of files in a project on my system:

    stav@maia:/srv/scrapy/tutorial$ tree
    .
    ├── scrapy.cfg
    └── tutorial
        ├── __init__.py
        ├── items.py
        ├── pipelines.py
        ├── settings.py
        └── spiders
            ├── dmoz_spider.py
            └── __init__.py

    2 directories, 13 files
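A quick sanity check from that same directory is scrapy list, which prints the name of every spider the project can see. If dmoz does not appear in its output, scrapy crawl dmoz will fail with exactly the KeyError you quoted:

    stav@maia:/srv/scrapy/tutorial$ scrapy list
    dmoz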

You should also show us the entire command line you use to run the spider, including the working directory:

    stav@maia:/srv/scrapy/tutorial$ scrapy crawl dmoz
    2013-08-11 11:00:23-0500 [scrapy] INFO: Scrapy 0.17.0 started (bot: tutorial)
    2013-08-11 11:00:23-0500 [scrapy] DEBUG: Optional features available: ssl, django, http11, boto, libxml2
    2013-08-11 11:00:23-0500 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'USER_AGENT': 'tutorial/1.0', 'BOT_NAME': 'tutorial'}
    2013-08-11 11:00:23-0500 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
    2013-08-11 11:00:23-0500 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
    2013-08-11 11:00:23-0500 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
    2013-08-11 11:00:23-0500 [scrapy] DEBUG: Enabled item pipelines:
    2013-08-11 11:00:23-0500 [dmoz] INFO: Spider opened
    2013-08-11 11:00:23-0500 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2013-08-11 11:00:23-0500 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
    2013-08-11 11:00:23-0500 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
    2013-08-11 11:00:24-0500 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
    2013-08-11 11:00:24-0500 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
    2013-08-11 11:00:24-0500 [dmoz] INFO: Closing spider (finished)
    2013-08-11 11:00:24-0500 [dmoz] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 486,
         'downloader/request_count': 2,
         'downloader/request_method_count/GET': 2,
         'downloader/response_bytes': 12980,
         'downloader/response_count': 2,
         'downloader/response_status_count/200': 2,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2013, 8, 11, 16, 0, 24, 101947),
         'log_count/DEBUG': 10,
         'log_count/INFO': 4,
         'response_received_count': 2,
         'scheduler/dequeued': 2,
         'scheduler/dequeued/memory': 2,
         'scheduler/enqueued': 2,
         'scheduler/enqueued/memory': 2,
         'start_time': datetime.datetime(2013, 8, 11, 16, 0, 23, 408890)}
    2013-08-11 11:00:24-0500 [dmoz] INFO: Spider closed (finished)
+2

If the solutions above do not work, then open settings.py in the tutorial folder and make the following change:

    BOT_NAME = 'dmoz'

That is, change BOT_NAME from 'tutorial' to the spider name you explicitly specified in your dmoz_spider.py file.
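For reference, the spider-discovery settings that scrapy startproject tutorial generates look roughly like this (the values match the "Overridden settings" line in the working log above; take this as a sketch of the 0.16-era defaults, not a complete file):

    # tutorial/settings.py
    BOT_NAME = 'tutorial'                  # label shown in the log banner, e.g. "(bot: tutorial)"
    SPIDER_MODULES = ['tutorial.spiders']  # packages Scrapy scans for spider classes
    NEWSPIDER_MODULE = 'tutorial.spiders'  # where newly generated spiders are placed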

+1
source

Are you working in a virtualenv? If so, please run pip freeze and show us whether all the Scrapy dependencies are installed.
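For example, something like this in a Unix-like shell (a sketch; the exact dependency list varies by Scrapy version, but Twisted, lxml, and w3lib should all show up for 0.16):

    pip freeze | grep -i -E 'scrapy|twisted|lxml|w3lib'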

The code is fine; I copied it and ran it without any problems. Also, you should be able to run the spider from any folder inside the project directory.

0

Please make sure dmoz_spider.py is in the spiders folder:

    mv dmoz_spider.py spiders/.
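After the move, the listing should match the project tree from the first answer. Note that the spiders folder must also contain an __init__.py so Python can import it as a package:

    $ ls tutorial/spiders/
    __init__.py  dmoz_spider.py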

0

Source: https://habr.com/ru/post/1238467/
