How to use the Python Scrapy module to list all URLs from my site?

I want to use the Python Scrapy module to scrape all the URLs from my site and write the list to a file. I looked at the examples, but did not see a simple example that does this.

python web-crawler scrapy
2 answers

Here is the Python program that worked for me:

from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
from scrapy.http import Request

DOMAIN = 'example.com'
URL = 'http://%s' % DOMAIN

class MySpider(BaseSpider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [URL]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        for url in hxs.select('//a/@href').extract():
            # Turn relative links into absolute ones before following them
            if not (url.startswith('http://') or url.startswith('https://')):
                url = URL + url
            print url
            yield Request(url, callback=self.parse)

Save this in a file called spider.py.

Then you can use a shell pipeline to post-process the output:

bash$ scrapy runspider spider.py > urls.out
bash$ cat urls.out | grep 'example.com' | sort | uniq | grep -v '#' | grep -v 'mailto' > example.urls

This gives me a list of all the unique URLs on my site.
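The code above targets an older Scrapy release; BaseSpider and HtmlXPathSelector have since been removed from the library. As a rough sketch of the same idea against a recent Scrapy and Python 3 (the spider name UrlSpider and the choice to yield plain dict items are my own assumptions, not part of the original answer), it might look like this:

import scrapy

DOMAIN = 'example.com'

class UrlSpider(scrapy.Spider):
    name = 'urlspider'
    allowed_domains = [DOMAIN]
    start_urls = ['http://%s' % DOMAIN]

    def parse(self, response):
        for href in response.xpath('//a/@href').getall():
            url = response.urljoin(href)   # resolve relative links against the page URL
            yield {'url': url}             # emit the URL as an item instead of printing it
            yield scrapy.Request(url, callback=self.parse)

Because the URLs are yielded as items, Scrapy's built-in feed export can write them to a file directly, e.g. scrapy runspider spider.py -o urls.csv. Note that Scrapy's duplicate filter only applies to requests, not to the yielded items, so you may still want a final sort | uniq on the output.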


Something cleaner (and maybe more useful) would be to use LinkExtractor:

from scrapy.http import Request
from scrapy.linkextractors import LinkExtractor

def parse(self, response):
    # empty for getting everything; check the different options in the documentation
    le = LinkExtractor()
    for link in le.extract_links(response):
        yield Request(link.url, callback=self.parse)
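For context, here is a sketch of how LinkExtractor could sit inside a complete spider. It uses Scrapy's CrawlSpider and a Rule, which is my own addition rather than part of the original answer, and the class name LinkSpider and callback parse_item are placeholders:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class LinkSpider(CrawlSpider):
    name = 'linkspider'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com']

    # An empty LinkExtractor matches every link; follow=True keeps the crawl going
    rules = (Rule(LinkExtractor(), callback='parse_item', follow=True),)

    def parse_item(self, response):
        yield {'url': response.url}    # record each page the crawler actually visits

This records each page actually visited rather than every href seen on a page, which is often closer to what a "list of all URLs on my site" is meant to be.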


