I am using the Scrapy framework to crawl some web pages with spiders. Basically, I want to scrape the pages and save them to a database, and I have one spider per site. But I'm having trouble running these spiders one after another: I want each spider to start crawling only after the previous one has finished. How can this be achieved? Is scrapyd a solution?