I need to scrape (using scrAPI) 400+ web pages with Ruby; my current code is very sequential:
data = urls.map {|url| scraper.scrape url }
The real code is a little more involved (exception handling, etc.).
How can I do it faster? How can I parallelize downloads?
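For example, I imagine something along these lines (just a rough sketch; I don't know whether scraper.scrape is thread-safe, and 10 threads is an arbitrary guess):

    # Split the URL list into chunks and scrape each chunk in its own thread.
    # Assumes scraper.scrape is thread-safe and that the target site
    # tolerates ~10 concurrent requests (both are guesses on my part).
    THREADS = 10
    slice_size = [(urls.size / THREADS.to_f).ceil, 1].max
    data = urls.each_slice(slice_size).map { |slice|
      Thread.new { slice.map { |url| scraper.scrape url } }
    }.flat_map(&:value)   # Thread#value joins the thread and returns its result

Would something like this work, or is there a better-established way to do it?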