From the Scrapy downloader source code:
```python
conc = self.ip_concurrency if self.ip_concurrency else self.domain_concurrency
conc, delay = _get_concurrency_delay(conc, spider, self.settings)
```
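A minimal standalone sketch of what that first line does (the function name here is made up for illustration; it is not part of Scrapy's API):

```python
def pick_concurrency(ip_concurrency: int, domain_concurrency: int) -> int:
    """Mirror of the quoted line: a non-zero per-IP limit
    (CONCURRENT_REQUESTS_PER_IP) takes precedence over the
    per-domain limit (CONCURRENT_REQUESTS_PER_DOMAIN)."""
    return ip_concurrency if ip_concurrency else domain_concurrency

print(pick_concurrency(0, 8))  # per-IP limit is 0 -> per-domain value is used: 8
print(pick_concurrency(2, 8))  # non-zero per-IP limit wins: 2
```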
So the behavior matches what the documentation for CONCURRENT_REQUESTS_PER_IP says:
This setting also affects DOWNLOAD_DELAY: if CONCURRENT_REQUESTS_PER_IP is non-zero, download delay is enforced per IP, not per domain.
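For example, with settings like these (values are illustrative), the delay is enforced per IP rather than per domain:

```python
# settings.py (illustrative values)
CONCURRENT_REQUESTS_PER_DOMAIN = 8
CONCURRENT_REQUESTS_PER_IP = 2   # non-zero, so it overrides the per-domain limit
DOWNLOAD_DELAY = 5               # seconds; now enforced per IP, not per domain
```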
Therefore, I do not think you will achieve high concurrency with a large DOWNLOAD_DELAY. I ran spiders on a slow network with AutoThrottle enabled, and there were never more than 2-3 simultaneous requests at a time.