Without more information about what exactly you are doing, it's hard to say for sure, but a simple threaded approach can make sense.
Assuming you have a simple function that processes a single ID:
import requests

url_t = "http://localhost:8000/records/%i"

def process_id(id):
    """process a single ID"""
    # assumed body: fetch the record and return the parsed response
    r = requests.get(url_t % id)
    return r.json()
You can expand this into a simple function that processes a series of identifiers:
def process_range(id_range, store=None):
    """process a number of ids, storing the results in a dict"""
    if store is None:
        store = {}
    for id in id_range:
        store[id] = process_id(id)
    return store
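For example, a purely serial run over a small range (using the process_id sketch above) is just:

# process ids 0-9 one after another; returns {id: result}
results = process_range(range(10))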
and finally, you can quite easily map sub-ranges onto threads, so that several requests are in flight at the same time:
from threading import Thread

def threaded_process_range(nthreads, id_range):
    """process the id range in a specified number of threads"""
    store = {}
    # one thread per interleaved sub-range of ids, all writing to the shared dict
    threads = [Thread(target=process_range, args=(id_range[i::nthreads], store))
               for i in range(nthreads)]
    # start all threads, then wait for them to finish
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return store
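For instance, spreading 100 ids over 10 threads (names as defined above) would look like:

# store maps each of the 100 ids to its result
store = threaded_process_range(10, range(100))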
Full example in an IPython notebook: http://nbviewer.ipython.org/5732094
If your individual tasks take widely varying amounts of time, you may want to use a ThreadPool instead, which hands out tasks one at a time (often slower if individual tasks are very small, but it guarantees better balance in heterogeneous cases).
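A minimal sketch of that approach, assuming the process_id function above (the pool_process_range name is just illustrative; multiprocessing.pool.ThreadPool is the thread-based counterpart of multiprocessing.Pool):

from multiprocessing.pool import ThreadPool

def pool_process_range(nthreads, id_range):
    """process ids with a pool of worker threads"""
    with ThreadPool(nthreads) as pool:
        # chunksize=1 so each id is dispatched individually to whichever thread is idle
        results = pool.map(process_id, id_range, chunksize=1)
    return dict(zip(id_range, results))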