Parallelizing for loops in Python

I have a Python program that looks like this:

total_error = []
for i in range(24):
    error = some_function_call(parameters1, parameters2)
    total_error += error

The 'some_function_call' function takes a lot of time, and I cannot find an easy way to reduce its time complexity. Is there a way to reduce the execution time by running the calls in parallel and then adding the results to total_error? I tried using multiprocessing.Pool and joblib but could not get them to work.

2 answers

You can also use concurrent.futures in Python 3, which offers a simpler interface than multiprocessing.

from concurrent import futures

total_error = 0

with futures.ProcessPoolExecutor() as pool:
  for error in pool.map(some_function_call, parameters1, parameters2):
    total_error += error

Here parameters1 and parameters2 are assumed to be sequences with one element per call (24 elements each); map() passes one element from each sequence to each call.

If parameters1/parameters2 are instead single values to be reused for all 24 calls, you can submit each task individually and accumulate the results with a done callback:

class TotalError:
    def __init__(self):
        self.value = 0

    def __call__(self, r):
        self.value += r.result()

total_error = TotalError()
with futures.ProcessPoolExecutor() as pool:
    for i in range(24):
        future_result = pool.submit(some_function_call, parameters1, parameters2)
        future_result.add_done_callback(total_error)

print(total_error.value)

Using Python's multiprocessing:

from multiprocessing import Pool, freeze_support, cpu_count
import os

def wrapped_some_function_call(args):
    """
    Wrap the call to unpack the parameters we built as tuples,
    so that pool.map can be used.
    """
    return some_function_call(*args)

if __name__ == "__main__":
    # freeze_support() is needed when frozen into a Windows executable
    if os.name == "nt":
        freeze_support()
    all_args = [(parameters1, parameters2) for i in range(24)]
    # Your machine's core count is usually a good choice
    # (although not necessarily the best one).
    pool = Pool(cpu_count())
    results = pool.map(wrapped_some_function_call, all_args)
    total_error = sum(results)

Source: https://habr.com/ru/post/1691699/

