From what I understand, NumPy arrays can perform operations faster than Python lists because the work is applied to the whole array in parallel rather than element by element in iteration order. I tried to test this out for fun, but I did not see much of a difference.
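(For example, the kind of thing I mean is a whole-array expression like the one below, where NumPy applies the operation to every element at once instead of me writing a Python-level loop; this is just a toy illustration of my understanding, not my actual test.)

import numpy as np

data = list(range(5))

# Plain Python: the interpreter visits each element one at a time
squared_list = [x ** 2 for x in data]

# NumPy: the same operation is written once against the whole array
squared_arr = np.asarray(data) ** 2

print(squared_list)   # [0, 1, 4, 9, 16]
print(squared_arr)    # [ 0  1  4  9 16]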
Was there something wrong with my test? Does the difference only matter with arrays much larger than the ones I used? I create both a Python list and a NumPy array inside each function so that the cost of constructing one versus the other cancels out, but the time delta still seems insignificant. Here is my code:
My final outputs were numpy function: 6.534756324786595s, list function: 6.559365831783256s
import timeit
import numpy as np
a_setup = 'import timeit; import numpy as np'
std_fx = '''
def operate_on_std_array():
    std_arr = list(range(0,1000000))
    np_arr = np.asarray(std_arr)
    for index,elem in enumerate(std_arr):
        std_arr[index] = (elem**20)*63134
    return std_arr
'''
parallel_fx = '''
def operate_on_np_arr():
    std_arr = list(range(0,1000000))
    np_arr = np.asarray(std_arr)
    np_arr = (np_arr**20)*63134
    return np_arr
'''
def operate_on_std_array():
    std_arr = list(range(0,1000000))
    np_arr = np.asarray(std_arr)
    for index,elem in enumerate(std_arr):
        std_arr[index] = (elem**20)*63134
    return std_arr
def operate_on_np_arr():
    std_arr = list(range(0,1000000))
    np_arr = np.asarray(std_arr)
    np_arr = (np_arr**20)*63134
    return np_arr
print('std',timeit.timeit(setup = a_setup, stmt = std_fx, number = 80000000))
print('par',timeit.timeit(setup = a_setup, stmt = parallel_fx, number = 80000000))