I already wrote the following code snippet that does exactly what I want, but it is too slow. I am sure there is a way to do this faster, but I cannot find it. The first part of the code just shows what form the data has:
- two measurement images (VV1 and HH1)
- pre-calculated values modeling VV and HH, which both depend on 3 parameters (pre-calculated on a grid of (101, 31, 11) values)
- index 2 just puts the VV and HH images in the same ndarray instead of creating two 3-D arrays
```python
VV1 = numpy.ndarray((54, 43)).flatten()
HH1 = numpy.ndarray((54, 43)).flatten()
precomp = numpy.ndarray((101, 31, 11, 2))
```
Here `parameter1` and `parameter2` hold the sampled values of the two of the three parameters that we allow to vary (lengths 101 and 31):

```python
comp = numpy.zeros((len(parameter1), len(parameter2)))
for i, (vv, hh) in enumerate(zip(VV1, HH1)):
    comp0 = numpy.zeros((len(parameter1), len(parameter2)))
    for j in range(len(parameter1)):
        for jj in range(len(parameter2)):
            comp0[j, jj] = numpy.min((vv - precomp[j, jj, :, 0])**2
                                     + (hh - precomp[j, jj, :, 1])**2)
    comp += comp0
```
The obvious thing I know I have to do is get rid of as many for-loops as possible, but I don't know how to make numpy.min behave correctly when dealing with many dimensions.
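For reference, numpy.min (and the equivalent `.min()` array method) takes an `axis` argument that reduces only the named axis of a multi-dimensional array, leaving the other dimensions intact. A minimal illustration on a small made-up array:

```python
import numpy

# A 3-D array of shape (2, 3, 4); the values are just 0..23.
a = numpy.arange(24).reshape(2, 3, 4)

# Reducing along axis=2 collapses only the last dimension,
# giving one minimum per (i, j) pair.
print(a.min(axis=2).shape)  # (2, 3)
```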
The second thing (less important if the code can be vectorized, but still interesting): I noticed that the bottleneck is mostly CPU time, not RAM. I searched for a long time, but I can't find a way to write something like MATLAB's "parfor" instead of "for". Is it possible to make an @parallel decorator work if I just put the for-loop in a separate method?
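There is no built-in parfor in Python, but the standard-library `multiprocessing.Pool` can split the per-pixel outer loop across processes. A sketch under assumptions: the shapes are those from the question, `pixel_cost` is a hypothetical helper (not part of the original code), and random data stands in for the real images:

```python
import numpy
from multiprocessing import Pool

# Seeded so that worker processes (which re-import this module on
# platforms that use "spawn") see the same precomp array.
numpy.random.seed(0)
precomp = numpy.random.rand(101, 31, 11, 2)

def pixel_cost(args):
    """Cost surface contributed by one (vv, hh) pixel pair."""
    vv, hh = args
    return numpy.min((vv - precomp[..., 0]) ** 2
                     + (hh - precomp[..., 1]) ** 2, axis=2)

if __name__ == "__main__":
    VV1 = numpy.random.rand(16)   # small stand-in for the real images
    HH1 = numpy.random.rand(16)
    with Pool() as pool:
        # Each worker handles a chunk of pixels; sum the partial results.
        comp = sum(pool.map(pixel_cost, zip(VV1, HH1)))
    print(comp.shape)  # (101, 31)
```

Process start-up and pickling overhead means this only pays off when the per-pixel work is substantial; for this problem, vectorizing first is likely the bigger win.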
Edit: in response to Janne Karila: yes, that definitely improves it a lot:
```python
for (vv, hh) in zip(VV1, HH1):
    comp += numpy.min((vv - precomp[..., 0])**2
                      + (hh - precomp[..., 1])**2, axis=2)
```
Definitely much faster, but is there any way to remove the outer loop too? And is there a way to make the loop parallel, with @parallel or something else?
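The outer loop can also be folded into broadcasting by adding a leading pixel axis to VV1/HH1. A sketch with the shapes assumed from the question, but only 100 pixels, because the intermediate array has shape (npix, 101, 31, 11): with the full 54*43 = 2322 pixels that is roughly 0.6 GB of float64, so the pixels would need to be processed in chunks:

```python
import numpy

numpy.random.seed(0)
VV1 = numpy.random.rand(100)      # stand-in for the flattened images
HH1 = numpy.random.rand(100)
precomp = numpy.random.rand(101, 31, 11, 2)

# Broadcast (npix, 1, 1, 1) against (1, 101, 31, 11) to get the
# squared distance for every pixel/parameter combination at once.
d = ((VV1[:, None, None, None] - precomp[None, ..., 0]) ** 2
     + (HH1[:, None, None, None] - precomp[None, ..., 1]) ** 2)

# Minimize over the third parameter, then sum over pixels.
comp = d.min(axis=3).sum(axis=0)  # shape (101, 31)
```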