Basically, you take each row of GRZVV, append a 1 at the end, multiply it by GinvVV, and then sum all the elements of the resulting vector. If it weren't for that appended 1, you could do it all without loops, like this:
VVg = np.sum(np.dot(GinvVV[:, :-1], GRZVV.T), axis=0) * VV
or even:
VVg = np.einsum('ij,kj->k', GinvVV[:, :-1], GRZVV) * VV
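For instance, with some made-up shapes (GinvVV square of size n+1, GRZVV of shape (m, n), scalar VV; your actual shapes may differ), both expressions give the same length-m result:

import numpy as np
rng = np.random.default_rng(0)
n, m = 4, 6
GinvVV = rng.standard_normal((n + 1, n + 1))   # assumed shape
GRZVV = rng.standard_normal((m, n))            # assumed shape
VV = 2.5                                       # scalar for now
via_dot = np.sum(np.dot(GinvVV[:, :-1], GRZVV.T), axis=0) * VV
via_einsum = np.einsum('ij,kj->k', GinvVV[:, :-1], GRZVV) * VV
print(np.allclose(via_dot, via_einsum))        # True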
How do we handle the extra 1? Each element of the vector produced by the matrix multiplication is increased by the corresponding value in GinvVV[:, -1], so when you sum them the total increases by np.sum(GinvVV[:, -1]). Therefore, we can simply compute that once and add it to every element of the returned vector:
VVg = (np.einsum('ij,kj->k', GinvVV[:-1, :-1], GRZVV) + np.sum(GinvVV[:-1, -1])) * VV
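As a sanity check, here is a small sketch comparing this with what I understand the original loop to be doing (append a 1 to each row, take the dot product, sum, and scale by VV; the loop reconstruction is my assumption, so adapt it to your actual code):

import numpy as np
rng = np.random.default_rng(0)
n, m = 4, 6
GinvVV = rng.standard_normal((n + 1, n + 1))   # assumed square
GRZVV = rng.standard_normal((m, n))
VV = 2.5                                       # scalar case

# Assumed reconstruction of the original loop
VVg_loop = np.empty(m)
for kk in range(m):
    row = np.append(GRZVV[kk, :], 1)           # the appended 1
    VVg_loop[kk] = np.sum(np.dot(GinvVV[:-1, :], row)) * VV

VVg = (np.einsum('ij,kj->k', GinvVV[:-1, :-1], GRZVV) + np.sum(GinvVV[:-1, -1])) * VV
print(np.allclose(VVg_loop, VVg))              # True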
Everything above assumes VV is a scalar. If VV is instead an array of shape (n,), the following works:
GinvVV = np.asarray(GinvVV)
VVgbis = (np.einsum('ij,kj->k', GinvVV[:-1, :-1]*VV[:, None], GRZVV) + np.dot(GinvVV[:-1, -1], VV))
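And a quick check of the array case, assuming VV weights each element of the dot product before the sum (again, the reference loop is my reconstruction):

import numpy as np
rng = np.random.default_rng(0)
n, m = 4, 6
GinvVV = rng.standard_normal((n + 1, n + 1))
GRZVV = rng.standard_normal((m, n))
VV = rng.standard_normal(n)                    # VV is now an array of shape (n,)

# Assumed reference loop: VV multiplies the dot product elementwise before summing
ref = np.empty(m)
for kk in range(m):
    row = np.append(GRZVV[kk, :], 1)
    ref[kk] = np.sum(np.dot(GinvVV[:-1, :], row) * VV)

VVgbis = (np.einsum('ij,kj->k', GinvVV[:-1, :-1] * VV[:, None], GRZVV)
          + np.dot(GinvVV[:-1, -1], VV))
print(np.allclose(ref, VVgbis))                # True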