How can I vectorize the computation of bilinear and quadratic forms?

Given any n×n matrix of real coefficients A, we can define a bilinear form b_A : R^n × R^n → R by

b_A(x, y) = x^T A y,

and a quadratic form q_A : R^n → R by

q_A(x) = b_A(x, x) = x^T A x.

(For most common applications of quadratic forms q_A, the matrix A is symmetric, or even symmetric positive definite, so feel free to assume either of these holds if it matters to your answer.)

(In addition, FWIW, b_I and q_I (where I is the n×n identity matrix) are the standard scalar product and the squared L2-norm on R^n, i.e., x^T y and x^T x, respectively.)
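As a quick sanity check (my sketch, not part of the question), the identity-matrix special case is easy to verify in numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal(n)
y = rng.standard_normal(n)
I = np.eye(n)

# b_I(x, y) = x^T I y is the standard scalar product
assert np.isclose(x @ I @ y, x @ y)
# q_I(x) = x^T I x is the squared L2 norm
assert np.isclose(x @ I @ x, np.linalg.norm(x) ** 2)
```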

Now suppose that I have two n×m matrices, X and Y, and an n×n matrix A. I would like to compute b_A(x_i, y_i) and q_A(x_i) for every i (where x_i and y_i denote the i-th columns of X and Y, respectively) as efficiently as possible, and I assume that, at least in environments like numpy, R, or Matlab, this will involve some form of vectorization.
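To make the target concrete, here is a naive per-column reference implementation (my sketch; the sizes and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3                       # illustrative sizes
A = rng.standard_normal((n, n))
A = A + A.T                       # symmetric, as the question permits
X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))

# one bilinear / quadratic form per column index i
b = np.array([X[:, i] @ A @ Y[:, i] for i in range(m)])
q = np.array([X[:, i] @ A @ X[:, i] for i in range(m)])
```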

The only solution I can think of is to create block-diagonal matrices [X], [Y], and [A] with dimensions mn×m, mn×m, and mn×mn, respectively, whose (block-)diagonal elements are x_i, y_i, and A, respectively. Then the required quantities would be the diagonals of the matrix products [X]^T [A] [Y] and [X]^T [A] [X]. This strategy is almost certainly not competitive, but if there is a way to carry it out that is efficient in time and space, I would like to see it. (It goes without saying that any implementation that does not exploit the sparsity of these matrices would be doomed.)
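For what it's worth, the block-diagonal strategy described above can be sketched with scipy.sparse (my illustration, assuming scipy is available; the sizes are illustrative):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(2)
n, m = 5, 3
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))

# block-diagonal [X], [Y]: mn x m, with column x_i / y_i in block row i
bX = sparse.block_diag([X[:, [i]] for i in range(m)])
bY = sparse.block_diag([Y[:, [i]] for i in range(m)])
# [A]: mn x mn, with m copies of A on the block diagonal
bA = sparse.block_diag([A] * m)

# [X]^T [A] [Y] is m x m and diagonal; its diagonal holds b_A(x_i, y_i)
b = (bX.T @ bA @ bY).diagonal()

expected = np.array([X[:, i] @ A @ Y[:, i] for i in range(m)])
assert np.allclose(b, expected)
```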

Is there a better approach?

My preferred system for this is numpy, but answers in terms of some other system that supports efficient matrix calculations, such as R or Matlab, may also be OK (provided I can figure out how to port them to numpy).

Thanks!

<sub> Of course, computing the products X^T A Y and X^T A X would yield the desired b_A(x_i, y_i) and q_A(x_i) (as the diagonal elements of the resulting m×m matrices), but along with the O(m^2) irrelevant b_A(x_i, y_j) and b_A(x_i, x_j) (for i ≠ j), so this is a non-starter.
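The footnote's point can be demonstrated directly (my sketch): the diagonal of the full m×m product does give the desired values, at the cost of computing every off-diagonal entry as well.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 3
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))

full = X.T @ A @ Y                # m x m: computes every b_A(x_i, y_j)
b = np.diag(full)                 # only the m diagonal entries are wanted

expected = np.array([X[:, i] @ A @ Y[:, i] for i in range(m)])
assert np.allclose(b, expected)
```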

+4
3 answers

Here's a numpy solution that should give you what you are looking for:

 ((np.matrix(X).T * np.matrix(A)).A * np.matrix(Y).T.A).sum(1) 

This performs the matrix multiplication X^T A, multiplies the result elementwise by Y^T, and then sums each row of the resulting array, yielding a 1-dimensional array of the desired values b_A(x_i, y_i).
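For reference (my addition, not part of the original answer), the same quantity can be written with np.einsum on plain arrays, sidestepping the np.matrix interface:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 5, 3
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))

# b[i] = sum over j, k of X[j,i] * A[j,k] * Y[k,i] = x_i^T A y_i
b = np.einsum('ji,jk,ki->i', X, A, Y)
q = np.einsum('ji,jk,ki->i', X, A, X)

# matches the multiply-and-row-sum formulation
assert np.allclose(b, ((X.T @ A) * Y.T).sum(axis=1))
```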

+4

It's not entirely clear what you are trying to achieve, but in R you would use crossprod to form cross products: given matrices X and Y with compatible sizes, crossprod(X, Y) returns X^T Y. Similarly, matrix multiplication is done with the %*% operator: X %*% Y returns the product XY. This way you can get X^T A Y as crossprod(X, A %*% Y) without worrying about the mechanics of matrix multiplication, explicit loops, etc.

If your matrices have structure that allows the computations to be optimized (symmetric, triangular, sparse, banded, ...), have a look at the Matrix package, which has some support for this.

I have not used Matlab, but I am sure it has similar functions for these operations.

+1

If you want to do this in MATLAB, it is very simple:

You can simply enter

 b = x'*A*y; q = x'*A*x; 

I doubt it will be worth the effort, but if you want to speed things up, you can try the following:

 M = x'*A; b = M*y; q = M*x; 
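A rough numpy port of this reuse-the-intermediate trick might look like the following (my sketch, not from the original answer), applied column-wise to the matrices X and Y from the question:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 3
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, m))
Y = rng.standard_normal((n, m))

M = X.T @ A                       # computed once, reused twice
b = (M * Y.T).sum(axis=1)         # b[i] = x_i^T A y_i
q = (M * X.T).sum(axis=1)         # q[i] = x_i^T A x_i
```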
-1

Source: https://habr.com/ru/post/1385653/

