C++: Is Eigen conservativeResize Too Expensive?

I have several matrices whose sizes I don't know in advance; I only know an upper bound. In a loop, I populate these matrices (initialized at the upper bound) column by column until a stopping criterion is met (say, after j iterations).

Now my problem: after the loop, I need these matrices for matrix multiplications (using only the first j columns, obviously). A straightforward solution would be to call Eigen's conservativeResize and then go ahead with the multiplication. But since the matrices tend to be quite large (100,000+ dimensions) and (as far as I can tell, though I'm not certain) conservativeResize allocates fresh memory for the resized matrix and performs one deep copy, this solution is quite expensive.
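To make the pattern concrete, here is a minimal sketch of what I mean (names are illustrative, not my real code):

 #include <Eigen/Dense>
 using namespace Eigen;

 // Minimal sketch: allocate at the upper bound, fill j columns, then shrink.
 // conservativeResize allocates a new buffer and deep-copies the kept
 // entries, so shrinking a 100,000-row matrix means one large copy.
 void fillAndShrink(Index rows, Index upperBound)
 {
     MatrixXd M(rows, upperBound);   // allocated once at the upper bound
     Index j = 0;
     while (j < upperBound /* && !stoppingCriterion (illustrative) */) {
         M.col(j).setRandom();       // stand-in for the real per-column work
         ++j;
     }
     M.conservativeResize(rows, j);  // reallocation + deep copy of j columns
 }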

I was therefore thinking of writing my own matrix multiplication function that operates on the old (large) matrices and takes an argument indicating the number of columns to use. But I'm afraid Eigen's matrix multiplication is so heavily optimized that this solution would ultimately be slower than just doing the conservative resize followed by the standard multiplication...
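Roughly what I have in mind (a naive sketch; multiplyFirstJ is a hypothetical name, and a triple loop like this forgoes all of Eigen's vectorized kernels):

 #include <Eigen/Dense>

 // Hypothetical hand-rolled product that uses only the first j columns of A
 // and the first j rows of B (both stored at their upper-bound sizes).
 Eigen::MatrixXd multiplyFirstJ(const Eigen::MatrixXd& A,
                                const Eigen::MatrixXd& B, Eigen::Index j)
 {
     Eigen::MatrixXd C = Eigen::MatrixXd::Zero(A.rows(), B.cols());
     for (Eigen::Index r = 0; r < A.rows(); ++r)
         for (Eigen::Index k = 0; k < j; ++k)            // first j columns of A only
             for (Eigen::Index col = 0; col < B.cols(); ++col)
                 C(r, col) += A(r, k) * B(k, col);
     return C;
 }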

Should I just bite the bullet and use conservativeResize, or does anyone have a better idea? BTW: the matrices in question are used in three multiplications and one transposition after the loop / resize.

Thanks in advance!

Edit: here is the corresponding piece of code (where X is a MatrixXd, y is a VectorXd, and numComponents is the number of latent variables PLS1 should use). The point is that initially numComponents is always the number of dimensions of X (X.cols()), but the stopping criterion is supposed to check the relative improvement in the explained variance of the output vector (which I have not implemented yet). If the relative improvement is too small, the algorithm should stop (since we are happy with the first j components) and then compute the regression coefficients. This is where I need conservativeResize:

 using namespace Eigen;
 MatrixXd W, P, T, B;
 VectorXd c, xMean;
 double yMean;
 W.resize(X.cols(), numComponents);
 P.resize(X.cols(), numComponents);
 T.resize(X.rows(), numComponents);
 c.resize(numComponents);
 xMean.resize(X.cols());
 xMean.setZero();
 yMean = 0;
 VectorXd yCopy = y;

 // perform PLS1
 for (size_t j = 0; j < numComponents; ++j) {
     VectorXd tmp = X.transpose() * y;
     W.col(j) = tmp / tmp.norm();                      // weight vector
     T.col(j) = X * W.col(j);                          // score vector
     double divisorTmp = T.col(j).transpose() * T.col(j);
     c(j) = T.col(j).transpose() * y;
     c(j) /= divisorTmp;
     P.col(j) = X.transpose() * T.col(j) / divisorTmp; // loading vector
     X = X - T.col(j) * P.col(j).transpose();          // deflate X
     y = y - T.col(j) * c(j);                          // deflate y
     bool stop = false; // TODO: stopping criterion (relative improvement of explained variance)
     if (stop && j < numComponents - 1) {
         numComponents = j + 1;
         W.conservativeResize(X.cols(), numComponents);
         P.conservativeResize(X.cols(), numComponents);
         T.conservativeResize(X.rows(), numComponents);
         c.conservativeResize(numComponents);
     }
 }

 // store regression matrix
 MatrixXd tmp = P.transpose() * W;
 B = W * tmp.inverse() * c;
 yCopy = yCopy - T * c;
 double mse = yCopy.transpose() * yCopy;
 mse /= y.size(); // mean squared error
1 answer

I think you could allocate the large matrix once and then, for the multiplication, use block() to create a view of the part that contains the meaningful data. You keep reusing the large matrix, which saves you the reallocation.

The following complete example demonstrates this:

./eigen_block_multiply.cpp:

 #include <Eigen/Dense>
 #include <iostream>

 using namespace std;
 using namespace Eigen;

 int main()
 {
     Matrix<float, 2, 3> small;
     small << 1,2,3,
              4,5,6;

     Matrix<float, 4, 4> big = Matrix<float, 4, 4>::Constant(0.6);
     cout << "Big matrix:\n";
     cout << big << endl;

     cout << "Block of big matrix:\n";
     cout << big.block(0,0,3,2) << endl;

     cout << "Small matrix:\n";
     cout << small << endl;

     cout << "Product:\n";
     cout << small * big.block(0,0,3,2) << endl;

     Matrix<float, 3, 3> small2;
     small2 << 1,2,3,
               4,5,6,
               7,8,9;

     big = Matrix<float, 4, 4>::Constant(6.66);
     cout << "Product2:\n";
     cout << small * big.block(0,0,3,3) << endl;
 }

Output:

 Big matrix:
 0.6 0.6 0.6 0.6
 0.6 0.6 0.6 0.6
 0.6 0.6 0.6 0.6
 0.6 0.6 0.6 0.6
 Block of big matrix:
 0.6 0.6
 0.6 0.6
 0.6 0.6
 Small matrix:
 1 2 3
 4 5 6
 Product:
 3.6 3.6
   9   9
 Product2:
 39.96 39.96 39.96
  99.9  99.9  99.9
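Applied to the matrices from the question, that would look roughly like this (a sketch, assuming j is the number of components actually computed, so the first j columns of W, P, T and the first j entries of c hold the meaningful data; leftCols() and head() are just convenience equivalents of block()):

 // Regression step from the question using views instead of conservativeResize.
 // leftCols()/head() return views into the big matrices; no copy is made.
 MatrixXd tmp = P.leftCols(j).transpose() * W.leftCols(j);  // j x j
 VectorXd B = W.leftCols(j) * tmp.inverse() * c.head(j);
 yCopy = yCopy - T.leftCols(j) * c.head(j);
 double mse = yCopy.squaredNorm() / y.size();               // mean squared error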
