I implemented a Gauss-Newton optimization process, which computes the increment by solving a linearized system H x = b. The matrix H is calculated as H = J.transpose() * W * J, and b as b = J.transpose() * (W * e), where e is the error vector. The Jacobian J is an n-by-6 matrix, where n is in the thousands; J remains unchanged across iterations, while W is an n-by-n diagonal matrix that changes between iterations (some diagonal elements are set to zero). However, I ran into a speed issue.
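Schematically (with placeholder names, since the snippets below are condensed from my real code), the system solved in each iteration looks like this:

Eigen::MatrixXf J = Eigen::MatrixXf::Random(n, 6);           // Jacobian, fixed across iterations
Eigen::MatrixXf W = Eigen::VectorXf::Ones(n).asDiagonal();   // diagonal weights, change per iteration
Eigen::VectorXf e = Eigen::VectorXf::Random(n);              // error vector
Eigen::MatrixXf H = J.transpose() * W * J;                   // 6-by-6 normal matrix
Eigen::VectorXf b = J.transpose() * (W * e);                 // 6-by-1 right-hand side
Eigen::VectorXf x = H.ldlt().solve(b);                       // Gauss-Newton increment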
When I do not apply the weight matrix W, i.e. H = J.transpose() * J and b = J.transpose() * e, my Gauss-Newton process runs very fast, about 0.02 seconds for 30 iterations. However, when I apply a matrix W that is defined outside the iteration loop, it becomes very slow (0.3-0.7 s for 30 iterations), and I don't understand whether this is a problem in my code or whether it normally takes this long.
Everything here is Eigen matrices and vectors.
I defined my matrix W from the inverse-variance vector using the Eigen function .asDiagonal(), then used it to calculate H and b; that is when everything becomes very slow. I would like to understand the potential causes of this huge slowdown.
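Concretely, the construction looks roughly like this (inv_var is a placeholder name, and I am not certain that the storage type I picked for weight_ is the appropriate one):

Eigen::VectorXf inv_var = Eigen::VectorXf::Ones(n);  // inverse variances (dummy values here)
// Assigning an .asDiagonal() expression to a MatrixXf materializes it as a dense n-by-n matrix:
Eigen::MatrixXf weight_ = inv_var.asDiagonal();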
EDIT:
The weight matrix weight_ is built once, before the loop, from a vector via vec.asDiagonal(); only its diagonal entries are modified between iterations. Here is pseudo-code of the main loop, with both the unweighted and the weighted variants:
for (int iter = 0; iter < max_iter; ++iter) {
    error = ...  // recompute the n-by-1 error vector

    // Unweighted version (fast, ~0.02 s for 30 iterations):
    // Eigen::MatrixXf H = J.transpose() * J;
    // Eigen::VectorXf b = J.transpose() * error;

    // Weighted version (slow, 0.3-0.7 s); weight_ is defined outside the loop:
    Eigen::MatrixXf H = J.transpose() * weight_ * J;
    Eigen::VectorXf b = J.transpose() * (weight_ * error);

    del = H.ldlt().solve(b);  // 6-by-1 increment
    T <- T(del)               // apply the increment to the state (pseudo-code)
}
Even though weight_ is constructed only once, outside the loop, the weighted products above are more than ten times slower than the unweighted ones, and I cannot see why. Any hints would be appreciated.
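For completeness, here is a simplified, self-contained version of the computation (the problem size and values are made up, and the code is condensed from the real program):

#include <Eigen/Dense>
#include <chrono>
#include <iostream>

int main() {
    const int n = 5000;                                   // n is in the thousands
    Eigen::MatrixXf J = Eigen::MatrixXf::Random(n, 6);    // fixed Jacobian
    Eigen::VectorXf error = Eigen::VectorXf::Random(n);   // error vector
    Eigen::VectorXf inv_var = Eigen::VectorXf::Ones(n);   // inverse variances (dummy values)
    Eigen::MatrixXf weight_ = inv_var.asDiagonal();       // weight matrix, built once

    auto t0 = std::chrono::steady_clock::now();
    for (int iter = 0; iter < 30; ++iter) {
        Eigen::MatrixXf H = J.transpose() * weight_ * J;
        Eigen::VectorXf b = J.transpose() * (weight_ * error);
        Eigen::VectorXf del = H.ldlt().solve(b);
        (void)del;  // state update omitted in this sketch
    }
    auto t1 = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration<double>(t1 - t0).count() << " s\n";
}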