Why is Eigen Cholesky decomposition much faster on Linux than on Windows?

I noticed a significant performance difference regarding Cholesky decomposition using the Eigen library.

I am using the latest version of Eigen (3.2.1) with the following reference code:

#include <iostream>
#include <chrono>
#include <Eigen/Core>
#include <Eigen/Cholesky>
using namespace std;
using namespace std::chrono;
using namespace Eigen;

int main()
{
    const MatrixXd::Index size = 4200;
    MatrixXd m = MatrixXd::Random(size, size);
    m = (m + m.transpose()) / 2.0 + 10000 * MatrixXd::Identity(size, size);

    LLT<MatrixXd> llt;
    auto start = high_resolution_clock::now();
    llt.compute(m);
    if (llt.info() != Success)
        cout << "Cholesky decomposition failed!" << endl;
    auto stop = high_resolution_clock::now();

    cout << "Cholesky decomposition in "
         << duration_cast<milliseconds>(stop - start).count()
         << " ms." << endl;

    return 0;
}

I compile this test with g++ -std=c++11 -Wall -O3 -o bench bench.cc and run it first on Windows (using MinGW, GCC 4.8.1) and then on Linux (GCC 4.8.1), both times on the same machine.

On Windows, this gives me:

Cholesky decomposition in 10114 ms.

But on Linux I get:

Cholesky decomposition in 3258 ms.

That is less than a third of the time needed on Windows.

Is there anything on Linux that Eigen uses to achieve this speedup?
And if so, how can I get the same speed on Windows?


Are you compiling for 64-bit on both systems? If not, try enabling SSE2 explicitly (-msse2); on 64-bit targets SSE is enabled by default, which would explain the difference.


Let's look at compiler support first. Eigen's documentation lists the compilers it is successfully used with:

GCC, version 4.1 and newer. Very good performance with GCC 4.2 and newer.

MSVC (Visual Studio), 2008 and newer (the old 2.x branch of Eigen supports MSVC 2005, but without vectorization).

Intel C++ compiler. Very good performance.

LLVM/Clang++ (2.8 and newer).

MinGW, recent versions. Very good performance with recent versions based on GCC 4.

QNX's QCC compiler.

So both of your compilers, gcc (>= 4.2) and MinGW, should be supported...

However, regarding MinGW the documentation adds one "but":

Eigen is standard C++98 and so should theoretically be compatible with any compliant compiler. Whenever we use some non-standard feature, that is optional and can be disabled.

So even though both builds use gcc, the MinGW build may end up with some of those optional features, such as vectorization, disabled, which would explain the slowdown.

It could be vectorization, memory alignment, or something similar...


Source: https://habr.com/ru/post/1539618/

