I am trying to get the OpenCV function triangulatePoints to work. I feed it point matches generated from optical flow, using two consecutive frames/positions from a single moving camera.
At the moment, these are my steps:
The intrinsic matrix is given and looks as you would expect:
2.6551e+003 0. 1.0379e+003
0. 2.6608e+003 5.5033e+002
0. 0. 1.
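In code I keep this as a 3x3 double-precision matrix, roughly like this (just a sketch; the variable name intrinsics matches the projection step below):

#include <opencv2/core.hpp>

// The camera matrix as a CV_64F cv::Mat.
cv::Mat intrinsics = (cv::Mat_<double>(3, 3) <<
    2.6551e+003, 0.,          1.0379e+003,
    0.,          2.6608e+003, 5.5033e+002,
    0.,          0.,          1.);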
I then calculate the two extrinsic matrices ([R | t]) from the (highly accurate) GPS data and the camera position relative to the GPS. Note that the GPS data is in a Cartesian coordinate system covering the Netherlands, with meters as units, so no awkward latitude/longitude math is required. This yields the following matrices:

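For reference, one such [R | t] can be assembled roughly like this (a sketch; buildExtrinsics, R and C are illustrative names, where R is assumed to be the world-to-camera rotation and C the camera centre in the Dutch map coordinates, in meters, so that t = -R * C):

#include <opencv2/core.hpp>

// Build a 3x4 extrinsic matrix [R | t] for a camera whose centre C is known
// in world (map) coordinates and whose world-to-camera rotation is R.
cv::Mat buildExtrinsics(const cv::Mat& R, const cv::Mat& C)   // R: 3x3 CV_64F, C: 3x1 CV_64F
{
    cv::Mat t = -R * C;      // translation part of the world-to-camera transform
    cv::Mat Rt;
    cv::hconcat(R, t, Rt);   // [R | t], 3x4
    return Rt;
}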
I then simply combine these to get the projection matrices:
projectionMat = intrinsics * extrinsics;
This yields:

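Written out with explicit cv::Mat types, that combination is just (sketch; extrinsics1 and extrinsics2 stand for the two 3x4 matrices above):

// A 3x3 intrinsic matrix times a 3x4 extrinsic matrix gives a 3x4 projection matrix.
// Keeping everything CV_64F avoids mixing float and double inputs later on.
cv::Mat projectionMat1 = intrinsics * extrinsics1;
cv::Mat projectionMat2 = intrinsics * extrinsics2;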
The imagePoints of the first frame are simply the pixels of the image grid, i.e.
(0, 0)...(1080, 1920)
and the imagePoints of the second frame are the same pixels plus the optical flow:
(0 + flowY0, 0 + flowX0)...(1080 + flowYN, 1920 + flowXN)
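To make the point format concrete, this is roughly how such grid/flow matches can be packed (a sketch; buildMatches is an illustrative helper, and it assumes a dense CV_32FC2 flow field such as the one produced by cv::calcOpticalFlowFarneback, where each entry stores the displacement as (dx, dy), i.e. in x/y order):

#include <opencv2/core.hpp>
#include <vector>

// Turn a dense flow field into two matched point lists: the grid pixel in
// frame 1 and the same pixel shifted by the flow in frame 2. Note that
// cv::Point2f stores (x, y), i.e. column first, row second.
void buildMatches(const cv::Mat& flow,                        // CV_32FC2, one (dx, dy) per pixel
                  std::vector<cv::Point2f>& imagePoints1,
                  std::vector<cv::Point2f>& imagePoints2)
{
    for (int y = 0; y < flow.rows; ++y)
    {
        for (int x = 0; x < flow.cols; ++x)
        {
            const cv::Point2f d = flow.at<cv::Point2f>(y, x);
            imagePoints1.emplace_back((float)x, (float)y);    // pixel in the first frame
            imagePoints2.emplace_back(x + d.x, y + d.y);      // where it moved to in the second frame
        }
    }
}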
Then I call (the OpenCV function) triangulatePoints:
cv::triangulatePoints(projectionMat1, projectionMat2, imagePoints1, imagePoints2, outputPoints);
Finally, I normalise outputPoints by dividing each point by its fourth coordinate (w).
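Put together, the call and the division by w look roughly like this (a sketch; it uses cv::convertPointsFromHomogeneous instead of a hand-written loop and assumes the projection matrices and point vectors from above):

#include <opencv2/calib3d.hpp>
#include <vector>

cv::Mat outputPoints;                               // filled as a 4xN homogeneous matrix
cv::triangulatePoints(projectionMat1, projectionMat2,
                      imagePoints1, imagePoints2, outputPoints);

cv::Mat pts4D;
outputPoints.convertTo(pts4D, CV_64F);              // make the depth explicit before dividing by w

std::vector<cv::Point3d> points3d;                  // Euclidean X/Y/Z, one entry per pixel match
cv::convertPointsFromHomogeneous(pts4D.t(), points3d);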
This gives:

These points (which should come out as X/Y/Z, X/Y/Z, ...) are clearly wrong. They all differ from each other by only about 0.01. For example:

(and the remaining points look just the same), which clearly cannot be right.
I suspect I am making a simple mistake somewhere. What could it be? What am I missing?
One more note: the extrinsics are derived from the GPS data, so they will not be perfectly exact, but that should not cause an effect like this.