Up-to-scale 3D triangulation using epipolar geometry

I am currently working on a project in which I have to estimate the 3D coordinates of 2D points of interest detected with a monocular camera.

To be more precise, I have a calibrated image sequence as input, and whenever a new image arrives I need to triangulate points between the previous (left) image and the current (right) image to obtain 3D points.

To do this, I follow these steps:

  • Extract keypoints in the current image
  • Match keypoints between the current and previous image
  • Compute the essential matrix E using RANSAC and the eight-point algorithm
  • Decompose E into the rotation matrix R and the translation vector T
  • Compute 3D points by triangulation (orthogonal regression); see the sketch after this list
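
Roughly, the pipeline looks like the following minimal sketch with OpenCV's Python bindings. The detector (ORB), matcher, and RANSAC parameters are assumptions, and note that cv2.findEssentialMat internally uses the five-point algorithm rather than the eight-point one.

```python
import numpy as np
import cv2

def triangulate_pair(img_prev, img_curr, K):
    """Triangulate points between two consecutive frames of a calibrated camera.
    K is the 3x3 intrinsic matrix from calibration. The result is up to scale."""
    # 1. Extract keypoints and descriptors in both images
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    # 2. Match descriptors between the previous and the current image
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 3. Essential matrix with RANSAC
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)

    # 4. Decompose E into rotation R and translation t (t has unit norm!)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # 5. Triangulate the inlier correspondences
    P1 = K @ np.hstack((np.eye(3), np.zeros((3, 1))))  # previous camera [I | 0]
    P2 = K @ np.hstack((R, t))                         # current camera  [R | t]
    inliers = pose_mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    pts3d = (pts4d[:3] / pts4d[3]).T                   # dehomogenize to Nx3
    return pts3d, R, t
```

Reprojecting pts3d through P1 and P2 should land back on the matched pixels even though the overall scale of the reconstruction is arbitrary.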

The resulting 3D points are wrong when I reproject them into the images. However, I have read that triangulated points are only determined up to an unknown scale factor.

So my question is: what does “up to scale” mean in this context? And what is the solution for obtaining real 3D points in the world coordinate frame of the scene?

I would be grateful for any help!

2 answers

The reconstruction you get from a single moving camera is correct in shape, but its absolute size is unknown. Two scenes that differ only by a uniform scaling, with the camera translation scaled accordingly, produce exactly the same images, so no algorithm can tell them apart.

“Up to scale” therefore means that your triangulated points, and the translation vector T, are all off by one common, unknown multiplicative factor. To obtain real 3D points you have to bring in metric information from outside, for example the known distance between two of the reconstructed points, the known size of an object in the scene, or the true length of the camera baseline; once one real distance is fixed, you can rescale all 3D points by the same factor.


In Structure from Motion the point cloud is only reconstructed up to scale because the Fundamental Matrix / Essential Matrix is itself only defined up to scale: it only has to satisfy the epipolar constraint

x1^T * F * x2 = 0.

This equation still holds if F is multiplied by any non-zero scalar, so neither F nor the translation recovered from it carries any information about the absolute scale of the scene.

If you want real (metric) 3D points, you need additional knowledge of the scene, typically a set of 2D-3D correspondences whose 3D coordinates are known in the world frame. From those you can estimate the camera pose with Perspective-n-Point camera pose estimation (PnP). OpenCV provides an implementation of it.
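
As a hedged sketch of that idea with OpenCV's solvePnP (object_points_3d, image_points_2d, K, and dist are placeholder names for the known world points, their pixel positions, the intrinsics, and the distortion coefficients):

```python
import numpy as np
import cv2

def estimate_metric_pose(object_points_3d, image_points_2d, K, dist):
    """Estimate the camera pose in real-world units from known 2D-3D correspondences."""
    ok, rvec, tvec = cv2.solvePnP(object_points_3d.astype(np.float32),
                                  image_points_2d.astype(np.float32),
                                  K, dist)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    # tvec is expressed in the same metric units as the known 3D points,
    # so points triangulated with this pose carry a real scale.
    return R, tvec
```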



Source: https://habr.com/ru/post/1541425/

