How to calculate the rotation and translation between two cameras?

I know the chessboard camera calibration method and have implemented it.

If I have two cameras looking at the same scene and I calibrate them simultaneously with the checkerboard technique, can I calculate the rotation matrix and translation vector between them? How?

+4
4 answers

If you have the 3D coordinates of corresponding points in each camera's frame, you can compute the optimal rotation matrix and translation vector between them as a rigid-body transformation.
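For example, given matched 3D points expressed in each camera's coordinate frame (say, triangulated chessboard corners), the optimal R and t can be found with the SVD-based Kabsch method. A minimal sketch using OpenCV types follows; the function name and interface are my own illustration, not any particular library's API:

```cpp
// Hypothetical helper: estimates R, t such that dst ~= R * src + t,
// given matched 3D points in each camera's frame (Kabsch method via SVD).
#include <opencv2/core.hpp>
#include <vector>

void rigidTransform3D(const std::vector<cv::Point3d>& src,
                      const std::vector<cv::Point3d>& dst,
                      cv::Mat& R, cv::Mat& t)
{
    CV_Assert(src.size() == dst.size() && src.size() >= 3);

    // Centroids of both point sets.
    cv::Point3d cs(0, 0, 0), cd(0, 0, 0);
    for (size_t i = 0; i < src.size(); ++i) { cs += src[i]; cd += dst[i]; }
    cs *= 1.0 / src.size();
    cd *= 1.0 / dst.size();

    // Cross-covariance matrix H of the centered points.
    cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);
    for (size_t i = 0; i < src.size(); ++i) {
        cv::Mat a = (cv::Mat_<double>(3, 1) << src[i].x - cs.x, src[i].y - cs.y, src[i].z - cs.z);
        cv::Mat b = (cv::Mat_<double>(1, 3) << dst[i].x - cd.x, dst[i].y - cd.y, dst[i].z - cd.z);
        H += a * b;
    }

    // R = V * U^T from the SVD H = U * S * V^T, fixing a possible reflection.
    cv::SVD svd(H);
    R = svd.vt.t() * svd.u.t();
    if (cv::determinant(R) < 0) {   // reflection case: flip the last row of V^T
        cv::Mat vt = svd.vt.clone();
        vt.row(2) *= -1.0;
        R = vt.t() * svd.u.t();
    }

    // t = centroid_dst - R * centroid_src.
    cv::Mat c_s = (cv::Mat_<double>(3, 1) << cs.x, cs.y, cs.z);
    cv::Mat c_d = (cv::Mat_<double>(3, 1) << cd.x, cd.y, cd.z);
    t = c_d - R * c_s;
}
```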

+3

If you are already using OpenCV, why not use cv::stereoCalibrate?

It returns the rotation and translation between the cameras. The only thing you need to do is make sure the calibration chessboard is visible to both cameras.

The exact method is shown in the .cpp examples provided with the OpenCV library (I have version 2.2, and the samples were installed by default in /usr/local/share/opencv/samples).

The sample is called stereo_calib.cpp. It doesn't clearly explain what is being done there (you might want to look at Learning OpenCV for that), but it is something you can build on.
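For reference, a minimal sketch of the call using the modern (OpenCV 3+/4) argument order is below; note that the 2.x API mentioned above places the termination criteria before the flags, and the chessboard-corner collection is assumed to happen elsewhere:

```cpp
// Minimal sketch, not the full stereo_calib.cpp sample. Assumes the chessboard
// corners have already been collected from N image pairs with
// cv::findChessboardCorners, and that K1/D1, K2/D2 come from per-camera
// cv::calibrateCamera runs.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

int main()
{
    std::vector<std::vector<cv::Point3f>> objectPoints;   // board corners in board coords, per view
    std::vector<std::vector<cv::Point2f>> imagePoints1;   // detected corners, camera 1, per view
    std::vector<std::vector<cv::Point2f>> imagePoints2;   // detected corners, camera 2, per view
    cv::Size imageSize(640, 480);                         // assumed resolution

    cv::Mat K1, D1, K2, D2;                               // intrinsics from calibrateCamera
    cv::Mat R, T, E, F;                                   // outputs

    double rms = cv::stereoCalibrate(
        objectPoints, imagePoints1, imagePoints2,
        K1, D1, K2, D2, imageSize, R, T, E, F,
        cv::CALIB_FIX_INTRINSIC,
        cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 100, 1e-5));

    // R and T now give camera 2's orientation and position relative to camera 1.
    std::cout << "reprojection RMS: " << rms << std::endl;
    return 0;
}
```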

+2

If I understand correctly, you have two calibrated cameras observing a common scene, and you want to recover their spatial arrangement. This is possible (provided you find enough point correspondences between the images), but only up to an unknown scale factor on the translation. That is, we can recover the rotation (3 degrees of freedom, DOF) but only the direction of the translation (2 DOF). This is because we cannot tell whether the scene is large and the cameras far apart, or the scene is small and the cameras close together. In the literature this 5-DOF configuration is called relative pose or relative orientation (Google is your friend). If your measurements are accurate and in general position, five point correspondences are enough to recover the solution up to a finite set of candidates, and a sixth resolves the ambiguity. A relatively recent algorithm does exactly that:

Nistér, D., "An Efficient Solution to the Five-Point Relative Pose Problem," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 6, pp. 756-770, June 2004. doi: 10.1109/TPAMI.2004.17
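OpenCV (3.0 and later, so newer than the version mentioned in the answer above) ships a five-point solver based on this paper: cv::findEssentialMat estimates the essential matrix from matched points, and cv::recoverPose extracts R and the unit-length translation. A hedged sketch, assuming a shared intrinsic matrix K and precomputed matches pts1/pts2:

```cpp
// Sketch of relative-pose recovery for calibrated cameras.
// pts1/pts2 are matched pixel coordinates from the two cameras,
// and K is the (assumed shared) camera intrinsic matrix.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

void relativePose(const std::vector<cv::Point2f>& pts1,
                  const std::vector<cv::Point2f>& pts2,
                  const cv::Mat& K,
                  cv::Mat& R, cv::Mat& t)
{
    cv::Mat inliers;
    // RANSAC rejects outlier correspondences while solving for E.
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inliers);

    // Decompose E into R and t; the cheirality check (points in front of
    // both cameras) selects the single valid one of the four candidates.
    cv::recoverPose(E, pts1, pts2, K, R, t, inliers);
    // ||t|| == 1: direction only, the overall scale is unrecoverable.
}
```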

+1

Update:

Use a structure-from-motion program/package, for example Bundler, to solve simultaneously for the 3D locations of scene points and the relative camera parameters.

Any such package requires several inputs:

  • Calibration for each of the cameras you have.
  • The 2D pixel locations of interest points in each camera (use an interest point detector, e.g. Harris or DoG, the first stage of SIFT).
  • The correspondences between interest points across the cameras (use a suitable descriptor, e.g. SIFT, SURF, SSD, etc.); a sketch of these two steps follows this list.
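Here is a sketch of the detection and matching steps, using ORB as a stand-in detector/descriptor (SIFT and SURF work the same way through their own create() calls, though they may live in opencv_contrib depending on the version):

```cpp
// Detect interest points in both images and match their descriptors.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

void detectAndMatch(const cv::Mat& img1, const cv::Mat& img2,
                    std::vector<cv::Point2f>& pts1,
                    std::vector<cv::Point2f>& pts2)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // Brute-force Hamming matching with a ratio test to drop ambiguous matches.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);
    for (const auto& m : knn) {
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
            pts1.push_back(kp1[m[0].queryIdx].pt);
            pts2.push_back(kp2[m[0].trainIdx].pt);
        }
    }
}
```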

Note that the solution is subject to a scale ambiguity, so to fix the scale you need to measure the distance between the cameras, or between two objects in the scene.

Original answer (applies mainly to uncalibrated cameras, as the comments kindly point out):

The Caltech camera calibration toolbox can solve for and visualize both the intrinsic parameters (lens parameters, etc.) and the extrinsic parameters (the camera position for each photo). The latter is what you are interested in.

Hartley and Zisserman's blue book (Multiple View Geometry in Computer Vision) is also a great reference. In particular, you may want to look at the chapter on epipolar lines and the fundamental matrix, which is freely available online via the link.
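In the uncalibrated setting that chapter covers, the fundamental matrix can be estimated directly from pixel correspondences; a small sketch using OpenCV's cv::findFundamentalMat (my choice of tool here, not the book's):

```cpp
// Uncalibrated case: estimate the fundamental matrix F directly from pixel
// correspondences (RANSAC around the 8-point algorithm). Corresponding
// homogeneous pixels satisfy x2^T * F * x1 = 0.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

cv::Mat estimateF(const std::vector<cv::Point2f>& pts1,
                  const std::vector<cv::Point2f>& pts2)
{
    cv::Mat inlierMask;   // marks which correspondences survived RANSAC
    return cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99, inlierMask);
}
```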

0

Source: https://habr.com/ru/post/1346967/

