Computing the rotation and translation matrices between two odometry poses for monocular linear triangulation

Recently I have been trying to implement a system that identifies an object and triangulates its three-dimensional position in a robotic system. The general scheme of the process is as follows:

  • Identify the object by SURF matching from a set of "training" images against the live camera feed; this also gives me a cropped rectangular image containing only the identified object.
  • Move/rotate the robot by a certain amount.
  • Match features from the object in the original view to the camera image in the new view.
  • Now I have a set of corresponding 2D points (the same object from two different views), two odometry poses (position + orientation) in the world frame, and the camera intrinsics (focal length, principal point, etc.) from a prior calibration, so I can build two projection matrices and triangulate using the basic linear triangulation method from Hartley and Zisserman's "Multiple View Geometry", p. 312.
  • Solve the equation AX = 0 for each pair of corresponding 2D points, then average the results (a sketch of this step follows the list).
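For reference, here is a minimal sketch of that linear triangulation step, assuming P1 and P2 are the two 3x4 projection matrices and pts1/pts2 are the matched pixel coordinates (the function and variable names are mine, not from the original code):

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence, as in
    Hartley & Zisserman, "Multiple View Geometry".
    x1, x2 are (u, v) pixel coordinates; P1, P2 are 3x4 projections."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # AX = 0: the solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def triangulate_object(P1, P2, pts1, pts2):
    """Triangulate every correspondence and average, as described above."""
    Xs = [triangulate_point(P1, P2, a, b) for a, b in zip(pts1, pts2)]
    return np.mean(Xs, axis=0)
```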

In practice, triangulation only works when there is essentially no rotation; if the robot rotates even slightly while moving (due to, for example, wheel slippage), the estimate is badly off. The same happens in simulation. Since I can only post two hyperlinks, here is a link to a page with images from the simulation (on the map, the red square is the simulated robot position and orientation, and the yellow square is the estimated object position from linear triangulation).

So you can see that the estimate is thrown off even by a small rotation, as in position 2 on that page (that one was 15 degrees; if I rotate any further, the estimate disappears off the map entirely), even in a simulated environment where the exact calibration matrix is known. In a real environment, when I actually drive the robot around, it is worse.

There are no problems with obtaining the point correspondences, nor with actually solving the equation AX = 0 once the matrix A is computed, so I believe the issue lies in how I set up the two camera projection matrices, specifically in how I calculate the translation and rotation from the position/orientation information I have in the world frame. Here is how I do it now:

  • The rotation matrix is built by creating a 1x3 vector [0, (change in heading angle), 0] and converting it to 3x3 with OpenCV's Rodrigues function.
  • The translation vector is built by rotating the two points by the starting angle and then subtracting the final position from the starting position, to obtain the forward and lateral motion of the robot relative to its initial orientation.

As a result, the first projection matrix is K[I|0] and the second is K[R|T], with R and T calculated as described above (a sketch of this construction follows).
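Here is a sketch of that construction as I understand it from the description, assuming planar motion with poses given as (x, z, yaw) and yaw measured about the camera's vertical axis; the function name, pose layout, and sign conventions are all my assumptions:

```python
import numpy as np
import cv2

def projection_matrices(K, pose1, pose2):
    """Build P1 = K[I|0] and P2 = K[R|T] from two planar odometry poses,
    mirroring the construction described in the question.
    Poses are (x, z, yaw); names and conventions are hypothetical."""
    x1, z1, yaw1 = pose1
    x2, z2, yaw2 = pose2

    # Rotation: a 1x3 vector [0, change in heading, 0] converted to a
    # 3x3 matrix with OpenCV's Rodrigues function.
    R, _ = cv2.Rodrigues(np.array([0.0, yaw2 - yaw1, 0.0]))

    # Translation: final position subtracted from the starting position...
    d = np.array([x1 - x2, 0.0, z1 - z2])
    # ...rotated by the STARTING angle, giving forward/lateral motion
    # relative to the initial orientation (the step the answer questions;
    # whether you need R1 or R1.T here depends on your yaw convention).
    R1, _ = cv2.Rodrigues(np.array([0.0, yaw1, 0.0]))
    T = R1.T @ d

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, T.reshape(3, 1)])
    return P1, P2
```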

Is there something I'm doing wrong here? Or could it be some other problem? Any help would be greatly appreciated.

1 answer

I don't have Hartley and Zisserman at hand, so I cannot follow the whole process, but the problem is most likely in the components of your projection matrices. Keep in mind that because the extrinsic part is a 3x4 matrix [R|T], the rotation is applied before the translation. That means the camera axes are first rotated to the final orientation, and only then is your translation vector applied, yet you built that vector by "rotating the two points by the starting angle". I would try building the translation vector by rotating the points by the final angle instead, then subtracting the positions, and see if that solves your problem. From the summary you posted, it seems to me that your translation vector is defined in the wrong coordinate frame. A sketch of this suggestion follows.
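As a hedged sketch of that suggestion, under the same (x, z, yaw) pose assumptions as above (the exact signs depend on how yaw is defined, so treat this as illustrative rather than definitive):

```python
import numpy as np
import cv2

def relative_extrinsics(pose1, pose2):
    """Sketch of the suggested fix: R and T such that a point X1 in the
    first camera's frame maps to X2 = R @ X1 + T in the second.
    Poses are (x, z, yaw); here yaw rotates camera coordinates into the
    world frame (flip the signs if your convention is the opposite)."""
    x1, z1, yaw1 = pose1
    x2, z2, yaw2 = pose2

    Ry = lambda a: cv2.Rodrigues(np.array([0.0, a, 0.0]))[0]

    # Relative rotation taking camera-1 coordinates into camera-2 coordinates.
    R = Ry(yaw2).T @ Ry(yaw1)

    # Translation: subtract the positions, then rotate by the FINAL
    # heading (not the starting one), so T lives in the second camera's
    # frame -- consistent with [R|T] applying rotation before translation.
    T = Ry(yaw2).T @ np.array([x1 - x2, 0.0, z1 - z2])
    return R, T
```

With these, the second projection matrix becomes K @ np.hstack([R, T.reshape(3, 1)]), while P1 = K[I|0] stays as before.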

