Estimating camera position using the OpenCV C++ solvePnP function

I am trying to measure the position of the camera, and I did the following.

  1. Mark the 3-D world coordinates (take z = 0, since the surface is flat) of the corners of a square on a flat surface, and use that as the world coordinate system (in cm).

     Taking the top-left corner of the square as my origin, I set the world points in the following (x, y) or (col, row) order: (0, 0), (12.8, 0), (12.8, 12.8), (0, 12.8) - in cm.

  2. Detect these points in my image (in pixels); the image points and the world points are in the same order.

  3. I calibrated my camera for the intrinsic matrix and distortion coefficients.

  4. I use the solvePnP function to get rvec and tvec.

  5. I use the Rodrigues function to get the rotation matrix.

  6. To check that rvec and tvec are correct, I project the 3-D points (z = 0) back onto the image plane using projectPoints, and I get the points correctly on my image, with an error of about 3 pixels on the X axis.

  7. Now I calculate the position of my camera in the world frame using the formula:

     cam_world_pos = -inverse(R) * tvec. (I checked this formula on many blogs, and it makes sense.)

  8. But the x, y and z of my cam_world_pos (in cm) do not seem to be correct.
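For reference, the camera-position formula above follows directly from the world-to-camera transform that solvePnP estimates. A point in world coordinates maps to camera coordinates as:

    X_cam = R * X_world + tvec

The camera center C is the world point that maps to the camera origin, X_cam = 0:

    0 = R * C + tvec   =>   C = -inverse(R) * tvec = -transpose(R) * tvec

where the last step uses the fact that a rotation matrix is orthonormal, so its inverse equals its transpose.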

The rvec and tvec returned are 3 x 1 vectors (their X, Y, Z components are listed in the code below).

If solvePnP is returning the correct rvec and tvec, then, since the reprojection check passes, I must be going wrong somewhere when computing the camera position from them.

And if the rvec and tvec from solvePnP are themselves wrong, I would like to know why, and how rvec and tvec should be interpreted.

My camera details:

Image size - 720 (rows) * 1280 (cols)

cameraMatrix_Front=[908.65   0     642.88
                     0     909.28   364.95
                     0        0        1]

distCoeffs_Front=[-0.4589, 0.09462, -1.46*10^-3, 1.23*10^-3]

The OpenCV C++ code:

vector<Point3f> front_object_pts;
vector<Point2f> front_image_pts;
Mat rvec_front;
Mat tvec_front;
Mat rotation_front;
Mat world_position_front_cam;


//Fill front object points(x-y-z order in cms)
//It is square of side 12.8cms on Z=0 plane
front_object_pts.push_back(Point3f(0, 0, 0));
front_object_pts.push_back(Point3f(12.8, 0, 0));
front_object_pts.push_back(Point3f(12.8,12.8,0));
front_object_pts.push_back(Point3f(0, 12.8, 0));


//Corresponding Image points detected in the same order as object points
front_image_pts.push_back(points_front[0]);
front_image_pts.push_back(points_front[1]);
front_image_pts.push_back(points_front[2]);
front_image_pts.push_back(points_front[3]);

//Detected points in image matching the 3-D points in the same order
//(467,368)
//(512,369)
//(456,417)
//(391,416)

//Get rvec and tvec using Solve PnP
solvePnP(front_object_pts, front_image_pts, cameraMatrix_Front,
         Mat(4,1,CV_64FC1,Scalar(0)), rvec_front, tvec_front, false, CV_ITERATIVE);

//Output of SolvePnP
//tvec=[-26.951,0.6041,134.72]  (3 x 1 matrix)
//rvec=[-1.0053,0.6691,0.3752]  (3 x 1 matrix)


//Check whether rvec and tvec are correct by projecting the 3-D object points back to the image
vector<Point2f> check_front_image_pts;
projectPoints(front_object_pts, rvec_front, tvec_front,
             cameraMatrix_Front, Mat::zeros(4,1,CV_64FC1), check_front_image_pts);

//Note that I pass a zero vector for **distCoeffs** here as well,
//since my image points are detected after radial distortion is removed

//Get rotation matrix
Rodrigues(rvec_front, rotation_front);

//Get rotation matrix inverse
Mat rotation_inverse;
transpose(rotation_front, rotation_inverse);

//Get camera position in world coordinates
world_position_front_cam = -rotation_inverse * tvec_front;

//Camera position measured by hand (in cm)

X = 47

Y = 18

Z = 25

//Camera position computed by the code above

X = 110

Y = 71

Z = -40 cm

Source: https://habr.com/ru/post/1682504/

