How to undistort points in camera shot coordinates and obtain the corresponding undistorted image coordinates?

I use OpenCV to undistort a set of points after camera calibration. The code follows.

    const int npoints = 2; // number of points specified

    // Points initialization.
    // Only 2 points in this example, in real code they are read from a file.
    float input_points[npoints][2] = {{0, 0}, {2560, 1920}};

    CvMat * src = cvCreateMat(1, npoints, CV_32FC2);
    CvMat * dst = cvCreateMat(1, npoints, CV_32FC2);

    // fill src matrix
    float * src_ptr = (float*)src->data.ptr;
    for (int pi = 0; pi < npoints; ++pi) {
        for (int ci = 0; ci < 2; ++ci) {
            *(src_ptr + pi * 2 + ci) = input_points[pi][ci];
        }
    }

    cvUndistortPoints(src, dst, &camera1, &distCoeffs1);

After the code above dst contains the following numbers:

 -8.82689655e-001 -7.05507338e-001 4.16228324e-001 3.04863811e-001 

which are far too small compared to the numbers in src.

At the same time, if I undistort the whole image with the call:

    cvUndistort2( srcImage, dstImage, &camera1, &distCoeffs1 );

I get a good undistorted image, which means the pixel coordinates do not change nearly so dramatically, unlike the individual points.

How can I get the same undistortion for individual points as for images? Thanks.

+6
2 answers

The points must be "denormalized" using the camera matrix.

More specifically, after the call to cvUndistortPoints, the following transformation should also be applied:

    double fx = CV_MAT_ELEM(camera1, double, 0, 0);
    double fy = CV_MAT_ELEM(camera1, double, 1, 1);
    double cx = CV_MAT_ELEM(camera1, double, 0, 2);
    double cy = CV_MAT_ELEM(camera1, double, 1, 2);

    float * dst_ptr = (float*)dst->data.ptr;
    for (int pi = 0; pi < npoints; ++pi) {
        float& px = *(dst_ptr + pi * 2);
        float& py = *(dst_ptr + pi * 2 + 1);
        // perform transformation.
        // In fact this is equivalent to multiplication by the camera matrix.
        px = px * fx + cx;
        py = py * fy + cy;
    }

More information on the OpenCV camera matrix can be found in 'Camera Calibration and 3D Reconstruction'.
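
For reference, the per-coordinate transform above is exactly a multiplication by the camera (intrinsic) matrix in homogeneous coordinates. Here is a minimal sketch illustrating the equivalence; the fx, fy, cx, cy names match the code above, but the numeric intrinsics are made up for illustration:

    #include <opencv2/core.hpp>
    #include <cstdio>

    int main()
    {
        // Hypothetical intrinsics, for illustration only.
        double fx = 1000.0, fy = 1000.0, cx = 1280.0, cy = 960.0;

        // OpenCV's camera matrix layout:
        //     | fx  0  cx |
        // K = |  0  fy cy |
        //     |  0  0   1 |
        cv::Matx33d K(fx,  0.0, cx,
                      0.0, fy,  cy,
                      0.0, 0.0, 1.0);

        // A normalized point (x, y, 1) as returned by cvUndistortPoints.
        cv::Vec3d normalized(-0.8827, -0.7055, 1.0);

        // K * (x, y, 1)^T gives (fx*x + cx, fy*y + cy, 1)^T,
        // i.e. the same result as the loop in the answer.
        cv::Vec3d pixel = K * normalized;
        std::printf("u = %f, v = %f\n", pixel(0), pixel(1));
        return 0;
    }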

UPDATE:

Using the C++ API, the following should also work (passing the camera matrix again as the last argument P makes cv::undistortPoints return pixel coordinates instead of normalized ones):

    std::vector<cv::Point2f> inputDistortedPoints = ...
    std::vector<cv::Point2f> outputUndistortedPoints;
    cv::Mat cameraMatrix = ...
    cv::Mat distCoeffs = ...

    cv::undistortPoints(inputDistortedPoints, outputUndistortedPoints,
                        cameraMatrix, distCoeffs, cv::noArray(),
                        cameraMatrix);
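
A self-contained sketch of this call, with hypothetical calibration data standing in for the values elided above (every number here is made up for illustration; in real code they come from cv::calibrateCamera or a file):

    #include <opencv2/opencv.hpp>
    #include <cstdio>
    #include <vector>

    int main()
    {
        // Hypothetical calibration results, for illustration only.
        cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
            1000.0,    0.0, 1280.0,
               0.0, 1000.0,  960.0,
               0.0,    0.0,    1.0);
        cv::Mat distCoeffs = (cv::Mat_<double>(1, 5) <<
            -0.2, 0.05, 0.0, 0.0, 0.0); // k1, k2, p1, p2, k3

        std::vector<cv::Point2f> inputDistortedPoints = {
            {0.0f, 0.0f}, {2560.0f, 1920.0f}
        };
        std::vector<cv::Point2f> outputUndistortedPoints;

        // Passing cameraMatrix as the new projection matrix P makes
        // undistortPoints return pixel coordinates, not normalized ones.
        cv::undistortPoints(inputDistortedPoints, outputUndistortedPoints,
                            cameraMatrix, distCoeffs, cv::noArray(),
                            cameraMatrix);

        for (const cv::Point2f& p : outputUndistortedPoints)
            std::printf("(%f, %f)\n", p.x, p.y);
        return 0;
    }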
+10

It may be the size of your matrix :)

OpenCV expects a vector of points - a column or a row matrix with two channels. But because your input matrix is only 2 points, and the number of channels is also 1, it cannot decide whether the input is a row or a column.

So, fill a longer input mat with dummy values and keep only the first results:

    const int npoints = 4; // number of points specified

    // Points initialization.
    // Only 2 points are real in this example; in real code they are
    // read from a file. The rest will be set to 0.
    float input_points[npoints][2] = {{0, 0}, {2560, 1920}};

    CvMat * src = cvCreateMat(1, npoints, CV_32FC2);
    CvMat * dst = cvCreateMat(1, npoints, CV_32FC2);

    // fill src matrix
    float * src_ptr = (float*)src->data.ptr;
    for (int pi = 0; pi < npoints; ++pi) {
        for (int ci = 0; ci < 2; ++ci) {
            *(src_ptr + pi * 2 + ci) = input_points[pi][ci];
        }
    }

    cvUndistortPoints(src, dst, &camera1, &distCoeffs1);

EDIT

Although the OpenCV documentation states that undistortPoints accepts only 2-channel input, it actually accepts

  • 1-column, multi-row, 2-channel mats, or (and this case is not documented)
  • 2-column, multi-row, 1-channel mats, or
  • multi-column, 1-row, 2-channel mats

(as seen in undistort.cpp, line 390)

But a bug inside (or a lack of available information) makes it mix up the second case with the third when the number of columns is 2. Thus, your data is interpreted as a 2-column, 2-row, 1-channel mat.
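
If you are on the C++ API, one way to sidestep the ambiguity entirely is to wrap the points in a cv::Mat, which produces an N x 1, 2-channel matrix even for N == 2. A sketch under that assumption (the helper name is made up; cameraMatrix and distCoeffs are assumed to come from a prior calibration):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Hypothetical helper: undistort a small set of points without
    // padding them with dummy values.
    std::vector<cv::Point2f> undistortFewPoints(
        const std::vector<cv::Point2f>& points,
        const cv::Mat& cameraMatrix,
        const cv::Mat& distCoeffs)
    {
        // Wrapping the vector yields an N x 1 CV_32FC2 Mat, whose
        // layout cannot be confused with a 1-channel matrix.
        cv::Mat src(points), dst;
        cv::undistortPoints(src, dst, cameraMatrix, distCoeffs);
        return std::vector<cv::Point2f>(dst.begin<cv::Point2f>(),
                                        dst.end<cv::Point2f>());
    }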

+1

Source: https://habr.com/ru/post/903731/

