What is the lens distortion model of the Tango project?

The Project Tango C API documentation says that the distortion of a TANGO_CALIBRATION_POLYNOMIAL_3_PARAMETERS lens is modeled as:

 x_corr_px = x_px (1 + k1 * r2 + k2 * r4 + k3 * r6)
 y_corr_px = y_px (1 + k1 * r2 + k2 * r4 + k3 * r6)

That is, undistorted coordinates are a power series in the distorted coordinates. There is another definition in the Java API, but that description is not detailed enough to tell in which direction the mapping applies.

I have had a lot of trouble registering things correctly, and I suspect that the mapping may actually go in the opposite direction, i.e. the distorted coordinates are a power series in the undistorted coordinates. If the camera was calibrated with OpenCV, the cause of the problem may be that the OpenCV documentation contradicts itself. The easiest description to find and understand is the OpenCV Camera Calibration Guide, which agrees with the Project Tango docs:

[image: distortion equations from the OpenCV camera calibration guide]

But, on the other hand, the OpenCV API documentation indicates that the mapping happens differently:

[image: distortion equations from the OpenCV API documentation]

My experiments with OpenCV suggest that its API documentation is right and the tutorial is wrong. A positive k1 (with all other distortion parameters set to zero) means pincushion distortion, and a negative k1 means barrel distortion. This is consistent with what Wikipedia says about the Brown–Conrady model, and is the opposite of the Tsai model. Note that the distortion can be modeled in either direction, depending on what makes the math more convenient. I filed a bug against OpenCV about this mismatch.

So my question is: is the Project Tango lens distortion model the same as in OpenCV (despite the documentation)?

Here is the image that I captured from a color camera (apparently, slightly tinted):

[image: raw image captured from the Tango color camera]

And here is the camera calibration reported by the Tango service:

 distortion = {double[5]@3402}
   [0] = 0.23019999265670776
   [1] = -0.6723999977111816
   [2] = 0.6520439982414246
   [3] = 0.0
   [4] = 0.0
 calibrationType = 3
 cx = 638.603
 cy = 354.906
 fx = 1043.08
 fy = 1043.1
 cameraId = 0
 height = 720
 width = 1280

And here is how I undistort it with OpenCV in Python:

 >>> import cv2
 >>> import numpy
 >>> src = cv2.imread('tango00042.png')
 >>> d = numpy.array([0.2302, -0.6724, 0, 0, 0.652044])
 >>> m = numpy.array([[1043.08, 0, 638.603], [0, 1043.1, 354.906], [0, 0, 1]])
 >>> h, w = src.shape[:2]
 >>> mDst, roi = cv2.getOptimalNewCameraMatrix(m, d, (w, h), 1, (w, h))
 >>> dst = cv2.undistort(src, m, d, None, mDst)
 >>> cv2.imwrite('foo.png', dst)

This produces the following result, which may be slightly overcorrected at the top edge, but is much better than my attempts with the reverse model:

[image: undistorted result]

2 answers

The Tango C-API docs state that (x_corr_px, y_corr_px) is a "corrected output position". This corrected output position needs to be scaled by the focal length and offset by the principal point to match the distorted pixel coordinates.

So, to project a point on an image, you need:

  • Transform the 3D point into the camera frame
  • Convert the point to normalized image coordinates (x, y)
  • Calculate r2, r4, r6 for the normalized image coordinates ( r2 = x*x + y*y )
  • Calculate (x_corr_px, y_corr_px) using the equations above:

     x_corr_px = x (1 + k1 * r2 + k2 * r4 + k3 * r6)
     y_corr_px = y (1 + k1 * r2 + k2 * r4 + k3 * r6)
  • Calculate distorted coordinates

     x_dist_px = x_corr_px * fx + cx
     y_dist_px = y_corr_px * fy + cy
  • Look up (x_dist_px, y_dist_px) in the source buffer containing the distorted image.
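The steps above can be collected into a short Python sketch (my own illustration; project_tango_point is a hypothetical helper name, and the example uses the fx, fy, cx, cy and distortion values reported by the Tango service in the question):

```python
def project_tango_point(p_cam, fx, fy, cx, cy, k1, k2, k3):
    """Project a 3D point in the camera frame to distorted pixel
    coordinates, following the steps above."""
    X, Y, Z = p_cam
    # Normalized image coordinates.
    x = X / Z
    y = Y / Z
    # Radial terms.
    r2 = x * x + y * y
    r4 = r2 * r2
    r6 = r2 * r4
    # Apply the polynomial distortion.
    scale = 1 + k1 * r2 + k2 * r4 + k3 * r6
    x_corr = x * scale
    y_corr = y * scale
    # Scale by focal length and offset by the principal point.
    return x_corr * fx + cx, y_corr * fy + cy

# Example with the calibration reported in the question:
u, v = project_tango_point((0.1, 0.05, 1.0),
                           fx=1043.08, fy=1043.1, cx=638.603, cy=354.906,
                           k1=0.2302, k2=-0.6724, k3=0.652044)
```

A point on the optical axis, (0, 0, Z), lands exactly on (cx, cy) regardless of the distortion coefficients, which is a quick way to check the implementation.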

This also means that the corrected coordinates are the normalized coordinates scaled by a power series in the norm of the normalized image coordinates (which is the opposite of what the question suggests).

Looking at the implementation of cvProjectPoints2 in OpenCV (see [opencv]/modules/calib3d/src/calibration.cpp ), the "Poly3" distortion in OpenCV is applied in the same direction as in Tango. All three versions (Tango docs, OpenCV tutorial, OpenCV API) are consistent and correct.

Good luck and hopefully it helps!

(Update: having examined the code more carefully, it turns out that the corrected coordinates and the distorted coordinates are not the same thing. I have deleted the incorrect parts of my answer; the rest of the answer is still correct.)


This may not be the right place to post it, but I really want to share a readable version of the code OpenCV actually uses to undo the distortion.

I am sure I am not the only one who needs x_corrected and y_corrected and could not find an easy, understandable formula for them.

I have rewritten the essential part of cv2.undistortPoints in Python, and you may notice that the correction is iterative. This matters because there is no closed-form solution for the 9th-degree polynomial, and all we can do is apply its inverse several times to converge on a numerical solution.

 def myUndistortPoint(pt, CM, DC):
     # Tuple parameters are Python 2 only, so unpack explicitly.
     x0, y0 = pt
     [[k1, k2, p1, p2, k3, k4, k5, k6]] = DC
     fx, _, cx = CM[0]
     _, fy, cy = CM[1]
     x = x_src = (x0 - cx) / fx
     y = y_src = (y0 - cy) / fy
     for _ in range(5):
         r2 = x**2 + y**2
         r4 = r2**2
         r6 = r2 * r4
         rad_dist = (1 + k4*r2 + k5*r4 + k6*r6) / (1 + k1*r2 + k2*r4 + k3*r6)
         tang_dist_x = 2*p1 * x*y + p2*(r2 + 2*x**2)
         tang_dist_y = 2*p2 * x*y + p1*(r2 + 2*y**2)
         x = (x_src - tang_dist_x) * rad_dist
         y = (y_src - tang_dist_y) * rad_dist
     x = x * fx + cx
     y = y * fy + cy
     return x, y

To speed things up, you can use only three iterations; on most cameras this gives enough accuracy to land within a pixel.


Source: https://habr.com/ru/post/986015/

