Correcting fisheye distortion programmatically

UPDATE ON BOUNTY STATUS:

I have figured out how to map a linear lens from destination coordinates to source coordinates.

How do you calculate the radial distance from the center to go from fisheye to rectilinear?

  • 1) I am actually struggling to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted? (image: SIhfE.jpg)

  • 2) I also see that my undistortion is imperfect on some lenses, presumably those that are not strictly linear. What is the equivalent to-and-from mapping between source and destination coordinates for those lenses? Again, more code than just mathematical formulae, please... (image: ickIZ.jpg)




The question, as originally stated:

I have some points that describe the position in a picture taken with a fisheye lens.

I want to convert these points to rectilinear coordinates. I want to undistort the image.

I found this description of how to generate a fisheye effect, but not how to reverse it.

There is also a blog post describing how to do this with existing tools; these pictures are from it:

(1) Source (original photo: qLNQV.jpg)
Input: the original fisheye-distorted image to be corrected.

(2) Destination (original photo: ogb9b.jpg)
Output: the corrected image (technically also with perspective correction, but that is a separate step).

How do you calculate the radial distance from the center to go from fisheye to rectilinear?

My stub looks like this:

    Point correct_fisheye(const Point& p, const Size& img) {
        // to polar
        const Point centre = {img.width/2, img.height/2};
        const Point rel = {p.x - centre.x, p.y - centre.y};
        const double theta = atan2(rel.y, rel.x);
        double R = sqrt((rel.x*rel.x) + (rel.y*rel.y));
        // fisheye undistortion in here please
        //... change R ...
        // back to rectangular
        const Point ret = Point(centre.x + R*cos(theta), centre.y + R*sin(theta));
        fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",
                p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
        return ret;
    }

Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I am completely baffled by the OpenCV documentation. Is there an easy way to do this in OpenCV, and does it perform well enough to apply to a live video feed?

+48
math geometry graphics projection
Mar 19 '10 at 13:50
6 answers

The description you link to states that projection by a pinhole camera (one that introduces no lens distortion) is modeled by

 R_u = f*tan(theta) 

and projection by a common fisheye lens camera (that is, distorted) is modeled by

 R_d = 2*f*sin(theta/2) 

You already know R_d and theta, and if you knew the camera's focal length (represented by f), then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,

 R_u = f*tan(2*asin(R_d/(2*f))) 

is the formula you are looking for. The focal length f can be estimated by calibrating the camera, or by other means such as letting the user provide feedback on how well the corrected image looks, or by using knowledge of the original scene.
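A minimal sketch of that correction (the function name and pixel-unit convention are mine), in the shape needed for the stub in the question. Note the domain limit: the result is only finite while R_d < sqrt(2)*f, which keeps theta below pi/2:

    from math import asin, tan

    def undistort_radius(R_d, f):
        # invert the fisheye model R_d = 2*f*sin(theta/2)
        theta = 2 * asin(R_d / (2 * f))
        # apply the pinhole model R_u = f*tan(theta)
        return f * tan(theta)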

To solve the same problem using OpenCV, you would need to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (and don't forget to check the errata). Then you can use a program such as this one (written with the Python bindings for OpenCV) to reverse the lens distortion:

    #!/usr/bin/python
    # ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056

    import sys
    import cv

    def main(argv):
        if len(argv) < 10:
            print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
            sys.exit(-1)

        src = argv[1]
        fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]

        intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
        cv.Zero(intrinsics)
        intrinsics[0, 0] = float(fx)
        intrinsics[1, 1] = float(fy)
        intrinsics[2, 2] = 1.0
        intrinsics[0, 2] = float(cx)
        intrinsics[1, 2] = float(cy)

        dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
        cv.Zero(dist_coeffs)
        dist_coeffs[0, 0] = float(k1)
        dist_coeffs[0, 1] = float(k2)
        dist_coeffs[0, 2] = float(p1)
        dist_coeffs[0, 3] = float(p2)

        src = cv.LoadImage(src)
        dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
        mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
        mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
        cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
        cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
        # cv.Undistort2(src, dst, intrinsics, dist_coeffs)
        cv.SaveImage(output, dst)

    if __name__ == '__main__':
        main(sys.argv)
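For later readers: a rough equivalent using the newer cv2 bindings (my sketch, not part of the original answer; it assumes the same command-line arguments as above):

    #!/usr/bin/python
    import sys
    import cv2
    import numpy as np

    fx, fy, cx, cy, k1, k2, p1, p2 = map(float, sys.argv[2:10])
    K = np.array([[fx, 0, cx],
                  [0, fy, cy],
                  [0,  0,  1]], dtype=np.float64)       # intrinsics matrix
    dist = np.array([k1, k2, p1, p2], dtype=np.float64) # distortion coefficients
    img = cv2.imread(sys.argv[1])
    cv2.imwrite(sys.argv[10], cv2.undistort(img, K, dist))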

Also note that OpenCV uses a very different lens distortion model from the one on the web page you linked to.

+28
Mar 21 '10 at 14:13

(Original poster providing an alternative)

The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I would appreciate help reversing it.)

I got to it through trial and error: I do not fundamentally grasp why this code works; explanations and improved accuracy are appreciated!

    from math import sqrt, atan

    def dist(x, y):
        return sqrt(x*x + y*y)

    def correct_fisheye(src_size, dest_size, dx, dy, factor):
        """ returns a tuple of source coordinates (sx, sy)
            (note: values can be out of range) """
        # convert dx,dy to relative coordinates
        rx, ry = dx - (dest_size[0]/2), dy - (dest_size[1]/2)
        # calc theta
        r = dist(rx, ry) / (dist(src_size[0], src_size[1]) / factor)
        if 0 == r:
            theta = 1.0
        else:
            theta = atan(r) / r
        # back to absolute coordinates
        sx, sy = (src_size[0]/2) + theta*rx, (src_size[1]/2) + theta*ry
        # done
        return (int(round(sx)), int(round(sy)))
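A minimal usage sketch (my own, assuming PIL/Pillow and nearest-neighbour sampling) that applies this mapping to every destination pixel:

    from PIL import Image

    def undistort_image(src_img, factor=3.0):
        w, h = src_img.size
        dst_img = Image.new(src_img.mode, (w, h))
        for dy in range(h):
            for dx in range(w):
                sx, sy = correct_fisheye((w, h), (w, h), dx, dy, factor)
                if 0 <= sx < w and 0 <= sy < h:  # skip out-of-range sources
                    dst_img.putpixel((dx, dy), src_img.getpixel((sx, sy)))
        return dst_img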

When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at high-quality interpolation):

(Result: http://www.freeimagehosting.net/uploads/9c3fd2e82e.jpg)

(And this, from the blog post, for comparison:)

(Using Panotools: http://photo.net/learn/fisheye/fe06.jpg)

+7
Mar 23 '10 at 17:44

If you believe your formulas are exact, you can compute an exact formula with trig, like so:

    Rin = 2 f sin(w/2)   ->   sin(w/2) = Rin/(2f)
    Rout = f tan(w)      ->   tan(w) = Rout/f

    (Rin/2f)^2 = [sin(w/2)]^2 = (1 - cos(w))/2   ->   cos(w) = 1 - 2*(Rin/2f)^2
    (Rout/f)^2 = [tan(w)]^2 = 1/[cos(w)]^2 - 1
        ->   (Rout/f)^2 = 1/(1 - 2*[Rin/2f]^2)^2 - 1

However, as @jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than relying on a fixed formula, you could try a polynomial expansion:

 Rout = Rin*(1 + A*Rin^2 + B*Rin^4 + ...) 

By tweaking first A, then the higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
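As a rough sketch of that idea (mine, assuming NumPy, with f normalised to 1 and the theoretical mapping from earlier in this thread used as ground truth), the first two coefficients can be found by least squares:

    import numpy as np

    f = 1.0                                # focal length, normalised
    rin = np.linspace(0.01, 1.2, 200)      # sampled distorted radii Rin
    theta = 2 * np.arcsin(rin / (2 * f))   # invert Rin = 2*f*sin(theta/2)
    rout = f * np.tan(theta)               # theoretical undistorted radii
    # least-squares fit of Rout/Rin - 1 = A*Rin^2 + B*Rin^4
    M = np.stack([rin**2, rin**4], axis=1)
    A, B = np.linalg.lstsq(M, rout / rin - 1.0, rcond=None)[0]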

Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the same kind of filter you would use to rescale the image linearly without much problem.

Edit: as per your request, the equivalent scaling factor for the above formula:

    (Rout/f)^2 = 1/(1 - 2*[Rin/2f]^2)^2 - 1
        ->   Rout/f = [Rin/f] * sqrt(1 - [Rin/f]^2/4) / (1 - [Rin/f]^2/2)

If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.

The inverse formula should look something like this:

    Rin/f = [Rout/f] / sqrt( sqrt([Rout/f]^2 + 1) * (sqrt([Rout/f]^2 + 1) + 1) / 2 )
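A quick numerical check (my own sketch, with radii expressed in units of f) that this inverse undoes the forward mapping:

    from math import sqrt

    def rout_over_f(x):    # forward mapping, x = Rin/f
        return x * sqrt(1 - x*x/4) / (1 - x*x/2)

    def rin_over_f(t):     # inverse mapping, t = Rout/f
        s = sqrt(t*t + 1)
        return t / sqrt(s * (s + 1) / 2)

    for x in (0.1, 0.5, 1.0):
        assert abs(rin_over_f(rout_over_f(x)) - x) < 1e-9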
+3
Mar 23 '10 at 8:11

I found this PDF file and I have verified that the maths is correct (except for the line vd = xd*fv + v0, which should say vd = yd*fv + v0):

http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf

It does not use all of the latest coefficients available in OpenCV, but I am sure it could be adapted fairly easily.

    double k1 = cameraIntrinsic.distortion[0];
    double k2 = cameraIntrinsic.distortion[1];
    double p1 = cameraIntrinsic.distortion[2];
    double p2 = cameraIntrinsic.distortion[3];
    double k3 = cameraIntrinsic.distortion[4];
    double fu = cameraIntrinsic.focalLength[0];
    double fv = cameraIntrinsic.focalLength[1];
    double u0 = cameraIntrinsic.principalPoint[0];
    double v0 = cameraIntrinsic.principalPoint[1];
    double u, v;

    u = thisPoint->x; // the undistorted point
    v = thisPoint->y;
    double x = ( u - u0 )/fu;
    double y = ( v - v0 )/fv;

    double r2 = (x*x) + (y*y);
    double r4 = r2*r2;

    double cDist = 1 + (k1*r2) + (k2*r4);
    double xr = x*cDist;
    double yr = y*cDist;

    double a1 = 2*x*y;
    double a2 = r2 + (2*(x*x));
    double a3 = r2 + (2*(y*y));

    double dx = (a1*p1) + (a2*p2);
    double dy = (a3*p1) + (a1*p2);

    double xd = xr + dx;
    double yd = yr + dy;

    double ud = (xd*fu) + u0;
    double vd = (yd*fv) + v0;

    thisPoint->x = ud; // the distorted point
    thisPoint->y = vd;
+3
Aug 16 '11

I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.

You want to go the other way. For each pixel in the undistorted (corrected) image, you want to know the corresponding pixel in the distorted image. In other words, given (xu, yu) -> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.

Starting where JMBR did, I do the reverse and find Rd as a function of Ru. I get:

 Rd = f * sqrt(2) * sqrt( 1 - 1/sqrt(r^2 +1)) 

where f is the focal length in pixels (explained below) and r = Ru/f.

The focal length of my camera was 2.5 mm, and each pixel on my CCD was 6 microns square; f was therefore 2500/6 = 417 pixels. This can also be found by trial and error.

Finding Rd lets you locate the corresponding pixel in the distorted image, using polar coordinates.

The angle of each pixel from the center point is the same:

    theta = arctan( (yu-yc)/(xu-xc) )

where xc, yc are the center points.

Then

    xd = Rd * cos(theta) + xc
    yd = Rd * sin(theta) + yc

Make sure you know which quadrant you are in.

Here is the C # code I used

    public class Analyzer
    {
        private ArrayList mFisheyeCorrect;
        private int mFELimit = 1500;
        private double mScaleFESize = 0.9;

        public Analyzer()
        {
            //A lookup table so we don't have to calculate Rdistorted over and over
            //The values will be multiplied by focal length in pixels to
            //get the Rdistorted
            mFisheyeCorrect = new ArrayList(mFELimit);
            //i corresponds to Rundist/focalLengthInPixels * 1000 (to get integers)
            for (int i = 0; i < mFELimit; i++)
            {
                double result = Math.Sqrt(1 - 1 / Math.Sqrt(1.0 + (double)i * i / 1000000.0)) * 1.4142136;
                mFisheyeCorrect.Add(result);
            }
        }

        public Bitmap RemoveFisheye(ref Bitmap aImage, double aFocalLinPixels)
        {
            Bitmap correctedImage = new Bitmap(aImage.Width, aImage.Height);
            //The center points of the image
            double xc = aImage.Width / 2.0;
            double yc = aImage.Height / 2.0;
            Boolean xpos, ypos;
            //Move through the pixels in the corrected image;
            //set to corresponding pixels in distorted image
            for (int i = 0; i < correctedImage.Width; i++)
            {
                for (int j = 0; j < correctedImage.Height; j++)
                {
                    //which quadrant are we in?
                    xpos = i > xc;
                    ypos = j > yc;
                    //Find the distance from the center
                    double xdif = i - xc;
                    double ydif = j - yc;
                    //The distance squared
                    double Rusquare = xdif * xdif + ydif * ydif;
                    //the angle from the center
                    double theta = Math.Atan2(ydif, xdif);
                    //find index for lookup table
                    int index = (int)(Math.Sqrt(Rusquare) / aFocalLinPixels * 1000);
                    if (index >= mFELimit) index = mFELimit - 1;
                    //calculated Rdistorted
                    double Rd = aFocalLinPixels * (double)mFisheyeCorrect[index] / mScaleFESize;
                    //calculate x and y distances
                    double xdelta = Math.Abs(Rd * Math.Cos(theta));
                    double ydelta = Math.Abs(Rd * Math.Sin(theta));
                    //convert to pixel coordinates
                    int xd = (int)(xc + (xpos ? xdelta : -xdelta));
                    int yd = (int)(yc + (ypos ? ydelta : -ydelta));
                    xd = Math.Max(0, Math.Min(xd, aImage.Width - 1));
                    yd = Math.Max(0, Math.Min(yd, aImage.Height - 1));
                    //set the corrected pixel value from the distorted image
                    correctedImage.SetPixel(i, j, aImage.GetPixel(xd, yd));
                }
            }
            return correctedImage;
        }
    }
+3
Jun 06 '13 at 23:41

I blindly implemented the formulas from here, so I cannot guarantee that it will do what you need.

Use fisheye_auto_zoom to get the value for the zoom parameter.

    from math import sqrt, tan, asin, pi

    def dist(x, y):
        return sqrt(x*x + y*y)

    def fisheye_to_rectilinear(src_size, dest_size, sx, sy, crop_factor, zoom):
        """ returns a tuple of dest coordinates (dx, dy)
            (note: values can be out of range)
            crop_factor is the ratio of sphere diameter to
            the diagonal of the source image """
        # convert sx,sy to relative coordinates
        rx, ry = sx - (src_size[0]/2), sy - (src_size[1]/2)
        r = dist(rx, ry)
        if r == 0:  # the center pixel maps to the center
            return (int(dest_size[0]/2), int(dest_size[1]/2))

        # focal distance = radius of the sphere
        f = dist(src_size[0], src_size[1]) * crop_factor / pi

        # calc theta 1) linear mapping (older Nikons)
        theta = r / f

        # calc theta 2) nonlinear mapping
        # theta = asin(r / (2 * f)) * 2

        # calc new radius
        nr = tan(theta) * zoom

        # back to absolute coordinates
        dx, dy = (dest_size[0]/2) + rx/r*nr, (dest_size[1]/2) + ry/r*nr
        # done
        return (int(round(dx)), int(round(dy)))

    def fisheye_auto_zoom(src_size, dest_size, crop_factor):
        """ calculate zoom such that the left edge of the source image
            matches the left edge of the dest image """
        # try to see what happens with zoom = 1
        dx, dy = fisheye_to_rectilinear(src_size, dest_size, 0, src_size[1]/2, crop_factor, 1)

        # calculate zoom so the result is what we wanted
        obtained_r = dest_size[0]/2 - dx
        required_r = dest_size[0]/2
        zoom = required_r / obtained_r
        return zoom
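A hypothetical usage example (the sizes and crop factor are my own assumptions):

    src_size = (640.0, 480.0)
    dest_size = (640.0, 480.0)
    crop_factor = 1.0   # sphere diameter equal to the source image diagonal
    zoom = fisheye_auto_zoom(src_size, dest_size, crop_factor)
    dx, dy = fisheye_to_rectilinear(src_size, dest_size, 100, 100, crop_factor, zoom)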
+2
Mar 24


