OpenCV: how do I provide the matrices for "undistort" if I know the lens correction coefficient from Gimp?

I get images from an IP camera with a strong fisheye effect. I found that in Gimp I can get the lines mostly straight by applying the Lens Distortion filter with a "Main" value of -30 (all other parameters left at zero).

Now I need to do this programmatically with OpenCV. I figured that the undistort function in imgproc is the right one to call. But how do I build the camera matrix and the distortion coefficients it needs? I see there is a calibrateCamera function, but it looks like you need a PhD in computer vision to use it. Since I know a single parameter, shouldn't there be an easy way to translate it into the matrices that "undistort" expects?

Note: I only need the radial distortion coefficients, I'm not interested in tangential distortions.
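Roughly, what I'm after is something like the following sketch. To be clear about assumptions: the k1 = -0.3 value is just a guess loosely inspired by Gimp's -30 (Gimp's "Main" slider and OpenCV's k1 are different parameterizations, so there is no direct conversion), and the focal length is also guessed.

```python
import numpy as np

# Hand-built inputs for cv2.undistort, to be tuned by eye. Assumptions: the
# focal length guess (image width in pixels) and the k1 value (loosely
# inspired by Gimp's -30, NOT an exact conversion) are both starting points.
def guess_camera_params(width, height, k1=-0.3):
    f = float(width)  # rough focal length guess, in pixels
    K = np.array([[f,   0.0, width / 2.0],
                  [0.0, f,   height / 2.0],
                  [0.0, 0.0, 1.0]])
    # OpenCV distortion vector (k1, k2, p1, p2, k3); only radial k1 is set.
    dist = np.array([k1, 0.0, 0.0, 0.0, 0.0])
    return K, dist

K, dist = guess_camera_params(1920, 1080)
# then: fixed = cv2.undistort(img, K, dist)
```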

2 answers

There is a sample provided by OpenCV for calibration. All you need is a set of chessboard images (about 20 should be fine) taken with your camera. It will give you all the necessary parameters (distortion coefficients, intrinsic parameters, etc.). Then you can use OpenCV's "undistort" function to fix your images. You need to edit default.xml (or create your own .xml) to point to the xml file listing your image paths, and to set the number of inner corners and their real-world size.

Tadaa, you have the necessary parameters :-)
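In Python the same procedure is a short script. This is a sketch under assumptions: a board with 8x6 inner corners and images matched by `*.jpg`, both of which you would adapt to your setup.

```python
import glob
import numpy as np

CORNERS_W, CORNERS_H = 8, 6  # inner corners of the board (assumption)

def board_object_points(w, h, square_size=1.0):
    # One 3D point per inner corner, z = 0, in real-world units.
    objp = np.zeros((w * h, 3), np.float32)
    objp[:, :2] = np.mgrid[0:w, 0:h].T.reshape(-1, 2) * square_size
    return objp

def calibrate(pattern="*.jpg"):
    paths = glob.glob(pattern)
    if not paths:
        return None
    import cv2  # local import: the pure-numpy helper above works without OpenCV
    objpoints, imgpoints, size = [], [], None
    for path in paths:
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        ok, corners = cv2.findChessboardCorners(gray, (CORNERS_W, CORNERS_H))
        if ok:
            objpoints.append(board_object_points(CORNERS_W, CORNERS_H))
            imgpoints.append(corners)
    if not objpoints:
        return None
    # Returns RMS error, camera matrix, distortion coefficients, and poses.
    return cv2.calibrateCamera(objpoints, imgpoints, size, None, None)
```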


For those wondering where this calibration tool comes from: it seems you need to build it from source. This is what I did on Linux:

    git clone https://github.com/opencv/opencv.git
    cd opencv
    git checkout -b 3.1.0 3.1.0   # make sure we build that version
    mkdir build
    cd build
    cmake -D CMAKE_BUILD_TYPE=Release -D BUILD_EXAMPLES=ON ..
    make -j4

Then for calibration:

 ./bin/cpp-example-calibration -w=8 -h=6 -o=camera.yml -op -oe -su image_list.xml 

-su lets you check how the images look after undistortion. The -w and -h options take the number of "inner corners", which is not the number of squares in the checkerboard pattern but the number of squares per side minus one.
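The counting rule for a standard alternating checkerboard, as a one-liner:

```python
# For a standard checkerboard, inner corners per side = number of squares - 1.
# A board of 9x7 squares therefore needs -w=8 -h=6, as in the command above.
def inner_corners(squares_w, squares_h):
    return squares_w - 1, squares_h - 1

print(inner_corners(9, 7))  # (8, 6)
```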

Here's how the undistortion is applied at the end, using Scala and JavaCV:

    import org.bytedeco.javacpp.indexer.FloatRawIndexer
    import org.bytedeco.javacpp.opencv_core.Mat
    import org.bytedeco.javacpp.{opencv_core, opencv_imgcodecs, opencv_imgproc}
    import java.io.File

    // from the camera_matrix > data part of the yml:
    val cameraFocal = 1.4656877976320607e+03
    val cameraCX    = 1920.0 / 2
    val cameraCY    = 1080.0 / 2
    val cameraMatrixData = Array[Double](
      cameraFocal, 0.0        , cameraCX,
      0.0        , cameraFocal, cameraCY,
      0.0        , 0.0        , 1.0
    )

    // from the distortion_coefficients of the yml:
    val distMatrixData = Array[Double](
      -4.016824381742e-01, 4.368842493074e-02, 0.0, 0.0, 1.096412142704e-01
    )

    def run(in: File, out: File): Unit = {
      val matOut = new Mat
      val camMat = new Mat(3, 3, opencv_core.CV_32FC1)
      val camIdx = camMat.createIndexer[FloatRawIndexer]
      for (row <- 0 until 3) {
        for (col <- 0 until 3) {
          camIdx.put(row, col, cameraMatrixData(row * 3 + col).toFloat)
        }
      }
      val distVec = new Mat(1, 5, opencv_core.CV_32FC1)
      val distIdx = distVec.createIndexer[FloatRawIndexer]
      for (col <- 0 until 5) {
        distIdx.put(0, col, distMatrixData(col).toFloat)
      }
      val matIn = opencv_imgcodecs.imread(in.getPath)
      opencv_imgproc.undistort(matIn, matOut, camMat, distVec)
      opencv_imgcodecs.imwrite(out.getPath, matOut)
    }
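For comparison, the same matrices and undistort call via the Python bindings. This is a sketch: the focal length and distortion values are copied from the yml excerpt above and only apply to that particular camera.

```python
import numpy as np

# camera_matrix > data from the yml:
camera_focal = 1.4656877976320607e+03
K = np.array([[camera_focal, 0.0,          1920.0 / 2],
              [0.0,          camera_focal, 1080.0 / 2],
              [0.0,          0.0,          1.0]], dtype=np.float32)

# distortion_coefficients from the yml:
dist = np.array([-4.016824381742e-01, 4.368842493074e-02,
                 0.0, 0.0, 1.096412142704e-01], dtype=np.float32)

def undistort_file(in_path, out_path):
    import cv2  # local import so the matrices above build without OpenCV
    cv2.imwrite(out_path, cv2.undistort(cv2.imread(in_path), K, dist))
```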

Source: https://habr.com/ru/post/1260238/

