iPhone 6 Camera Calibration for OpenCV

I am developing an iOS augmented reality application using OpenCV, and I'm having trouble building a camera projection matrix so that the OpenGL overlay sits directly on top of the marker. I suspect this is because the iPhone 6 camera is not correctly calibrated in the app. I know OpenCV has chessboard-based calibration code for webcams and similar cameras, but I can't find a way to calibrate the built-in iPhone camera.

Is there any way to do this? Or are there known estimates of the intrinsics for the iPhone 6: the focal lengths along x and y, the principal point along x and y, and the distortion coefficients?

Any help would be appreciated.

EDIT:

The resulting values are as follows (iPhone 6, camera resolution 1280x720):

fx=1229
cx=360
fy=1153
cy=640

This code gives an accurate estimate of the focal lengths and principal point for devices currently running iOS 9.1:

AVCaptureDeviceFormat *format = deviceInput.device.activeFormat;
CMFormatDescriptionRef fDesc = format.formatDescription;
CGSize dim = CMVideoFormatDescriptionGetPresentationDimensions(fDesc, true, true);

// Assume the principal point is at the image centre.
float cx = (float)dim.width / 2.0f;
float cy = (float)dim.height / 2.0f;

float HFOV = format.videoFieldOfView;
// Approximate the vertical FOV by scaling the horizontal FOV
// by the aspect ratio.
float VFOV = (HFOV / cx) * cy;

// f = extent / (2 * tan(fov / 2)), with the angle converted to radians.
float fx = fabsf((float)dim.width  / (2.0f * tanf(HFOV / 180.0f * (float)M_PI / 2.0f)));
float fy = fabsf((float)dim.height / (2.0f * tanf(VFOV / 180.0f * (float)M_PI / 2.0f)));

Note:

I had trouble initializing these values with this code. I recommend that, once the values are initialized and set correctly, you save them to a data file and read them back from that file.

1 answer

In my AR application (which does not use OpenCV), I use the field of view (FOV) of the iPhone camera to build the camera projection matrix. It works well for drawing the Sun's track superimposed over the camera view. I don't know how much accuracy you need; knowing only the FOV may not be enough for you.

The FOV can be read from the iOS API:

AVCaptureDevice *camera = ...
AVCaptureDeviceFormat *format = camera.activeFormat;
float fieldOfView = format.videoFieldOfView;

Then I use the FOV to build the projection matrix:

typedef double mat4f_t[16]; // 4x4 matrix in column major order    

mat4f_t projection;
createProjectionMatrix(projection,
                       GRAD_TO_RAD(fieldOfView),
                       viewSize.width/viewSize.height,
                       5.0f,
                       1000.0f);

// Equivalent to gluPerspective(): fovy is in radians,
// and the output matrix is column-major.
void createProjectionMatrix(
        mat4f_t mout,
        float fovy,
        float aspect,
        float zNear,
        float zFar)
{
    float f = 1.0f / tanf(fovy/2.0f);

    mout[0] = f / aspect;
    mout[1] = 0.0f;
    mout[2] = 0.0f;
    mout[3] = 0.0f;

    mout[4] = 0.0f;
    mout[5] = f;
    mout[6] = 0.0f;
    mout[7] = 0.0f;

    mout[8] = 0.0f;
    mout[9] = 0.0f;
    mout[10] = (zFar+zNear) / (zNear-zFar);
    mout[11] = -1.0f;

    mout[12] = 0.0f;
    mout[13] = 0.0f;
    mout[14] = 2 * zFar * zNear /  (zNear-zFar);
    mout[15] = 0.0f;
}

Source: https://habr.com/ru/post/1625623/
