Camera calibration with a limited set of images in OpenCV and C++

Do you have any ideas or recommendations for camera calibration when the number of sample images is limited and they all cover only a small region of the image?

See below for more information:

I am working on a project to help people with disabilities control a computer with their eyes. One part of it is giving me trouble, largely because of my inexperience with OpenCV.

The camera is head-mounted and the image distortion is not bad, but the eyeball is convex and moves by rotating. I plan to “flatten” the eye so that its motion can be treated as motion in a plane. The obvious choice seemed to be camera calibration to eliminate radial distortion.

During calibration, the user looks at the corners of a grid displayed on the screen. The pupil center (computed from image moments) is stored in a matrix for each position. I thus end up with a set of image points corresponding to the eye's positions while viewing each grid corner on the screen.

I could draw filled polygons connecting groups of four points to create a synthetic checkerboard pattern (as sketched below), or I could keep each position as a point and use a symmetric circle grid pattern for calibration.

The problem is that the camera is static and the eye stays in the same place, so I am limited in how many distinct images I can generate; the range of pupil positions is quite small.

I am thinking of dividing the grid into several smaller checkerboard patterns, but they would all lie in the same small region of the image, so I am not sure how useful that would be.

Thanks!

1 answer

What you are describing is not really camera calibration; it is calibration of your eye-tracking setup.

When you calibrate a camera in OpenCV, you are trying to remove radial and tangential lens distortion, so it seems intuitive to use that process to “flatten” a round object. However, the radial distortion produced by a spherical lens is not what you are dealing with here. Your concern is how points on a spherical surface (the eye) project into your image.

Admittedly, the two models look very similar, but the point is that camera calibration (which you can do once, offline) should be kept separate from the per-subject calibration of your setup. The fact that your range of positions is limited is inherent to the problem and cannot be changed by image processing: the eye you are imaging only fills so much of your camera's field of view.

Personally, I would simply record the pupil position while the subject fixates 9 evenly distributed points on the screen, and fit a second-order polynomial that maps pupil image coordinates to screen coordinates. This amounts to keeping the first terms of the Taylor expansion of the spherical projection, which is probably good enough as long as the eye movements are not too large. You can then validate the predicted gaze positions against a second calibration run with 16 points instead of 9.

I expect you can find a book on eye tracking for more detail on this.


Source: https://habr.com/ru/post/1482868/

