Stitching images from multiple cameras

I have a project for stitching images from multiple cameras, but I think I have hit a bottleneck. I have a few questions about this issue.

I intend to install the cameras on a vehicle in the future, which means the relative positions and orientations of the cameras are FIXED.

In addition, since I use several cameras and stitch their images together using a HOMOGRAPHY, I placed the cameras as close together as possible so that the errors are reduced (the errors arise because the cameras' optical centers are not at the same position, which is unavoidable since each camera occupies physical space).

Here is a short experimental video. http://www.youtube.com/watch?v=JEQJZQq3RTY

The resulting seam line is quite bad, as shown in the video. Even though the scene captured by the cameras is static, the estimated homography keeps changing from frame to frame.

The following link is the code I have written so far; code1.png and code2.png are images showing part of my code in Stitching_refind.cpp.

https://docs.google.com/folder/d/0B2r9FmkcbNwAbHdtVEVkSW1SQW8/edit?pli=1

A few days ago I changed some of the code so that steps 2, 3 and 4 (please check the two PNG images mentioned above) are performed ONLY ONCE.


So my questions are:

1. Is it possible to identify the overlapping regions before computing features? I do not want to compute features over the entire images, as this increases computation time and introduces inconsistencies. Is it possible for the computer to consider JUST the overlap between two adjacent images?
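Since the cameras are rigidly mounted, the overlap band between two adjacent views is roughly constant and can be measured once. A minimal sketch of the idea, assuming two side-by-side cameras with a known horizontal overlap in pixels (the `overlap_px` value is an assumption you would measure for your rig); in OpenCV the resulting rectangles can be turned into masks passed to the feature detector so keypoints are only searched inside them:

```python
def overlap_rois(width, height, overlap_px):
    """Return the (x, y, w, h) regions to search for features:
    the right-hand strip of the left image and the
    left-hand strip of the right image."""
    left_roi = (width - overlap_px, 0, overlap_px, height)
    right_roi = (0, 0, overlap_px, height)
    return left_roi, right_roi

# Example: 640x480 frames overlapping by ~25% of the width
l_roi, r_roi = overlap_rois(640, 480, 160)
```

Restricting detection this way both cuts computation time and removes spurious matches from the non-overlapping parts of the scene.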

2. What can I do to make the resulting homography more accurate? Some people mention CAMERA CALIBRATION, others suggest trying a different matching method. I'm still new to computer vision... I tried to read some material about camera calibration, but I still don't understand what it is for.
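For intuition about what the homography actually does (and why an unstable one makes the seam jump around), here is a minimal pure-Python sketch of applying a 3x3 homography to a pixel, including the perspective divide; the matrix values are illustrative placeholders, not from the original code:

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H
    (row-major nested lists) and return the projected
    pixel after dividing by the homogeneous coordinate w."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# A pure-translation homography: shifts every pixel by (100, 0)
H = [[1, 0, 100],
     [0, 1, 0],
     [0, 0, 1]]
```

A homography maps points on one plane to another, so it is exact only for (approximately) planar scenes or cameras sharing an optical center; small errors in the matched features change H and therefore move every warped pixel, which is why robust estimation (e.g. RANSAC in `cv::findHomography`) is commonly used to reject bad matches.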

About 2 months ago I asked a similar question: Having some difficulty stitching images using OpenCV

where one of the answerers, Chris, said:

It sounds like you are going about this sensibly, but if you have access to both cameras and they will remain fixed with respect to each other, then calibrating offline and simply applying the transformation online will make your application more efficient.

What does offline calibration mean? And how does it help?

Thanks for any tips and help.

1 answer

As Chris wrote:

However, your points are not restricted to a specific plane as they are imaging a 3D scene. If you wanted to calibrate offline, you could image a chessboard with both cameras, and the detected corners could be used in this function. 

Offline calibration means that you image some easily detectable calibration pattern and compute the transformation matrix from it once, in advance. After this calibration you simply apply the pre-computed matrix to the incoming images; that should work for you.


Source: https://habr.com/ru/post/920547/
