What is the fastest and most accurate way to match a distorted, scanned image to a base image?

I'm taking a picture captured with a scanner (or possibly a mobile phone camera), with varying quality and distortion, and trying to align it as closely as possible with the base image (the one created in Photoshop, before it was printed and scanned).

The image has four thick corner markers, one in each corner. I currently locate them with a primitive method to get four points, then apply a perspective transform to the scanned image. However, my detection is essentially brute force and fails frequently.
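For context, once four reliable point correspondences are found, the perspective transform mentioned above is a homography that can be estimated directly from them. A minimal sketch with NumPy using the standard DLT formulation (the marker coordinates below are made up for illustration; in OpenCV, `cv2.getPerspectiveTransform` does the same job from four point pairs):

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from 4 point pairs (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply homography H to a 2D point (homogeneous coordinates)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical marker centers in the scan -> known corners of the base image.
scanned = [(12.0, 8.0), (310.0, 15.0), (305.0, 402.0), (9.0, 395.0)]
base = [(0.0, 0.0), (300.0, 0.0), (300.0, 400.0), (0.0, 400.0)]
H = homography_from_points(scanned, base)
print(warp_point(H, scanned[0]))  # ~ (0.0, 0.0)
```

With `H` in hand, the whole scan can be resampled into the base image's coordinate frame (e.g. with `cv2.warpPerspective`), so the remaining problem is purely the reliable detection of the four markers.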

I tried using cvGoodFeaturesToTrack(), but I can't see a reliable way to guarantee that the four calibration points are found accurately under all circumstances. I considered template matching, but it doesn't look like it will work reliably under varying distortion. I see many methods for specific tasks (finding contours, keypoints, lines, and so on), but nothing that indicates what to actually do with them.

Is there a better way that I just don't see?

Thanks!

2 answers

The classic approach is binarization followed by blob analysis: threshold the image to find the pixels darker than some value, then group touching pixels together (connected-component analysis). Keep the blobs whose shape (good roundness) and area fall in the expected range, and use their centroids. This should be accurate enough for your purposes.
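A minimal, dependency-free sketch of that pipeline (threshold, 4-connected flood fill, area filter, centroid). The threshold and area bounds are made-up values; a roundness test such as circularity 4*pi*area/perimeter^2 could be added to the filter, and `scipy.ndimage.label` or OpenCV's `connectedComponentsWithStats` would do the labeling much faster in practice:

```python
from collections import deque

def find_blob_centroids(img, thresh, min_area, max_area):
    """Binarize (pixels darker than thresh), label 4-connected components,
    keep blobs whose area is in [min_area, max_area], return centroids (x, y)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if img[y][x] < thresh and not seen[y][x]:
                # BFS flood fill to collect this blob's pixels.
                q, blob = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                                and img[ny][nx] < thresh:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if min_area <= len(blob) <= max_area:
                    cy = sum(p[0] for p in blob) / len(blob)
                    cx = sum(p[1] for p in blob) / len(blob)
                    centroids.append((cx, cy))
    return centroids

# Tiny synthetic image: 0 = dark marker pixel, 255 = background.
img = [[255] * 8 for _ in range(8)]
for y in (1, 2):
    for x in (1, 2):
        img[y][x] = 0      # a 2x2 blob, area 4 -> kept
img[5][5] = 0              # a lone dark pixel, area 1 -> rejected by min_area
print(find_blob_centroids(img, 128, 3, 10))  # [(1.5, 1.5)]
```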

You might also want to reduce false detections of the corner markers caused by nearby image features. One option for more robust recognition is to use rings instead of discs and look only for blobs that have a hole.
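One way to test for a hole, sticking with plain flood fill: background that cannot be reached from the border of the blob's (padded) bounding box must be enclosed by the blob. A sketch, assuming a small binary mask cropped around one candidate blob:

```python
from collections import deque

def has_hole(mask):
    """Return True if the foreground (1s) in a binary mask encloses background.
    Flood-fills background (0s) from the border; any unreached 0 is a hole."""
    h, w = len(mask), len(mask[0])
    reached = [[False] * w for _ in range(h)]
    q = deque()
    for y in range(h):                      # seed the fill with border background
        for x in range(w):
            if (y in (0, h - 1) or x in (0, w - 1)) and mask[y][x] == 0:
                reached[y][x] = True
                q.append((y, x))
    while q:
        cy, cx = q.popleft()
        for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 0 \
                    and not reached[ny][nx]:
                reached[ny][nx] = True
                q.append((ny, nx))
    return any(mask[y][x] == 0 and not reached[y][x]
               for y in range(h) for x in range(w))

ring = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 0, 1, 0],     # enclosed background pixel in the middle
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
disc = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(has_hole(ring), has_hole(disc))  # True False
```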


If your distortion can be described by a homography, you can use the ESM algorithm (Efficient Second-order Minimization), available in the CVD library: http://www.edwardrosten.com/cvd/cvd/html/group__gEsm.html
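ESM itself iteratively minimizes the photometric error over the eight homography parameters. As a rough illustration of the same direct-alignment idea, here is a toy Gauss-Newton estimator restricted to pure translation (this is a simplification in the Lucas-Kanade family, not the ESM algorithm, and the synthetic test image is made up):

```python
import numpy as np

def estimate_translation(template, image, iters=30, margin=5):
    """Estimate (dx, dy) such that image sampled at x+dx, y+dy matches template,
    by Gauss-Newton on the photometric error (translation-only direct alignment)."""
    h, w = template.shape
    ys, xs = np.mgrid[margin:h - margin, margin:w - margin]
    p = np.zeros(2)                                   # current (dx, dy) estimate
    for _ in range(iters):
        X, Y = xs + p[0], ys + p[1]
        x0 = np.clip(np.floor(X).astype(int), 1, w - 2)
        y0 = np.clip(np.floor(Y).astype(int), 1, h - 2)
        ax, ay = X - x0, Y - y0
        # Bilinear sample of the image at the shifted positions.
        Iw = ((1 - ax) * (1 - ay) * image[y0, x0]
              + ax * (1 - ay) * image[y0, x0 + 1]
              + (1 - ax) * ay * image[y0 + 1, x0]
              + ax * ay * image[y0 + 1, x0 + 1])
        # Image gradients (central differences at the nearest grid points).
        Ix = 0.5 * (image[y0, x0 + 1] - image[y0, x0 - 1])
        Iy = 0.5 * (image[y0 + 1, x0] - image[y0 - 1, x0])
        # Linearize I(x + p + delta) ~ Iw + grad(I) * delta and solve for delta.
        r = (template[ys, xs] - Iw).ravel()
        J = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        p += delta
        if np.linalg.norm(delta) < 1e-8:
            break
    return p

# Synthetic smooth image and a copy shifted by (dx, dy) = (2, 1).
yy, xx = np.mgrid[0:40, 0:40].astype(float)
f = lambda x, y: np.sin(x / 5.0) + np.cos(y / 7.0)
image, template = f(xx, yy), f(xx + 2.0, yy + 1.0)
print(estimate_translation(template, image))  # ~ [2. 1.]
```

ESM's advantage over this plain Gauss-Newton scheme is that it averages the template and current-image gradients, which gives second-order convergence without computing Hessians.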

If your distortion also includes lens distortion, you can use DIC (digital image correlation).


Source: https://habr.com/ru/post/1392356/

