Calculating camera position and orientation in OpenCV

So, imagine that the camera is looking at the screen of your computer. What I'm trying to do is determine how the camera is rotated and where it sits relative to the center of the screen. In short, I want the rotation and translation matrices.

I'm using OpenCV for this and following their camera-calibration example, which accomplishes the task with a checkerboard pattern and a frame from a webcam. I would like to do the same with arbitrary images instead, specifically with a screen capture and a frame from a webcam.

I tried using feature detection algorithms to get a list of keypoints from both images and then matching these keypoints with a BFMatcher, but ran into problems. In particular, SIFT does not match the keypoints correctly, and SURF does not find keypoints correctly in a scaled image.

Is there an easier solution to this problem? I feel like this would be a common thing for people to do, but I haven't found much discussion of it online.

Thanks!!

1 answer

Detecting natural planar markers is a common task in computer vision, but in your case the appearance of the "marker" depends on what the screen is showing: it could be your desktop, a browser, a movie, ...

Thus, you cannot use conventional marker-detection methods; you should try to recognize the shape instead. The idea is to run a particle filter with a rectangular template that has the same proportions as your screen (tried across different scales), after first applying edge detection.

The particle filter will locate the template in the frame, which gives you the position. For the orientation you will need to compute a homography, and for that you need 4 points on the "marker", so you can use the Direct Linear Transform (cv::findHomography() does this for you). Your four points can be the four corners of the screen. This is just an idea, good luck!


Source: https://habr.com/ru/post/918372/

