I am looking for an effective way to calculate the position of an object on a surface based on an image taken from a certain point of view.
Let me explain a little further. There is an object on a flat, rectangular surface. I have a photo of this setup, taken with a camera positioned at one of the corners of the surface at a fairly low angle. So in the picture I see a somewhat distorted, diamond-shaped view of the surface, with the object somewhere on it.
After some image processing, I have the coordinates of the object in the picture, but now I need to calculate the actual position of the object on the surface.
So I know the pixel coordinates (x, y) of the object's center in the image, and I know the pixel coordinates of the four anchor points that mark the corners of the surface.
How can I now calculate the "real" position of the object most efficiently, i.e. its x and y coordinates on the surface?
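To make this concrete, here is a rough sketch of the kind of mapping I think I'm after. All the corner and object coordinates below are made-up placeholders, and I'm only assuming that something like OpenCV's perspective transform functions would apply here:

```python
import numpy as np
import cv2

# Placeholder pixel coordinates of the four surface corners in the photo
# (ordered to match the real-world corners below).
corners_px = np.float32([[412, 103], [898, 121], [1270, 640], [35, 655]])

# Real-world coordinates of those same corners on the surface, e.g. in cm.
corners_cm = np.float32([[0, 0], [200, 0], [200, 100], [0, 100]])

# 3x3 perspective transform (homography) from image pixels to surface coords.
H = cv2.getPerspectiveTransform(corners_px, corners_cm)

# Placeholder pixel position of the object's center from my image processing.
obj_px = np.float32([[[640, 420]]])

# Map the object's pixel position onto the surface.
obj_cm = cv2.perspectiveTransform(obj_px, H)
print(obj_cm)  # e.g. [[[x_cm, y_cm]]]
```

Is something along these lines the right approach, or is there a simpler or more efficient way given that I only ever need to map a single point per image?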
Any input is welcome; I've worked on this so much that I can't think straight anymore.
Regards, Tom