3D matrix perspective transformation

I use shape from shading (SfS) to create a digital terrain model (DTM) from an image taken with a camera mounted on a mobile platform. The algorithm, written in Python, seems to work quite well; however, the output is tilted and slightly spherical, so I suspect I need to remove the perspective and barrel distortion from the DTM. DTM visualization below:

DTM of SfS result. The data is available here if anyone is interested in trying it.

The camera is mounted with a tilt of 41 degrees and has the following camera matrix and distortion coefficients:

 import numpy

 cam_matrix = numpy.matrix([[246.00559, 0.00000, 169.87374],
                            [0.00000, 247.37317, 132.21396],
                            [0.00000, 0.00000, 1.00000]])
 distortion_matrix = numpy.matrix([0.04674, -0.11775, -0.00464, -0.00346, 0.00000])

How can I apply a perspective transformation and remove the barrel distortion using these matrices to get a flattened DTM?

I tried using OpenCV, but it did not work: OpenCV expects an image, and its transforms move pixels around rather than manipulating their values. I also explored NumPy and SciPy, but have not yet arrived at a solution. I am somewhat familiar with the theory behind these transformations, but have mainly worked with their two-dimensional versions.

Any ideas?

Thanks.

1 answer

You can use a 4x4 transformation matrix, which is invertible and allows bidirectional transformation between the two coordinate systems you want.

Suppose you know the three rotations a , b and g about x , y and z respectively (using the right-hand rule), and the translations x0 , y0 , z0 between the origins of the two coordinate systems.

The transformation matrix is then defined as:

 T = np.array([[ cos(b)*cos(g), sin(a)*sin(b)*cos(g) + cos(a)*sin(g), sin(a)*sin(g) - cos(a)*sin(b)*cos(g), x0],
               [-cos(b)*sin(g), cos(a)*cos(g) - sin(a)*sin(b)*sin(g), sin(a)*cos(g) + cos(a)*sin(b)*sin(g), y0],
               [ sin(b),       -sin(a)*cos(b),                        cos(a)*cos(b),                        z0],
               [ 0,             0,                                    0,                                    1]])

To use it effectively, you must put your points in a two-dimensional array, for example:

 orig = np.array([[x0, x1, ..., xn],
                  [y0, y1, ..., yn],
                  [z0, z1, ..., zn],
                  [ 1,  1, ...,  1]])

Then:

 new = T.dot(orig) 

will give you the transformed points.
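Putting the pieces together, here is a runnable sketch that builds T for the asker's 41-degree tilt and applies it to a few sample points. Which axis carries the tilt (and its sign) depends on the camera's mounting convention, so the choice of rotating about x here is an assumption:

```python
import numpy as np

def transform_matrix(a, b, g, x0, y0, z0):
    """4x4 rigid-body transform: rotations a, b, g (radians) about
    x, y, z (right-hand rule), plus translation (x0, y0, z0)."""
    cos, sin = np.cos, np.sin
    return np.array([
        [ cos(b)*cos(g), sin(a)*sin(b)*cos(g) + cos(a)*sin(g), sin(a)*sin(g) - cos(a)*sin(b)*cos(g), x0],
        [-cos(b)*sin(g), cos(a)*cos(g) - sin(a)*sin(b)*sin(g), sin(a)*cos(g) + cos(a)*sin(b)*sin(g), y0],
        [ sin(b),       -sin(a)*cos(b),                        cos(a)*cos(b),                        z0],
        [ 0.0,           0.0,                                  0.0,                                  1.0]])

# Undo the 41-degree camera tilt; assumed here to be a rotation about x.
T = transform_matrix(np.radians(-41.0), 0.0, 0.0, 0.0, 0.0, 0.0)

# Three sample DTM points as homogeneous column vectors.
orig = np.array([[0.0, 1.0, 2.0],   # x
                 [0.0, 1.0, 0.0],   # y
                 [1.0, 1.0, 1.0],   # z
                 [1.0, 1.0, 1.0]])  # homogeneous 1s

new = T.dot(orig)

# With zero translation this is a pure rotation, so each point keeps
# its distance to the origin.
print(np.allclose(np.linalg.norm(new[:3], axis=0),
                  np.linalg.norm(orig[:3], axis=0)))  # prints True
```

For a full DTM you would build `orig` from the grid coordinates (e.g. via `np.meshgrid` and `ravel`) and the elevation values, apply `T`, and re-grid the result.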


Source: https://habr.com/ru/post/986164/
