Graphics rendering on a sphere

I am trying to write optimized code that renders a 3D scene with OpenGL onto a sphere and then displays the unwrapped sphere on the screen, i.e. creates a flat map of a purely reflective sphere. In mathematical terms, I would like to create a projection map where the x axis is the polar angle and the y axis is the azimuth.

My current approach is to place the camera at the center of the sphere and take flat snapshots all around, approximating spherical patches with the flat image planes of a view frustum. I can then use each snapshot as a texture applied to a distorted planar patch.

This feels like a rather tedious approach, though. I wonder whether there is a way to do it with shaders or some other GPU technique.

Thanks,

S.

3 answers

I can give you two solutions.

The first is to do a standard render-to-texture, but with a cubemap attached as the destination buffer. If your hardware is recent enough, this can be done in a single pass. This handles all the necessary math in hardware for you, but the distribution of cubemap texel density is not ideal (there is quite a lot of distortion at the corners). In most cases this should be good enough.

After that, you render a full-screen quad, and in the fragment shader you map your UV coordinates to xyz direction vectors using the straightforward spherical mapping. The hardware works out for you which face of the cubemap to sample for each UV.
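As a sketch of that second pass (the uniform and variable names are my own, not from the answer), the fragment shader for the full-screen quad could look like:

```glsl
#version 330 core
// vUV in [0,1]^2, interpolated across the full-screen quad.
in  vec2 vUV;
out vec4 fragColor;

uniform samplerCube uEnvMap;  // cubemap rendered in the first pass

void main()
{
    float azimuth = vUV.x * 2.0 * 3.14159265;  // longitude
    float polar   = vUV.y * 3.14159265;        // colatitude
    vec3 dir = vec3(sin(polar) * cos(azimuth),
                    cos(polar),
                    sin(polar) * sin(azimuth));
    // The hardware selects the cube face and filters the sample for us.
    fragColor = texture(uEnvMap, dir);
}
```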

The second solution, for hardware without that support: render the 6 cubemap faces yourself in separate passes. Then do the face selection from the UV coordinates manually in your shader.



Take a look at the NeHe OpenGL tutorials.


Source: https://habr.com/ru/post/1757491/

