How to recover the view space position given the view space depth value and NDC xy

I am writing a deferred shader and I am trying to pack my gbuffer more tightly. However, I can't seem to compute the view space position correctly given the view space depth:

    // depth -> (gl_ModelViewMatrix * vec4(pos.xyz, 1)).z; where pos is the model space position
    // fov   -> field of view in radians (0.62831855, 0.47123888)
    // p     -> ndc position, x, y [-1, 1]
    vec3 getPosition(float depth, vec2 fov, vec2 p)
    {
        vec3 pos;
        pos.x = -depth * tan( HALF_PI - fov.x/2.0 ) * (p.x);
        pos.y = -depth * tan( HALF_PI - fov.y/2.0 ) * (p.y);
        pos.z = depth;
        return pos;
    }

The calculated position is incorrect. I know this because I am still storing the correct position in the gbuffer as well and testing against it.

3 answers

3 solutions to recover the view space position in perspective projection

The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from view (eye) space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing with the w component of the clip coordinates. The NDC are in the range (-1, -1, -1) to (1, 1, 1).

In perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport.
The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
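
As a small illustration of this chain (the variable names are placeholders, not taken from the original answer):

    vec4 clipPos = projectionMatrix * viewPos;   // view (eye) space -> clip space
    vec3 ndc     = clipPos.xyz / clipPos.w;      // perspective divide -> NDC in [-1, 1]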

Perspective projection matrix:

    r = right, l = left, b = bottom, t = top, n = near, f = far

    2*n/(r-l)      0              0               0
    0              2*n/(t-b)      0               0
    (r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)   -1
    0              0              -2*f*n/(f-n)    0

it follows that:

    aspect = w / h
    tanFov = tan( fov_y * 0.5 );

    prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect)
    prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov
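
As an illustration of these relations, a symmetric perspective projection matrix could be built like this (a minimal sketch; the function name is an assumption, and each argument group of the GLSL mat4 constructor is one column, matching the lines of the listing above):

    mat4 perspectiveProjection( float fov_y, float aspect, float n, float f )
    {
        float tanFov = tan( fov_y * 0.5 );
        return mat4(
            1.0 / (tanFov * aspect), 0.0,          0.0,                     0.0,
            0.0,                     1.0 / tanFov, 0.0,                     0.0,
            0.0,                     0.0,          -(f + n) / (f - n),     -1.0,
            0.0,                     0.0,          -2.0 * f * n / (f - n),  0.0 );
    }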

In a perspective projection, the Z component is calculated by a rational function:

    z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye

Depth ( gl_FragCoord.z and gl_FragDepth ) is calculated as follows:

    z_ndc = clip_space_pos.z / clip_space_pos.w;
    depth = (((farZ-nearZ) * z_ndc) + nearZ + farZ) / 2.0;


1. Field of view and aspect ratio

Since the projection matrix is defined by the field of view and the aspect ratio, the view space position can be recovered with the field of view and the aspect ratio, provided that it is a symmetric perspective projection and that the normalized device coordinates, the depth and the near and far planes are known.

Recover the Z distance in view space:

    z_ndc = 2.0 * depth - 1.0;
    z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));

Recover the view space position from the XY normalized device coordinates:

    ndc_x, ndc_y = xy normalized device coordinates in range from (-1, -1) to (1, 1):

    viewPos.x = z_eye * ndc_x * aspect * tanFov;
    viewPos.y = z_eye * ndc_y * tanFov;
    viewPos.z = -z_eye;
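
Put together, a minimal GLSL sketch of this variant might look as follows (the uniform and function names are assumptions, not taken from the question; depth is the [0, 1] value read back from the depth buffer):

    uniform float nearZ;    // near plane distance n
    uniform float farZ;     // far plane distance f
    uniform float aspect;   // viewport width / height
    uniform float tanFov;   // tan( fov_y * 0.5 )

    vec3 viewPosFromFov( vec2 ndc_xy, float depth )
    {
        float z_ndc = 2.0 * depth - 1.0;
        float z_eye = 2.0 * nearZ * farZ / (farZ + nearZ - z_ndc * (farZ - nearZ));
        return vec3( z_eye * ndc_xy.x * aspect * tanFov,
                     z_eye * ndc_xy.y * tanFov,
                     -z_eye );
    }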


2. Projection matrix

The projection parameters, defined by the field of view and the aspect ratio, are stored in the projection matrix. Therefore, the view space position can be recovered from the values of the projection matrix of a symmetric perspective projection.

Note the relation between the projection matrix, the field of view and the aspect ratio:

    prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect);
    prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov;
    prjMat[2][2] = -(f+n)/(f-n)
    prjMat[3][2] = -2*f*n/(f-n)

Recover the Z distance in view space:

    A = prjMat[2][2];
    B = prjMat[3][2];

    z_ndc = 2.0 * depth - 1.0;
    z_eye = B / (A + z_ndc);

Recover the view space position from the XY normalized device coordinates:

    viewPos.x = z_eye * ndc_x / prjMat[0][0];
    viewPos.y = z_eye * ndc_y / prjMat[1][1];
    viewPos.z = -z_eye;
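
A minimal GLSL sketch of this variant (uniform and function names are assumptions; remember that GLSL matrices are indexed column-first, so prjMat[3][2] is column 3, row 2):

    uniform mat4 prjMat;   // the (symmetric) perspective projection matrix

    vec3 viewPosFromPrjMat( vec2 ndc_xy, float depth )
    {
        float z_ndc = 2.0 * depth - 1.0;
        float z_eye = prjMat[3][2] / (prjMat[2][2] + z_ndc);   // B / (A + z_ndc)
        return vec3( z_eye * ndc_xy.x / prjMat[0][0],
                     z_eye * ndc_xy.y / prjMat[1][1],
                     -z_eye );
    }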


3. Inverse projection matrix

Of course, the view space position can also be recovered with the inverse projection matrix.

    mat4 inversePrjMat = inverse( prjMat );
    vec4 viewPosH = inversePrjMat * vec4( ndc_x, ndc_y, 2.0 * depth - 1.0, 1.0 );
    vec3 viewPos = viewPosH.xyz / viewPosH.w;
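
For the deferred case, a minimal sketch of how the NDC coordinates and the depth could be obtained in the second pass and fed into this reconstruction (the uniform and variable names are assumptions; inversePrjMat can be computed as above or precomputed on the CPU):

    uniform sampler2D depthSampler;   // depth buffer of the first pass, values in [0, 1]
    uniform mat4      inversePrjMat;  // inverse of the projection matrix

    vec3 viewPosFromDepthBuffer( vec2 texCoord )
    {
        float depth    = texture2D( depthSampler, texCoord ).x;
        vec3  ndc      = vec3( texCoord, depth ) * 2.0 - 1.0;   // [0, 1] -> [-1, 1]
        vec4  viewPosH = inversePrjMat * vec4( ndc, 1.0 );
        return viewPosH.xyz / viewPosH.w;
    }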




I managed to get it to work in the end. As it is a different method from the one above, I will detail it so that anyone who sees this has a solution.

  • Pass 1: store the view space depth value in the gbuffer.
  • To re-create the (x, y, z) position in the second pass:
  • Pass the horizontal and vertical field of view in radians into the shader.
  • Pass the near plane distance (near) into the shader (the distance from the camera position to the near plane).
  • Imagine a ray from the camera to the fragment position. This ray intersects the near plane at a certain position P. We have this position in NDC space and want to compute it in view space.
  • Now we have all the values we need in view space. We can use the law of similar triangles to find the actual fragment position P'.

    P = P_ndc * near * tan(fov/2.0f) // computation is the same for x, y
    // Note that by the law of similar triangles, P'.x / depth = P.x / near
    P'.xy = P / near * -depth; // -depth because in OpenGL the camera looks down the -z axis
    P'.z  = depth;
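
A minimal GLSL sketch of these steps (the uniform names are assumptions; depth is the negative view space z stored in the gbuffer, ndc is the fragment's xy in [-1, 1]):

    uniform vec2  fov;    // horizontal and vertical field of view in radians
    uniform float near;   // distance from the camera to the near plane

    vec3 getPosition( float depth, vec2 ndc )
    {
        // Position where the ray through the fragment hits the near plane (view space).
        vec2 P = ndc * near * tan( fov * 0.5 );
        vec3 pos;
        // By similar triangles: pos.xy / -depth = P / near
        // (the camera looks down -z, so the stored view space depth is negative).
        pos.xy = P / near * -depth;
        pos.z  = depth;
        return pos;
    }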

I wrote a deferred shader and used this code to recalculate the screen position:

    vec3 getFragmentPosition()
    {
        vec4 sPos = vec4(gl_TexCoord[0].x, gl_TexCoord[0].y, texture2D(depthTex, gl_TexCoord[0].xy).x, 1.0);
        sPos.z = 2.0 * sPos.z - 1.0;
        sPos = invPersp * sPos;
        return sPos.xyz / sPos.w;
    }

where depthTex is the texture holding the depth information, and invPersp is a pre-calculated inverse perspective matrix. You take the fragment's screen position and multiply it by the inverse perspective matrix to obtain the view space coordinates. Then you divide by w to resolve the homogeneous coordinates. Multiplying the depth by two and subtracting one scales it from [0, 1] (as it is stored in the texture) to [-1, 1].

Also, depending on which MRT format you use, the recalculated result will not be exactly equal to the stored information, since you lose floating point precision.


Source: https://habr.com/ru/post/899359/
