How to calculate FOV from VRFrameData?

The VREyeParameters field of view information was deprecated. So now I wonder: is it possible to compute it from the view / projection matrices provided by VRFrameData?

2 answers

The projection matrix describes the mapping from the three-dimensional points of a scene to the two-dimensional points of the viewport. It transforms from view space to clip space. Clip-space coordinates are homogeneous coordinates. The coordinates in clip space are converted to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.
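As a minimal illustration of that perspective divide, with a made-up clip-space point (plain JavaScript, no library needed):

```javascript
// Hypothetical clip-space coordinate (x, y, z, w), as a projection
// matrix would produce it.
const clip = [2.0, 1.0, 4.0, 4.0];

// Perspective divide: x, y and z are divided by the w component,
// yielding normalized device coordinates.
const ndc = [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];
// ndc is [0.5, 0.25, 1.0] — inside the NDC cube (-1..1 on each axis).
```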

With a perspective projection, the projection matrix describes the mapping from three-dimensional points in the world, as they are seen from a pinhole camera, to two-dimensional points on the viewport. The eye-space coordinates inside the camera frustum (a truncated pyramid) are mapped to a cube (normalized device coordinates).


If you want to know the corners of the camera frustum in view space, then you have to transform the corners of the normalized device space (-1, -1, -1), ..., (1, 1, 1) by the inverse projection matrix. To get Cartesian coordinates, the X, Y and Z components of the result have to be divided by the W (4th) component of the result.
glMatrix is a library that provides matrix operations and data types such as mat4 and vec4 :

    projection  = mat4.clone( VRFrameData.leftProjectionMatrix );
    inverse_prj = mat4.create();
    mat4.invert( inverse_prj, projection );

    pt_ndc = [-1, -1, -1];
    v4_ndc = vec4.fromValues( pt_ndc[0], pt_ndc[1], pt_ndc[2], 1 );

    v4_view = vec4.create();
    vec4.transformMat4( v4_view, v4_ndc, inverse_prj );
    pt_view = [v4_view[0]/v4_view[3], v4_view[1]/v4_view[3], v4_view[2]/v4_view[3]];

View-space coordinates can be transformed to world-space coordinates by the inverse view matrix.

    view         = mat4.clone( VRFrameData.leftViewMatrix );
    inverse_view = mat4.create();
    mat4.invert( inverse_view, view );

    v3_view  = vec3.clone( pt_view );
    v3_world = vec3.create();
    vec3.transformMat4( v3_world, v3_view, inverse_view );

Note that the left and right projection matrices are not symmetric. This means the line of sight is not in the center of the frustum, and the matrices are different for the left and the right eye.
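For such an asymmetric (off-axis) matrix, the four frustum half-angles can still be read off the matrix directly. A sketch in plain JavaScript, assuming the column-major flat-array layout glMatrix uses; the sample frustum extents are made up, not real VR data:

```javascript
// Read the four frustum half-angles (radians) from a column-major
// perspective matrix m, stored as a flat 16-element array.
function frustumAngles(m) {
  return {
    up:    Math.atan((m[9] + 1) / m[5]),  // tan(up)    = t / n
    down:  Math.atan((1 - m[9]) / m[5]),  // tan(down)  = -b / n
    right: Math.atan((m[8] + 1) / m[0]),  // tan(right) = r / n
    left:  Math.atan((1 - m[8]) / m[0]),  // tan(left)  = -l / n
  };
}

// Made-up asymmetric frustum: l = -0.5, r = 1.0, b = -0.5, t = 0.5, n = 1.
const n = 1, l = -0.5, r = 1.0, b = -0.5, t = 0.5;
const m = new Array(16).fill(0);
m[0]  = 2 * n / (r - l);
m[5]  = 2 * n / (t - b);
m[8]  = (r + l) / (r - l);
m[9]  = (t + b) / (t - b);
m[11] = -1;

const angles = frustumAngles(m);
// angles.right recovers atan(r/n), angles.left recovers atan(-l/n), etc.
```

The total vertical field of view is then `up + down`; for a symmetric matrix `up == down`.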


Furthermore, a perspective projection matrix looks like this:

    r = right, l = left, b = bottom, t = top, n = near, f = far

    2*n/(r-l)      0              0               0
    0              2*n/(t-b)      0               0
    (r+l)/(r-l)    (t+b)/(t-b)   -(f+n)/(f-n)    -1
    0              0             -2*f*n/(f-n)     0

Where:

    a  = w / h
    ta = tan( fov_y / 2 );

    2 * n / (r-l) = 1 / (ta * a)
    2 * n / (t-b) = 1 / ta

If the projection is symmetric, where the line of sight is in the center of the viewport and the field of view is not offset, then the matrix simplifies to:

    1/(ta*a)   0       0              0
    0          1/ta    0              0
    0          0      -(f+n)/(f-n)   -1
    0          0      -2*f*n/(f-n)    0

This means that the field of view can be calculated as follows:

 fov_y = Math.atan( 1/prjMat[5] ) * 2; // prjMat[5] is prjMat[1][1] 

and aspect ratio:

 aspect = prjMat[5] / prjMat[0]; 
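As a quick self-contained round-trip check of both formulas (plain JavaScript; the perspective() helper below is a hypothetical stand-in for glMatrix's mat4.perspective, building the symmetric matrix above):

```javascript
// Build a symmetric column-major perspective matrix as a flat 16-element array.
function perspective(fovY, aspect, near, far) {
  const ta = Math.tan(fovY / 2);
  const m = new Array(16).fill(0);
  m[0]  = 1 / (ta * aspect);
  m[5]  = 1 / ta;
  m[10] = -(far + near) / (far - near);
  m[11] = -1;
  m[14] = -2 * far * near / (far - near);
  return m;
}

const prjMat = perspective(Math.PI / 4, 16 / 9, 0.5, 100);

// Recover the field of view and aspect ratio from the matrix alone.
const fovY   = Math.atan(1 / prjMat[5]) * 2;  // ≈ Math.PI / 4
const aspect = prjMat[5] / prjMat[0];         // ≈ 16 / 9
```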


The field-of-view calculation also works if the projection matrix is only symmetric with respect to the horizontal axis, i.e. if -bottom == top. For the projection matrices of the two eyes this should be the case.

Furthermore:

    z_ndc = 2.0 * depth - 1.0;
    z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));

which, substituting the fields of the projection matrix, is:

    A = prj_mat[2][2]
    B = prj_mat[3][2]

    z_eye = B / (A + z_ndc)

This means the distances to the near plane and to the far plane can be calculated by:

    A = prj_mat[10]; // prj_mat[10] is prj_mat[2][2]
    B = prj_mat[14]; // prj_mat[14] is prj_mat[3][2]

    near = B / (A - 1);
    far  = B / (A + 1);
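Note the signs: evaluating z_eye = B / (A + z_ndc) from the previous step at the NDC extremes z_ndc = -1 and z_ndc = +1 yields the positive plane distances directly. A self-contained numerical check in plain JavaScript (near/far values chosen arbitrarily):

```javascript
// Known plane distances, chosen for the check.
const n = 0.5, f = 100;

// A and B as they appear in the perspective projection matrix.
const A = -(f + n) / (f - n);    // prj_mat[10]
const B = -2 * f * n / (f - n);  // prj_mat[14]

// The NDC extremes map back to the planes.
const near = B / (A - 1);  // z_ndc = -1 → ≈ 0.5
const far  = B / (A + 1);  // z_ndc = +1 → ≈ 100
```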

SOHCAHTOA is pronounced "So", "cah", "toe-ah"

  • SOH → Sine (angle) = Opposite over Hypotenuse
  • CAH → Cosine (angle) = Adjacent over Hypotenuse
  • TOA → Tangent (angle) = Opposite over Adjacent


It tells us the relationships between the sides of a right triangle and the various trigonometric functions.

So, looking at the image of the frustum, we can take the right triangle from the eye to the near plane to the top of the frustum to compute the tangent of half the field of view, and we can use the arctangent to turn a tangent back into an angle.


Since we know the projection matrix takes our world-space frustum and converts it to clip space and, ultimately, to normalized device space (-1, -1, -1) to (+1, +1, +1), we can get the positions we need by multiplying the corresponding points in NDC space by the inverse of the projection matrix:

    eye = 0,0,0
    centerAtNearPlane    = inverseProjectionMatrix * (0, 0, -1)
    topCenterAtNearPlane = inverseProjectionMatrix * (0, 1, -1)

Then

    opposite = topCenterAtNearPlane.y
    adjacent = -centerAtNearPlane.z
    halfFieldOfView = Math.atan2(opposite, adjacent)
    fieldOfView = halfFieldOfView * 2

Let's test:

    const m4 = twgl.m4;
    const fovValueElem = document.querySelector("#fovValue");
    const resultElem = document.querySelector("#result");

    let fov = degToRad(45);

    function updateFOV() {
      fovValueElem.textContent = radToDeg(fov).toFixed(1);

      // get a projection matrix from somewhere (like VR)
      const projection = getProjectionMatrix();

      // now that we have a projection matrix, recompute the FOV from it
      const inverseProjection = m4.inverse(projection);
      const centerAtZNear = m4.transformPoint(inverseProjection, [0, 0, -1]);
      const topCenterAtZNear = m4.transformPoint(inverseProjection, [0, 1, -1]);

      const opposite = topCenterAtZNear[1];
      const adjacent = -centerAtZNear[2];
      const halfFieldOfView = Math.atan2(opposite, adjacent);
      const fieldOfView = halfFieldOfView * 2;

      resultElem.textContent = radToDeg(fieldOfView).toFixed(1);
    }
    updateFOV();

    function getProjectionMatrix() {
      // doesn't matter. We just want a projection matrix as though
      // someone else made it for us.
      const aspect = 2 / 1;
      // choose some zNear and zFar
      const zNear = .5;
      const zFar = 100;
      return m4.perspective(fov, aspect, zNear, zFar);
    }

    function radToDeg(rad) {
      return rad / Math.PI * 180;
    }

    function degToRad(deg) {
      return deg / 180 * Math.PI;
    }

    document.querySelector("input").addEventListener('input', (e) => {
      fov = degToRad(parseInt(e.target.value));
      updateFOV();
    });
    <script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
    <input id="fov" type="range" min="1" max="179" value="45"><label>fov: <span id="fovValue"></span></label>
    <div>computed fov: <span id="result"></span></div>

Note that this assumes the center of the frustum is directly in front of the eye. If it is not, you would probably have to compute adjacent as the length of the vector from the eye to centerAtZNear:

    const v3 = twgl.v3;
    ...
    const adjacent = v3.length(centerAtZNear);

Source: https://habr.com/ru/post/1274952/

