Why use camera space instead of model space for normals?

I'm learning OpenGL graphics and have gotten to lighting and shading. The textbooks I'm reading tell me to transform my normals and light vector into camera space. Why is this? Why can't you just keep the coordinates in model space?

The next question: how do I handle model transformations? I can't find a definitive answer. I have this code:

    vec3 normCamSpace = normalize(mat3(V) * normal);
    vec3 dirToLight = (V * vec4(lightPos, 0.0)).xyz;
    float cosTheta = clamp(dot(normCamSpace, dirToLight), 0.0, 1.0);

Here V is the view (camera) matrix. I'm not sure how to move or adjust the light when the model changes position, rotation, and scale.

2 answers

The main reason is that your light positions will usually not be given in model space, but in world (global) space. However, for lighting to work, all calculations must happen in a common space. In the usual transformation chain, the model's local coordinates are transformed by the modelview matrix directly into view space:

 p_view = MV · p_local 

Since you usually have only a single combined modelview matrix, it would be cumbersome to split this step into something like

 p_world = M · p_local
 p_view = V · p_world

For that you would have to provide M and V separately.
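As a concrete picture of that split, here is a minimal, hypothetical vertex-shader sketch with M, V and P supplied as separate uniforms (the uniform names are assumptions, not from the post):

    #version 330 core

    // Hypothetical sketch of the split described above; M, V and P are
    // assumed uniform names, uploaded separately by the application.
    uniform mat4 M;   // model matrix
    uniform mat4 V;   // view matrix
    uniform mat4 P;   // projection matrix

    in vec3 position; // model-space vertex position

    void main()
    {
        vec4 p_world = M * vec4(position, 1.0);   // p_world = M · p_local
        vec4 p_view  = V * p_world;               // p_view  = V · p_world
        gl_Position  = P * p_view;
    }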

Since the projection transformation traditionally happens as a separate step, view space is the natural "common ground" in which to do the lighting calculations. It just involves transforming your light positions from world space to view space, and since light positions are not very complex, this is done on the CPU and the pre-transformed light positions are supplied to the shader as uniforms.
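As a rough sketch of that setup (variable names such as lightPosView and positionView are assumptions, not from the question): the application computes V * vec4(lightPosWorld, 1.0) once per frame and uploads it, and the fragment shader then works entirely in view space.

    #version 330 core

    // Hypothetical fragment shader: diffuse lighting done entirely in view space.
    // lightPosView is assumed to be pre-transformed on the CPU as
    // V * vec4(lightPosWorld, 1.0) and uploaded as a uniform.
    in vec3 positionView;   // surface position in view space (from the vertex shader)
    in vec3 normalView;     // surface normal in view space (from the vertex shader)

    uniform vec3 lightPosView;

    out vec4 fragColor;

    void main()
    {
        vec3 n = normalize(normalView);
        vec3 l = normalize(lightPosView - positionView);  // direction to the light
        float cosTheta = clamp(dot(n, l), 0.0, 1.0);
        fragColor = vec4(vec3(cosTheta), 1.0);            // plain diffuse term
    }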

Note that there is nothing stopping you from doing the lighting calculations in world space, or even in the model's local space. It just requires transforming the light positions correctly.
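For instance, a hedged sketch of the model-space variant (the uniform names lightPosModel and MVP are mine, not from the answer): the application transforms the world-space light by the inverse model matrix, so the shader never has to leave the model's own space.

    #version 330 core

    // Hypothetical vertex shader: lighting data kept in model space.
    // lightPosModel is assumed to be computed on the CPU as
    // inverse(M) * vec4(lightPosWorld, 1.0) and uploaded as a uniform.
    uniform mat4 MVP;            // combined model-view-projection matrix
    uniform vec3 lightPosModel;  // light position already in model space

    in vec3 position;   // model-space vertex position
    in vec3 normal;     // model-space normal

    out vec3 dirToLight;
    out vec3 normalOut;

    void main()
    {
        // Both vectors stay in model space; the normal needs no transform as
        // long as the model is only translated and rotated (no non-uniform scale).
        dirToLight  = normalize(lightPosModel - position);
        normalOut   = normal;
        gl_Position = MVP * vec4(position, 1.0);
    }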


I'm learning OpenGL graphics and have gotten to lighting and shading. The textbooks I'm reading tell me to transform my normals and light vector into camera space. Why is this? Why can't you just keep the coordinates in model space?

Actually, if you are writing a shader, you can use whatever coordinate space you like. IMO, computing lighting in world space feels more "natural", but that is a matter of taste.

However, there are two small details:

  • You cannot "naturally" calculate lighting in object space if your object is a skinned mesh (a character model animated by bones). Such a model requires world space or view space. If your object is only translated and rotated (a rigid transformation), then lighting can easily be calculated in model/object space. I think some game engines actually worked that way.
  • If you use camera space, you can save one subtraction when calculating specular highlights. The Blinn/Phong specular models need an eye (view) vector to compute the specular factor. In camera space, the vector from the eye to the surface point is simply the point's position, since the eye sits at the origin. This is a very small optimization and probably not worth the effort (see the sketch after this list).
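A small sketch of that shortcut (the function and parameter names are assumptions): in view space the eye sits at the origin, so the direction from the surface point to the eye is just the negated view-space position, and the usual eyePos - position subtraction disappears.

    // Hypothetical GLSL helper; all inputs are assumed to be in view space.
    float blinnPhongSpecular(vec3 positionView, vec3 normalView,
                             vec3 dirToLightView, float shininess)
    {
        // The eye is at the origin in view space, so the view direction is
        // simply the negated position -- no eye-position uniform is needed.
        vec3 viewDir = normalize(-positionView);
        vec3 halfVec = normalize(viewDir + normalize(dirToLightView));
        return pow(clamp(dot(normalize(normalView), halfVec), 0.0, 1.0), shininess);
    }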
