I'm using DOT3 texture-combiner lighting for terrain on older iPhones, and I'm wondering whether there is a non-obvious way to make it correct even when the point of view changes.
In "real" lighting, the normals are transformed using the inverse matrix of the model. Thanks to the texture DOT3-lighting there is no conversion.
With shader-based normal mapping, the normal map is in tangent space. With DOT3 combiner lighting, the normal map effectively has to be in eye space, which only works if you have a fixed viewpoint or your geometry is flat.
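For comparison, this is roughly the per-vertex work the shader path does (a sketch, assuming an orthonormal per-vertex tangent basis; the names are mine): the light direction is rotated into tangent space, so a tangent-space normal map stays correct under any view. On the combiner path this would have to be redone on the CPU every frame, e.g. written into per-vertex colors, which is what I'm trying to avoid on these devices:

```c
/* Sketch: bring the light direction into a vertex's tangent space. */
typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* light_obj: light direction in object space (already brought there with the
 * inverse model matrix).  t/b/n: the vertex's tangent, bitangent and normal. */
static vec3 light_to_tangent_space(vec3 light_obj, vec3 t, vec3 b, vec3 n)
{
    vec3 out = { dot3(light_obj, t), dot3(light_obj, b), dot3(light_obj, n) };
    return out;
}
```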
Should I just accept this as another limitation of DOT3 lighting? Since it can't handle specular either, I'm starting to doubt how useful it really is.