I have managed to implement the lookAt matrix for the camera, as there are many sources describing it.
I am now trying to translate this from the camera to a model, i.e. have a model look at a lookAt point. I cannot get it to work, and I assume I have a slight misunderstanding of how the matrix is constructed. I believe I should not need to change the model's translation to make it look at the point, since the translation should remain unchanged.
First of all, here is the relevant code. The lookAtRadians function should make the transform look at the point specified in the same frame of reference as its translation (i.e., direction = at - position). However, there are some problems, which I will show with screenshots. The function does not yet guard against direction.y() being 1.0f or -1.0f (which would make the cross product with the up vector degenerate), but that is trivial to add.
void TransformMatrix3D::lookAtRadians(float atX, float atY, float atZ, float toZRadians)
{
    Vector3D direction(atX - x(), atY - y(), atZ - z());
    direction.normalize();

    Vector3D up(0.0f, 1.0f, 0.0f);
    Vector3D right(direction.crossProduct(up));
    right.normalize();
    up = direction.crossProduct(right);

    mMatrix[0] = right.x();
    mMatrix[4] = right.y();
    mMatrix[8] = right.z();

    mMatrix[1] = up.x();
    mMatrix[5] = up.y();
    mMatrix[9] = up.z();

    mMatrix[2]  = direction.x();
    mMatrix[6]  = direction.y();
    mMatrix[10] = direction.z();
}
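For context, this is roughly how I call it on the cube described further down. The setup here is only a sketch, and translateTo is a stand-in name for however the translation gets set, not an actual function from my project:

TransformMatrix3D cubeTransform;
// Hypothetical setter; in my code the translation simply ends up in mMatrix[12], [13], [14].
cubeTransform.translateTo(0.0f, 0.0f, -10.0f);
// Aim the cube at the point (5, 3, 0) in the same frame as its translation; no roll yet.
cubeTransform.lookAtRadians(5.0f, 3.0f, 0.0f, 0.0f);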
Here are the crossProduct and normalize functions as well, in case they are incorrect.
Vector3D Vector3D::crossProduct(const Vector3D& rightVector) const
{
    const float NEW_X(y() * rightVector.z() - z() * rightVector.y());
    const float NEW_Y(z() * rightVector.x() - x() * rightVector.z());
    const float NEW_Z(x() * rightVector.y() - y() * rightVector.x());
    return Vector3D(NEW_X, NEW_Y, NEW_Z);
}

void Vector3D::normalize()
{
    float length(x() * x() + y() * y() + z() * z());
    if(fabs(length) == 1.0f)
        return;

    length = 1.0f / sqrt(length);
    moveTo(x() * length, y() * length, z() * length);
}
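As a quick sanity check of these two helpers (just a throwaway snippet, not code from my project), the standard basis vectors behave as expected:

Vector3D xAxis(1.0f, 0.0f, 0.0f);
Vector3D yAxis(0.0f, 1.0f, 0.0f);
Vector3D zAxis(xAxis.crossProduct(yAxis)); // gives (0, 0, 1), i.e. a right-handed cross product

Vector3D v(3.0f, 0.0f, 4.0f);
v.normalize();                             // gives (0.6, 0, 0.8), length 1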
Here are some screenshots to illustrate my problem. The white sphere indicates the viewpoint.
I created a cube translated -10.0f along the Z axis (this sets mMatrix[12], mMatrix[13] and mMatrix[14] to 0.0f, 0.0f and -10.0f respectively; the rest of the matrix is the identity, and I have checked that it is), which I will use to demonstrate the problems.
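For reference, this is the starting matrix I am assuming for that cube (column-major, OpenGL-style, with the translation in indices 12 to 14):

// Cube transform before lookAtRadians is called: identity rotation,
// translation (0, 0, -10) in mMatrix[12], mMatrix[13], mMatrix[14].
float mMatrix[16] = {
     1.0f, 0.0f,   0.0f, 0.0f,
     0.0f, 1.0f,   0.0f, 0.0f,
     0.0f, 0.0f,   1.0f, 0.0f,
     0.0f, 0.0f, -10.0f, 1.0f
};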
Screenshot: No rotation
If I move the lookAt point along only the X axis or only the Y axis, lookAt appears to work correctly.
Screenshot: X axis (Y rotation)
Screenshot: Y axis (X rotation)
However, when I combine the two (that is, move the lookAt point so that both its X and Y are non-zero), some Z rotation is applied, which should not happen, since crossing DIRECTION with UP should always produce a RIGHT vector whose y() is 0.0f. Z rotation is only supposed to be applied via toZRadians (which is not yet implemented).
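To show why I expect no Z rotation, here is the cross product written out with throwaway numbers (not code from my project):

// right = direction x up with up = (0, 1, 0), using the same formula as Vector3D::crossProduct:
//   right.x = d.y * 0 - d.z * 1 = -d.z
//   right.y = d.z * 0 - d.x * 0 =  0
//   right.z = d.x * 1 - d.y * 0 =  d.x
// e.g. for a direction with both X and Y non-zero:
const float dX = 0.3f, dY = 0.5f, dZ = -0.8f;
const float rightX = dY * 0.0f - dZ * 1.0f; // 0.8
const float rightY = dZ * 0.0f - dX * 0.0f; // 0.0 -- always zero, whatever the direction
const float rightZ = dX * 1.0f - dY * 0.0f; // 0.3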
Screenshot: Z rotation added
I also found that if I then move the viewpoint down the Y axis, the model still follows the lookAt point, but it actually rotates around the global X axis (or something equivalent to it, at least).
Screenshot: Global rotation X
Now, when the lookAt point is moved to -Z, the model has the correct Y rotation, but the X rotation is inverted. I checked my vectors at this point and found that UP.y() is negative, which should not be possible (it can be 0.0f, but never negative), since DIRECTION and RIGHT should always wind the same way (i.e., RIGHT should always be a clockwise turn from DIRECTION). The only way UP.y() can be negative is if RIGHT is actually LEFT.
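This is essentially the check I did, reusing the Vector3D class from above with the same cross product order as lookAtRadians; the positions are made up for illustration (cube at (0, 0, -10), lookAt point further down -Z and slightly above it):

Vector3D direction(0.0f - 0.0f, 3.0f - 0.0f, -20.0f - (-10.0f)); // at - position = (0, 3, -10)
direction.normalize();

Vector3D up(0.0f, 1.0f, 0.0f);
Vector3D right(direction.crossProduct(up));
right.normalize();
up = direction.crossProduct(right); // same order as in lookAtRadians

// up.y() comes out at roughly -0.96 here, i.e. negative, which matches what I am seeing.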
Screenshot: Inverted rotation X
The model still rotates around the global X axis, just as it did when the lookAt point was at +Z.
Screenshot: Global rotation X (lookAt -Z)
As I mentioned, this is probably a misunderstanding of how the matrix works on my part, but it could be something else. I have been searching for several days and can only find camera-based lookAt implementations, and any source explaining which axes are contained where in the matrix leads to code like the one presented in this post.