How to export bone bind poses and keyframes from Blender for use in OpenGL

**EDIT 2:**

Please refrain from pointing out performance / optimization issues; this is not final code at all. I'm just trying to get the basics working :). Once everything is settled, I will do a cleanup and optimization pass.

**EDIT:** I decided to reformulate the question in much simpler terms, in the hope that someone can give me a hand with this.

Basically, I export meshes, skeletons, and actions from Blender to an engine I maintain, but the animations come out wrong. The broad motions are there, but there is always some axis of translation or rotation that is off. I suspect the problem is most likely not in my engine code (OpenGL-based), but rather that I either misunderstand some part of the theory behind skeletal animation / skinning, or that I export the wrong matrices from Blender in my exporter script.

I will explain the theory, the engine's animation system, and my Blender export script, hoping that someone can spot the error in any of them.

Theory: (I use column vectors and post-multiplication throughout, since the engine is OpenGL-based.)

  • Suppose I have a mesh consisting of a single vertex v, together with a transform matrix M that takes v from the mesh's local space to world space. That is, if I were to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v.
  • Now suppose I have a skeleton with a single joint j in its bind / rest pose. j is really just another matrix: the transform from j's local space to its parent's space, which I denote Bj. If j were part of a joint hierarchy, Bj would take vertices from j's space to the space of j's parent. In this example, however, j is the only joint, so Bj takes vertices from j's space to world space, just as M does for v.
  • Now suppose I have a set of keyframes, each with a second transform Cj that works just like Bj but for a different, arbitrary spatial configuration of j: Cj still takes vertices from j's space to world space, but with j rotated and/or translated and/or scaled.

Given the above, in order to skin the vertex v at keyframe n, I need to:

  • take v from world space to j's joint space
  • transform j (v stays fixed in j's space and is thus carried along by the transformation)
  • take v from the modified j space back to world space

Thus, the math implementing the above is: v' = Cj * Bj^-1 * v. Actually, I have one doubt here. I said that the mesh v belongs to has a transform M that takes it from model space to world space, and I have also read in several textbooks that the inverse bind transform should take vertices from model space to joint space. But I also said in step 1 that v needs to go from world space to joint space. So I am not sure whether I should compute v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiplies v' by M rather than v; I have tried swapping that, and it just distorts things in a different way, because something else is wrong elsewhere.

  • Finally, if we attach the vertex to a joint j1 that is in turn a child of joint j0, the full bind transform becomes Bj0 * Bj1 and the full current transform becomes Cj0 * Cj1. Since skinning is defined as v' = Cj * Bj^-1 * v, the inverse (Bj0 * Bj1)^-1 is the reversed product of the individual inverses. That is: v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v (see the sketch right after this list).
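
To make the two-joint case concrete, here is a minimal, self-contained sketch of that last equation using Blender's mathutils (2.7x-style * multiplication). All the transforms here are made-up example values, not data from my exporter:

    # Sketch of v' = C0 * C1 * B1^-1 * B0^-1 * v for a two-joint chain.
    # B* are bind transforms (joint local -> parent), C* the current ones.
    from math import radians
    from mathutils import Matrix, Vector

    B0 = Matrix.Translation((0, 1, 0))   # bind: j0 local -> world
    B1 = Matrix.Translation((0, 1, 0))   # bind: j1 local -> j0
    C0 = Matrix.Translation((0, 1, 0)) * Matrix.Rotation(radians(45), 4, "Z")
    C1 = Matrix.Translation((0, 1, 0))   # j1 unchanged relative to j0

    v = Vector((0.5, 2.0, 0.0, 1.0))     # a vertex near j1, in bind/world space

    skinned = C0 * C1 * B1.inverted() * B0.inverted() * v
    print(skinned)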

Now for the implementation (Blender side):

Suppose the following mesh, a single cube whose vertices are all attached to the one joint of a single-joint skeleton:

(image: a cube mesh bound to a single-joint armature, shown in its bind / rest pose)

Assume also a 60-frame animation with 3 keyframes, played back at 60 frames per second. The animation is essentially this (a small time-to-keyframe sketch follows the list):

  • keyframe 0: the joint is in its bind / rest pose (as seen in the image above).
  • keyframe 30: the joint translates a certain amount along +Z (in Blender's coordinates) and simultaneously rotates by pi/4 clockwise.
  • keyframe 59: the joint returns to the same configuration as at keyframe 0.
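
Here is the sketch mentioned above: how a looping time value in [0, 1) maps onto the surrounding keyframes of this clip (plain Python, mirroring what my engine loop further below does; all names are mine):

    # Sketch: map looped time in [0, 1) to left/right keyframes and a blend
    # factor for a 60-frame clip at 60 fps with keyframes at frames 0, 30, 59.
    KEYFRAMES = [0, 30, 59]

    def surrounding_keyframes(time):
        frame = 60 * time
        left = max(k for k in KEYFRAMES if k <= frame)
        later = [k for k in KEYFRAMES if k > frame]
        right = later[0] if later else KEYFRAMES[-1]
        blend = (frame - left) / (right - left) if right != left else 0.0
        return left, right, blend

    print(surrounding_keyframes(0.25))   # -> (0, 30, 0.5), halfway between kf 0 and 30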

My first source of confusion on the Blender side is its coordinate system (different from OpenGL's default) and the various matrices available through its Python API.

Right now, this is how my export script converts from Blender's coordinate system to the standard OpenGL one:

    # World transform: Blender -> OpenGL
    worldTransform = Matrix().Identity(4)
    worldTransform *= Matrix.Scale(-1, 4, (0,0,1))
    worldTransform *= Matrix.Rotation(radians(90), 4, "X")

    # Mesh (local) transform matrix
    file.write('Mesh Transform:\n')
    localTransform = mesh.matrix_local.copy()
    localTransform = worldTransform * localTransform
    for col in localTransform.col:
        file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3]))
    file.write('\n')

With that, my "world" matrix is basically the change of basis from Blender's coordinate system to GL's default one, with +Y up, +X to the right, and -Z into the viewing volume. I also pre-multiply the mesh matrix M by it at export time (pre in the sense that it is already applied by the time the data reaches the engine, not in the sense of pre- vs. post-multiplication order), so I do not need to multiply it once more on every draw call in the engine.
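
A quick way to sanity-check that intent is to print where Blender's basis axes land under this conversion (Blender is Z-up; GL's default is Y-up with -Z going into the screen). This is just a throwaway check, not part of the exporter:

    # Throwaway check: where do Blender's basis axes end up under the conversion?
    from math import radians
    from mathutils import Matrix, Vector

    conv = Matrix.Identity(4)
    conv *= Matrix.Scale(-1, 4, (0, 0, 1))        # mirror along the Z axis
    conv *= Matrix.Rotation(radians(90), 4, "X")  # +90 degrees about X

    for name, axis in (("+X", (1, 0, 0)), ("+Y", (0, 1, 0)), ("+Z", (0, 0, 1))):
        print(name, "->", conv * Vector(axis))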

As for which matrices to extract for Blender's joints (bones, in Blender terminology), I do the following:

  • For the joint bind poses:

    def DFSJointTraversal(file, skeleton, jointList):
        for joint in jointList:
            bindPoseJoint = skeleton.data.bones[joint.name]
            bindPoseTransform = bindPoseJoint.matrix_local.inverted()
            file.write('Joint ' + joint.name + ' Transform {\n')
            translationV = bindPoseTransform.to_translation()
            rotationQ = bindPoseTransform.to_3x3().to_quaternion()
            scaleV = bindPoseTransform.to_scale()
            file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
            file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
            # Opening quote restored here; the 'S' prefix is assumed, matching 'T'/'Q'.
            file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
            DFSJointTraversal(file, skeleton, joint.children)
            file.write('}\n')

Note that I actually grab the inverse of what I believe is the bind pose transform Bj, so that I do not have to invert it in the engine. Also note that I went with matrix_local, taking it to be Bj. The other option is plain "matrix", which as far as I can tell is the same thing in non-homogeneous (3x3) form.
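
One thing I keep wondering about here (this is just a sketch of the alternative, not something I have verified): the API docs describe Bone.matrix_local as being relative to the armature, i.e. armature space, not parent space. If that is the case, and the engine concatenates parent transforms itself, a parent-relative bind transform would be derived like this:

    # Sketch: parent-relative bind transform, assuming matrix_local is in
    # armature space (as the API docs state) rather than parent space.
    def parentRelativeBind(bone):
        if bone.parent is None:
            return bone.matrix_local.copy()
        return bone.parent.matrix_local.inverted() * bone.matrix_local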

  • For the joints' current / keyframe poses:

    for kfIndex in keyframes:
        bpy.context.scene.frame_set(kfIndex)
        file.write('keyframe: {:d}\n'.format(int(kfIndex)))
        for i in range(0, len(skeleton.data.bones)):
            file.write('joint: {:d}\n'.format(i))
            currentPoseJoint = skeleton.pose.bones[i]
            currentPoseTransform = currentPoseJoint.matrix
            translationV = currentPoseTransform.to_translation()
            rotationQ = currentPoseTransform.to_3x3().to_quaternion()
            scaleV = currentPoseTransform.to_scale()
            file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
            file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
            # Opening quote restored here; the 'S' prefix is assumed, as above.
            file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
            file.write('\n')

Notice that here I use skeleton.pose.bones instead of data.bones, and that there is a choice of 3 matrices: matrix, matrix_basis, and matrix_channel. From the descriptions in the Python API docs I am not sure which one to pick, though my guess is plain matrix. Also note that I do not invert the matrix in this case.
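
The same doubt as with the bind pose applies here (again, only a hypothetical sketch): pose.bones[i].matrix is the final armature-space matrix after constraints and drivers, so a parent-relative current pose would look like:

    # Sketch: parent-relative current pose, assuming pose_bone.matrix is the
    # final armature-space matrix (after constraints/drivers are applied).
    def parentRelativePose(pose_bone):
        if pose_bone.parent is None:
            return pose_bone.matrix.copy()
        return pose_bone.parent.matrix.inverted() * pose_bone.matrix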

Implementation (Engine / OpenGL side):

My animation subsystem does the following on each update (I have omitted the parts of the update loop that figure out which objects need updating, and time is hard-coded here for simplicity):

    static double time = 0;
    time = fmod((time + elapsedTime), 1.);
    uint16_t LERPKeyframeNumber = 60 * time;
    uint16_t lkeyframeNumber = 0;
    uint16_t lkeyframeIndex = 0;
    uint16_t rkeyframeNumber = 0;
    uint16_t rkeyframeIndex = 0;
    // Find the keyframes to the left and right of the current frame number.
    for (int i = 0; i < aClip.keyframesCount; i++) {
        uint16_t keyframeNumber = aClip.keyframes[i].number;
        if (keyframeNumber <= LERPKeyframeNumber) {
            lkeyframeIndex = i;
            lkeyframeNumber = keyframeNumber;
        } else {
            rkeyframeIndex = i;
            rkeyframeNumber = keyframeNumber;
            break;
        }
    }
    double lTime = lkeyframeNumber / 60.;
    double rTime = rkeyframeNumber / 60.;
    double blendFactor = (time - lTime) / (rTime - lTime);
    GLKMatrix4 bindPosePalette[aSkeleton.jointsCount];
    GLKMatrix4 currentPosePalette[aSkeleton.jointsCount];
    for (int i = 0; i < aSkeleton.jointsCount; i++) {
        // Interpolate the TQS components between the two keyframe poses.
        F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i];
        F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i];
        GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor);
        GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor);
        GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor);
        GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation);
        currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation);
        currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling);
        // Rebuild the (already inverted on export) bind transform the same way.
        GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q);
        inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t);
        inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s);
        // Concatenate down the hierarchy (assumes parent joints precede children in the array).
        if (aSkeleton.joints[i].parentIndex == -1) {
            bindPosePalette[i] = inverseBindTransform;
            currentPosePalette[i] = currentTransform;
        } else {
            bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]);
            currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform);
        }
        aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]);
    }
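
A detail worth double-checking in the loop above: GLKit's GLKMatrix4TranslateWithVector3(M, t) post-multiplies (it computes M * T), so building currentTransform starting from the rotation matrix yields R * T * S rather than the conventional T * R * S. The two differ whenever the rotation is not identity, as this small mathutils sketch (the same math, just easier to poke at outside the engine) shows:

    # Sketch: R*T*S vs. T*R*S. In the former, the translation happens in
    # already-rotated space, so a pure "move up" acquires sideways components.
    from math import radians
    from mathutils import Matrix

    T = Matrix.Translation((0, 2, 0))
    R = Matrix.Rotation(radians(45), 4, "Z")

    print((R * T).to_translation())   # rotated translation: ~(-1.414, 1.414, 0)
    print((T * R).to_translation())   # plain translation:    (0, 2, 0)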

Finally, this is my vertex shader:

    #version 100

    uniform mat4 modelMatrix;
    uniform mat3 normalMatrix;
    uniform mat4 projectionMatrix;
    uniform mat4 skinningPalette[6];
    uniform lowp float skinningEnabled;

    attribute vec4 position;
    attribute vec3 normal;
    attribute vec2 tCoordinates;
    attribute vec4 jointsWeights;
    attribute vec4 jointsIndices;

    varying highp vec2 tCoordinatesVarying;
    varying highp float lIntensity;

    void main() {
        tCoordinatesVarying = tCoordinates;

        // Blend the vertex position by the (up to) 4 joint influences.
        vec4 skinnedVertexPosition = vec4(0.);
        for (int i = 0; i < 4; i++) {
            skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position;
        }

        // Same palette applied to the normal (w = 0 ignores translation).
        vec4 skinnedNormal = vec4(0.);
        for (int i = 0; i < 4; i++) {
            skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.);
        }

        vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled);
        vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, skinningEnabled);

        vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz);
        vec3 lightPosition = vec3(0., 0., 2.);
        lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition)));

        gl_Position = projectionMatrix * modelMatrix * finalPosition;
    }

The result is that the animation plays with the wrong orientation. Instead of bouncing up and down, the cube moves in and out (along what I believe is the Z axis, given my coordinate conversion in the export script), and the rotation runs counterclockwise instead of clockwise.

If I try with more than one joint, it is almost as if the second joint rotates in its own, different coordinate space and does not fully follow its parent's transform. That behavior comes from my animation subsystem, which I assume in turn follows the theory I explained above for the multi-joint case.

Any thoughts?
