Facial expression animation (blendshapes) in OpenGL (C++)

I am trying to implement morph target animation in OpenGL using facial blendshapes, following this tutorial. The vertex shader for the animation looks something like this:

    #version 400 core

    in vec3 vNeutral;
    in vec3 vSmile_L;
    in vec3 nNeutral;
    in vec3 nSmile_L;
    in vec3 vSmile_R;
    in vec3 nSmile_R;

    uniform float left;
    uniform float right;
    uniform float top;
    uniform float bottom;
    uniform float near;
    uniform float far;

    uniform vec3 cameraPosition;
    uniform vec3 lookAtPosition;
    uniform vec3 upVector;
    uniform vec4 lightPosition;

    out vec3 lPos;
    out vec3 vPos;
    out vec3 vNorm;

    uniform vec3 pos;
    uniform vec3 size;
    uniform mat4 quaternion;
    uniform float smile_w;

    void main(){
        //float smile_l_w = 0.9;
        float neutral_w = 1 - 2 * smile_w;
        clamp(neutral_w, 0.0, 1.0);

        vec3 vPosition = neutral_w * vNeutral + smile_w * vSmile_L + smile_w * vSmile_R;
        vec3 vNormal = neutral_w * nNeutral + smile_w * nSmile_L + smile_w * nSmile_R;
        //vec3 vPosition = vNeutral + (vSmile_L - vNeutral) * smile_w;
        //vec3 vNormal = nNeutral + (nSmile_L - nNeutral) * smile_w;

        normalize(vPosition);
        normalize(vNormal);

        mat4 translate = mat4(1.0, 0.0, 0.0, 0.0,
                              0.0, 1.0, 0.0, 0.0,
                              0.0, 0.0, 1.0, 0.0,
                              pos.x, pos.y, pos.z, 1.0);

        mat4 scale = mat4(size.x, 0.0, 0.0, 0.0,
                          0.0, size.y, 0.0, 0.0,
                          0.0, 0.0, size.z, 0.0,
                          0.0, 0.0, 0.0, 1.0);

        mat4 model = translate * scale * quaternion;

        vec3 n = normalize(cameraPosition - lookAtPosition);
        vec3 u = normalize(cross(upVector, n));
        vec3 v = cross(n, u);

        mat4 view = mat4(u.x, v.x, n.x, 0,
                         u.y, v.y, n.y, 0,
                         u.z, v.z, n.z, 0,
                         dot(-u, cameraPosition), dot(-v, cameraPosition), dot(-n, cameraPosition), 1);

        mat4 modelView = view * model;

        float p11 = ((2.0*near)/(right-left));
        float p31 = ((right+left)/(right-left));
        float p22 = ((2.0*near)/(top-bottom));
        float p32 = ((top+bottom)/(top-bottom));
        float p33 = -((far+near)/(far-near));
        float p43 = -((2.0*far*near)/(far-near));

        mat4 projection = mat4(p11, 0, 0, 0,
                               0, p22, 0, 0,
                               p31, p32, p33, -1,
                               0, 0, p43, 0);

        //lighting calculation
        vec4 vertexInEye = modelView * vec4(vPosition, 1.0);
        vec4 lightInEye = view * lightPosition;
        vec4 normalInEye = normalize(modelView * vec4(vNormal, 0.0));

        lPos = lightInEye.xyz;
        vPos = vertexInEye.xyz;
        vNorm = normalInEye.xyz;

        gl_Position = projection * modelView * vec4(vPosition, 1.0);
    }
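For context, here is a rough sketch of how the six blendshape attributes and the smile_w uniform are fed to this shader from the C++ side. The attribute locations, buffer layout, and names (arrays, prog, timeSeconds, vertexCount) are illustrative rather than my exact code, and a GL loader, GLM and <cmath> are assumed to be included.

    // Illustrative setup: one VBO per morph target attribute. Locations 0..5 are
    // assumed to have been bound to vNeutral, vSmile_L, vSmile_R, nNeutral,
    // nSmile_L, nSmile_R (e.g. via glBindAttribLocation before linking).
    std::vector<glm::vec3> arrays[6];   // filled from the neutral mesh and the FaceShift blendshapes
    GLuint vbos[6];
    glGenBuffers(6, vbos);
    for (int i = 0; i < 6; ++i) {
        glBindBuffer(GL_ARRAY_BUFFER, vbos[i]);
        glBufferData(GL_ARRAY_BUFFER, arrays[i].size() * sizeof(glm::vec3),
                     arrays[i].data(), GL_STATIC_DRAW);
        glEnableVertexAttribArray(i);
        glVertexAttribPointer(i, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    }

    // Per frame: animate the blend weight in [0, 1] and draw.
    float smile_w = 0.5f * (1.0f + std::sin(timeSeconds));
    glUniform1f(glGetUniformLocation(prog, "smile_w"), smile_w);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);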

Although the morph target blending itself works, I get missing faces in the final computed blend shape. The animation looks something like the following GIF.

Morph Target Facial Animation

The blendshapes were exported from the markerless facial animation software FaceShift.

However, the same algorithm works fine on a simple cuboid and its twisted blend shape created in Blender:

Cube Twist Morph Target Animation

Could something be wrong with the blendshapes I am using to animate the face? Or am I doing something wrong in the vertex shader?

-------------------- Update --------------------

So, as suggested, I made the necessary changes to the vertex shader and rendered a new animation, yet I get the same result.

Here's the updated vertex shader code:

    #version 400 core

    in vec3 vNeutral;
    in vec3 vSmile_L;
    in vec3 nNeutral;
    in vec3 nSmile_L;
    in vec3 vSmile_R;
    in vec3 nSmile_R;

    uniform float left;
    uniform float right;
    uniform float top;
    uniform float bottom;
    uniform float near;
    uniform float far;

    uniform vec3 cameraPosition;
    uniform vec3 lookAtPosition;
    uniform vec3 upVector;
    uniform vec4 lightPosition;

    out vec3 lPos;
    out vec3 vPos;
    out vec3 vNorm;

    uniform vec3 pos;
    uniform vec3 size;
    uniform mat4 quaternion;
    uniform float smile_w;

    void main(){
        float neutral_w = 1.0 - smile_w;
        float neutral_f = clamp(neutral_w, 0.0, 1.0);

        vec3 vPosition = neutral_f * vNeutral + smile_w/2 * vSmile_L + smile_w/2 * vSmile_R;
        vec3 vNormal = neutral_f * nNeutral + smile_w/2 * nSmile_L + smile_w/2 * nSmile_R;

        mat4 translate = mat4(1.0, 0.0, 0.0, 0.0,
                              0.0, 1.0, 0.0, 0.0,
                              0.0, 0.0, 1.0, 0.0,
                              pos.x, pos.y, pos.z, 1.0);

        mat4 scale = mat4(size.x, 0.0, 0.0, 0.0,
                          0.0, size.y, 0.0, 0.0,
                          0.0, 0.0, size.z, 0.0,
                          0.0, 0.0, 0.0, 1.0);

        mat4 model = translate * scale * quaternion;

        vec3 n = normalize(cameraPosition - lookAtPosition);
        vec3 u = normalize(cross(upVector, n));
        vec3 v = cross(n, u);

        mat4 view = mat4(u.x, v.x, n.x, 0,
                         u.y, v.y, n.y, 0,
                         u.z, v.z, n.z, 0,
                         dot(-u, cameraPosition), dot(-v, cameraPosition), dot(-n, cameraPosition), 1);

        mat4 modelView = view * model;

        float p11 = ((2.0*near)/(right-left));
        float p31 = ((right+left)/(right-left));
        float p22 = ((2.0*near)/(top-bottom));
        float p32 = ((top+bottom)/(top-bottom));
        float p33 = -((far+near)/(far-near));
        float p43 = -((2.0*far*near)/(far-near));

        mat4 projection = mat4(p11, 0, 0, 0,
                               0, p22, 0, 0,
                               p31, p32, p33, -1,
                               0, 0, p43, 0);

        //lighting calculation
        vec4 vertexInEye = modelView * vec4(vPosition, 1.0);
        vec4 lightInEye = view * lightPosition;
        vec4 normalInEye = normalize(modelView * vec4(vNormal, 0.0));

        lPos = lightInEye.xyz;
        vPos = vertexInEye.xyz;
        vNorm = normalInEye.xyz;

        gl_Position = projection * modelView * vec4(vPosition, 1.0);
    }

Also, my fragment shader looks something like this (I only added new material settings compared to before):

    #version 400 core

    uniform vec4 lightColor;
    uniform vec4 diffuseColor;

    in vec3 lPos;
    in vec3 vPos;
    in vec3 vNorm;

    void main(){
        //copper like material light settings
        vec4 ambient = vec4(0.19125, 0.0735, 0.0225, 1.0);
        vec4 diff = vec4(0.7038, 0.27048, 0.0828, 1.0);
        vec4 spec = vec4(0.256777, 0.137622, 0.086014, 1.0);

        vec3 L = normalize(lPos - vPos);
        vec3 N = normalize(vNorm);
        vec3 Emissive = normalize(-vPos);
        vec3 R = reflect(-L, N);

        float dotProd = max(dot(R, Emissive), 0.0);
        vec4 specColor = lightColor * spec * pow(dotProd, 0.1 * 128);
        vec4 diffuse = lightColor * diff * (dot(N, L));

        gl_FragColor = ambient + diffuse + specColor;
    }

And finally, the animation I get with the updated code:

Updated Morph animation

As you can see, I still get missing triangles/faces in the morph target animation. Any suggestions and comments would be really helpful. Thanks again. :)

Update:

So, as suggested, I flipped the normals where dot(vSmile_R, nSmile_R) < 0, and got the following image result.

Also, instead of taking the normals from the OBJ files, I tried computing my own (face and vertex normals, roughly as sketched below), and still got the same result.

Result after the normal changes
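For reference, by computing my own normals I mean something along the lines of area-weighted averaging of face normals; a simplified sketch, where the indexed-triangle layout and GLM types are assumptions rather than my exact loader code:

    // Sketch: accumulate unnormalized face normals into each vertex, then
    // normalize. The cross product length is proportional to the triangle area,
    // so the result is an area-weighted average of the adjacent face normals.
    #include <glm/glm.hpp>
    #include <vector>

    std::vector<glm::vec3> computeVertexNormals(const std::vector<glm::vec3>& positions,
                                                const std::vector<unsigned>& indices)
    {
        std::vector<glm::vec3> normals(positions.size(), glm::vec3(0.0f));
        for (size_t i = 0; i + 2 < indices.size(); i += 3) {
            unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
            glm::vec3 faceN = glm::cross(positions[b] - positions[a],
                                         positions[c] - positions[a]);
            normals[a] += faceN;
            normals[b] += faceN;
            normals[c] += faceN;
        }
        for (glm::vec3& n : normals)
            if (glm::length(n) > 0.0f) n = glm::normalize(n);
        return normals;
    }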

2 answers

I once had a very similar problem. As you eventually noticed, your problem most likely lies in the mesh itself. In my case it was inconsistent mesh triangulation. Using the Triangulate Modifier in Blender solved the problem for me. Maybe you should try that too.
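If you would rather triangulate in your own loader than in Blender, applying one and the same rule (for example a simple triangle fan) to the neutral mesh and to every blendshape also keeps the triangulations consistent. A rough sketch, assuming each OBJ face arrives as a list of vertex indices (this is not your code, just an illustration):

    // Sketch: fan-triangulate one polygonal face (v0, v1, ..., vn-1) into the
    // triangles (v0, v1, v2), (v0, v2, v3), ... Using the identical rule for the
    // base mesh and all morph targets keeps their index ordering consistent.
    #include <vector>

    void fanTriangulate(const std::vector<unsigned>& faceIndices,
                        std::vector<unsigned>& triangleIndices)
    {
        for (size_t i = 1; i + 1 < faceIndices.size(); ++i) {
            triangleIndices.push_back(faceIndices[0]);
            triangleIndices.push_back(faceIndices[i]);
            triangleIndices.push_back(faceIndices[i + 1]);
        }
    }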


Not an attempt at an answer; I just need more formatting than is available in a comment.

I can't tell what data was actually exported from FaceShift and how it was fed into the application's custom ADTs; my crystal ball is currently busy predicting the results of the FIFA World Cup.

But in general, linear morphing is a very simple thing:

There is one vector I of position data for the initial mesh and an equally sized vector F of position data for the final mesh; their count and ordering must match so that the tessellation remains intact.

Given j ∈ [0, count), the corresponding vectors initial_j = I[j], final_j = F[j] and a morph factor λ ∈ [0, 1], the j-th (zero-based) current vector current_j(λ) is given by the formula

current_j(λ) = initial_j + λ·(final_j − initial_j) = (1 − λ)·initial_j + λ·final_j.
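On the CPU side that is nothing more than a per-vertex lerp; a minimal sketch of the formula above, with GLM and std::vector assumed:

    // current_j(lambda) = (1 - lambda)*I[j] + lambda*F[j], applied to every vertex.
    // Assumes I and F have the same size and ordering, as stated above.
    #include <glm/glm.hpp>
    #include <vector>

    std::vector<glm::vec3> morph(const std::vector<glm::vec3>& initial,
                                 const std::vector<glm::vec3>& final_,
                                 float lambda)
    {
        std::vector<glm::vec3> current(initial.size());
        for (size_t j = 0; j < initial.size(); ++j)
            current[j] = glm::mix(initial[j], final_[j], lambda);
        return current;
    }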


From this point of view, this

 vec3 vPosition = neutral_w * vNeutral + smile_w/2 * vSmile_L + smile_w/2 * vSmile_R; 

looks dubious at best.

As I said, my crystal ball is currently out of order, but the naming would suggest that, given the standard OpenGL frame of reference,

vSmile_L = vSmile_R * (-1,1,1),

with "*" denoting component-wise multiplication, and that, in turn, would mean that the x component of the morph cancels out in the addition above.
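Spelled out for the x component under that assumption: smile_w/2 · vSmile_L.x + smile_w/2 · vSmile_R.x = smile_w/2 · (−vSmile_R.x + vSmile_R.x) = 0, so only neutral_w · vNeutral.x would remain, and as smile_w approaches 1 the whole x coordinate would collapse towards 0.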

But apparently the face does not degenerate into a plane (a line from the projection's point of view), so the meaning of these attributes is unclear.

That is why I would like to look at the actual data, as mentioned in the comments.


Another thing, not related to the effect in question but to the shading calculation:

As stated in the answer to

Can OpenGL shader compilers optimize expressions on uniforms?,

a shader optimizer may well optimize purely uniform expressions, such as the M/V/P calculations done with

    uniform float left;
    uniform float right;
    uniform float top;
    uniform float bottom;
    uniform float near;
    uniform float far;

    uniform vec3 cameraPosition;
    uniform vec3 lookAtPosition;
    uniform vec3 upVector;

    /* */

    uniform vec3 pos;
    uniform vec3 size;
    uniform mat4 quaternion;

but I find it rather optimistic to rely on such presumed optimizations.

If it is not optimized accordingly, this means the work is done once per vertex per frame; for a human face with a LOD of 1000 vertices at 60 Hz, that is 60,000 executions per second on the GPU instead of once per frame on the CPU.

No current CPU will give up the ghost if these calculations are put on its shoulders once per frame, so passing the usual trinity of M/V/P matrices as uniforms seems appropriate, instead of constructing those matrices in the shader.

For reusing shader-like code on the CPU side, GLM provides a very GLSL-ish way of doing GL-related math in C++.
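For example, a sketch of building the same model / view / projection matrices once per frame with GLM and uploading them as uniforms; the uniform names "model", "view" and "projection" are hypothetical and would replace the per-vertex matrix construction in the shader above (a GL loader header providing GLuint etc. is assumed):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>   // glm::translate, glm::scale, glm::lookAt, glm::frustum
    #include <glm/gtc/type_ptr.hpp>           // glm::value_ptr

    void uploadMatrices(GLuint prog,
                        const glm::vec3& pos, const glm::vec3& size, const glm::mat4& quaternion,
                        const glm::vec3& cameraPosition, const glm::vec3& lookAtPosition,
                        const glm::vec3& upVector,
                        float left, float right, float bottom, float top,
                        float nearPlane, float farPlane)
    {
        // Same composition as in the shader: translate * scale * rotation.
        glm::mat4 model = glm::translate(glm::mat4(1.0f), pos)
                        * glm::scale(glm::mat4(1.0f), size)
                        * quaternion;
        glm::mat4 view = glm::lookAt(cameraPosition, lookAtPosition, upVector);
        glm::mat4 projection = glm::frustum(left, right, bottom, top, nearPlane, farPlane);

        glUniformMatrix4fv(glGetUniformLocation(prog, "model"), 1, GL_FALSE, glm::value_ptr(model));
        glUniformMatrix4fv(glGetUniformLocation(prog, "view"), 1, GL_FALSE, glm::value_ptr(view));
        glUniformMatrix4fv(glGetUniformLocation(prog, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
    }

The vertex shader would then declare three mat4 uniforms and reduce the transform to gl_Position = projection * view * model * vec4(vPosition, 1.0).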

