Should I use a vertex shader in this situation?

I would like to create a motion blur effect by rendering and additively mixing moving objects at several points on their path over the frame.

I thought the position calculation could be done in the vertex shader. It seems to me, however, that I may need a geometry shader if I want to avoid resubmitting the geometry for each subframe render.

What is my best course of action? I am deciding between:

  • Manually assembling the vertex data for each subframe on the CPU and transferring it to the GPU every time (with no vertex program at all)
  • Sending the geometry along with per-object velocity values. I can compute the intermediate position in the vertex shader, although I'm not sure how to indicate that a given velocity value belongs to a given group of primitives. I would also need to submit the same vertices once per subframe render, because a vertex shader cannot create new vertices.
  • Using a geometry shader to create all the geometry for all the subframes. That should let me produce every subframe without transferring data back and forth during the rendering process.

The balance I want to strike is minimal redundant data transfer while supporting as much hardware as possible. It seems I should store the geometry in a vertex buffer object and pass a few uniforms to feed the velocity data to the vertex shader for each render. Does that work? Also, since the VBO contents persist, for best performance I should only go in and update the geometry data when it actually changes, right?

Another potential problem I'm unsure about: I want to produce accurate intermediate positions by interpolating the translation and rotation that the rigid objects undergo over the frame, not just interpolating the resulting vertex positions. The difference is that a rotating object should leave a curved trail.

Is there any way to avoid a separate draw call per dynamic rigid object? Maybe I could use a generic vertex attribute to send the velocity? That would be somewhat redundant, since an object with 100 vertices would carry the same velocity data 100 times, but at least my vertex shader could access the data that way.

It seems the data needed to do these transformations on the GPU would not be excessive: I would need to send the velocity vector, the angular-velocity scalar, and the center-of-mass vector as vertex attributes. That sounds like a waste of bandwidth, but the same data gets reused for a potentially large number of "samples" (subframe renders).
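A quick back-of-envelope calculation of that bandwidth trade-off, assuming 4-byte floats and the attribute layout described above (a vec3 velocity, a scalar angular velocity, and a vec3 center of mass, i.e. 7 floats of motion data; the function names here are illustrative only):

```c
#include <assert.h>

enum { FLOAT_BYTES = 4, MOTION_FLOATS = 3 + 1 + 3 };

/* Motion data replicated as per-vertex attributes: every vertex of the
 * object carries its own (identical) copy. */
unsigned motion_bytes_as_attributes(unsigned vertex_count)
{
    return vertex_count * MOTION_FLOATS * FLOAT_BYTES;
}

/* Motion data sent as uniforms: one copy per object per frame, regardless
 * of vertex count or how many subframes reuse it. */
unsigned motion_bytes_as_uniforms(void)
{
    return MOTION_FLOATS * FLOAT_BYTES;
}
```

For a 100-vertex object that is 2800 bytes of replicated attribute data versus 28 bytes of uniforms, though the attribute copy is uploaded once and then reused across all subframe renders.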

I worked with OpenGL immediate mode for a very long time, but this time I want to do things right.

UPDATE: See the extended comment discussion for the direction I took. I'm now convinced that a few samples will not give a good result because of the "strobing" effect: at some speeds I would need more samples than performance allows. So I need real blur: accumulating blurred subframes. Rendering sharp subframes and then blurring the result would still leave artifacts.

2 answers

I would like to create a motion blur effect by rendering and additively mixing moving objects at several points on their path over the frame.

This is definitely one way to do motion blur. Nowadays, though, motion blur is usually implemented as a post-process: a vector (velocity-based) blur in the fragment shader. See http://www.blender.org/development/release-logs/blender-242/vector-blur/ for an explanation of how this works. For real time, the technique can be reproduced with post-processing shaders.
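A toy 1D sketch of the post-process idea, on the CPU for clarity (the function name is made up; a real implementation runs this per pixel in a fragment shader, reading a rendered velocity buffer): each output pixel averages several taps of the source image spaced along that pixel's screen-space velocity.

```c
#include <assert.h>

/* Average `taps` samples of a 1D grayscale image along the pixel's
 * screen-space velocity, centered on the pixel. Taps are clamped at the
 * image borders, as a texture sampler with clamp-to-edge would do. */
double vector_blur_sample(const double *image, int width,
                          int x, double velocity, int taps)
{
    if (taps < 2)
        return image[x];
    double sum = 0.0;
    for (int i = 0; i < taps; ++i) {
        /* distribute taps along the velocity vector: offsets span
         * [-velocity/2, +velocity/2] */
        double offset = velocity * ((double)i / (taps - 1) - 0.5);
        int sx = x + (int)(offset + (offset >= 0 ? 0.5 : -0.5));
        if (sx < 0) sx = 0;
        if (sx >= width) sx = width - 1;
        sum += image[sx];
    }
    return sum / taps;
}
```

A stationary pixel (zero velocity) is returned unchanged, while a moving one is smeared across its neighbors, which is exactly the streak a subframe-accumulation approach produces, at a fraction of the cost.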

  • Software subframes - the CPU generates each subframe's geometry. Consider this the baseline case.
  • Vertex shader - you can do this, but don't try to send per-geometry velocity; send per-vertex velocity instead:

    Upload the geometry to a VBO with each vertex's current velocity and acceleration as vertex attributes. Render the VBO once per subframe, passing the subframe's time offset as a uniform value.

    The vertex shader then performs the position computation based on the uniform time value.

  • Geometry shader - if you went with this, you could implement it much like #2, except that the loop and the time variable would live in the shader, which would offload more work to the GPU.
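The per-vertex math in option #2 can be sketched on the CPU like this (a hypothetical mirror of what the vertex shader would compute, not shader code itself): the position, velocity, and acceleration arrive as attributes, the subframe time offset `t` arrives as a uniform, and the output is the second-order position estimate.

```c
#include <assert.h>

typedef struct { float x, y, z; } Vec3;

/* What the vertex shader computes per vertex and per subframe:
 * p(t) = p0 + v*t + 0.5*a*t^2, with p0/v/a as vertex attributes and
 * t supplied once per subframe as a uniform. */
Vec3 subframe_position(Vec3 p0, Vec3 v, Vec3 a, float t)
{
    Vec3 p;
    p.x = p0.x + v.x * t + 0.5f * a.x * t * t;
    p.y = p0.y + v.y * t + 0.5f * a.y * t * t;
    p.z = p0.z + v.z * t + 0.5f * a.z * t * t;
    return p;
}
```

Dropping the acceleration term gives the first-order (linear) interpolation discussed below; the structure of the computation is otherwise identical.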

You also note:

  • Rendering everything with VBOs - if you use VBOs/display lists like this, you are essentially doing option #1 with strong hardware acceleration.
  • Interpolation accuracy - you probably should not chase exact interpolation. Unless objects move and rotate very fast, linear (first-order) velocity interpolation is probably fine. You can improve on it by including acceleration (second order), but higher orders, or a more accurate physical model, may not be worth the effort or cost.
  • Which one is worth it - this is really the crux of the problem. Depending on your application, hardware, and other details, any of these candidate solutions may come out ahead of the others. If performance matters, you should probably prototype each and benchmark on the target devices to see which works best. (The sad reality is that you cannot easily run those benchmarks until you have done most of the work.)

Source: https://habr.com/ru/post/1389770/

