So, I want to draw a lot of squares (or even cubes), and I came across this beautiful thing called a geometry shader.
I roughly understand how it works now, and I could probably use it to draw a cube for each vertex in the vertex buffer, but I'm not sure I'd be doing it right. The geometry shader sits between the vertex shader and the fragment shader, so it receives vertices that are already in screen space. But I need to apply the transformations in world space.
So, is it reasonable for my vertex shader to simply pass its inputs through to the geometry shader, and have the geometry shader multiply by the model-view-projection matrix after generating the primitives? That shouldn't be a problem on a unified shader architecture, but the redundant shader work still makes me queasy.
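To make the question concrete, here is a minimal sketch of the pass-through approach described above, in GLSL. The uniform names (`uModelViewProj`, `uHalfSize`) and the `#version 330` target are assumptions for illustration; the geometry shader expands each input point into a square and applies the matrix per emitted vertex:

```glsl
// --- Vertex shader: pass the untransformed position straight through ---
#version 330 core
layout(location = 0) in vec3 inPosition;

void main() {
    // No matrix multiply here; position is still in model/world space.
    gl_Position = vec4(inPosition, 1.0);
}
```

```glsl
// --- Geometry shader: expand each point into a square, then transform ---
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 uModelViewProj; // hypothetical uniform names
uniform float uHalfSize;     // half the square's edge length

void main() {
    vec4 center = gl_in[0].gl_Position;
    // Corner offsets in the XY plane, ordered for a triangle strip.
    vec2 offsets[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                             vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i) {
        vec4 corner = center + vec4(offsets[i] * uHalfSize, 0.0, 0.0);
        // The model-view-projection multiply happens here, once per
        // emitted vertex, instead of in the vertex shader.
        gl_Position = uModelViewProj * corner;
        EmitVertex();
    }
    EndPrimitive();
}
```

Note that this does four matrix multiplies per input point in the geometry shader, rather than one in the vertex shader, which is exactly the redundancy the question is about.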
Are there any alternatives? Or is this really the "right" way to do it?