I'm just learning OpenGL, and I understand that for more complex shader effects there are two strategies you can use to implement them. The first is to write one complex vertex and fragment shader that takes many different uniform variables from the main program and branches on them in the shader. For example, if I want to make pixels blue in one context and green in another, I could do it in GLSL:
uniform int whichColor;

void main() {
    if (whichColor == 1) {
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
    } else {
        gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
    }
}
and pass in a different int for whichColor from my C++ drawing loop. Alternatively, I could define two separate shader programs, one of which sets gl_FragColor to blue and the other to green, and just load one or the other when it is time for me to draw a specific object in my OpenGL scene.
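To make the comparison concrete, the C++ side of the two strategies might look roughly like this. This is only a sketch: it assumes a valid OpenGL context already exists, and the program handles (colorProgram, greenProgram, blueProgram) and draw helpers (drawGreenObjects, drawBlueObjects) are hypothetical names, not code from the question:

```cpp
// Sketch only: assumes an OpenGL context and already compiled/linked programs.

// Strategy 1: one program, behavior switched per draw via a uniform.
glUseProgram(colorProgram);                                    // hypothetical handle
GLint loc = glGetUniformLocation(colorProgram, "whichColor");
glUniform1i(loc, 1);      // shader branch: green
drawGreenObjects();       // hypothetical helper
glUniform1i(loc, 0);      // shader branch: blue
drawBlueObjects();        // hypothetical helper

// Strategy 2: two programs, switched by binding a different one.
glUseProgram(greenProgram);                                    // hypothetical handle
drawGreenObjects();
glUseProgram(blueProgram);                                     // hypothetical handle
drawBlueObjects();
```

Note that in both cases the state change (glUniform1i or glUseProgram) happens once per draw call on the CPU side, not once per pixel.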
I cannot find any information on which of these strategies is better. My intuition is that the first strategy, which puts all the branching logic on the graphics card, would perform better, but the decision of which color to use is made once per draw call rather than per pixel, so it may not actually benefit from the GPU's parallelism. I don't really understand how shader programs execute on a graphics card, or what the costs of linking programs and passing uniform variables are, so I'm not sure my intuition is actually true here.