OpenGL - low performance at high scale values

I observe a strange phenomenon with my OpenGL program, which is written in C#/OpenTK using the core profile. When displaying Mandelbrot data from a height map of roughly 1M vertices, performance differs depending on the scale value of my view matrix (this is intentional, so yes, I need the scale). The data is rendered from a VBO, and the render pass includes lighting and shadow mapping.

My only guess is that the shaders somehow produce "errors" at low scale values and some error handling kicks in. Any clues for me?

Examples:

[Screenshots: Example 1, Example 2]

1 answer

This is not unusual. At low scale values your mesh does not cover much of the screen, so it does not produce many fragments. At high scale values the entire screen is covered by your mesh and, even worse, overdraw becomes a huge factor.

In that scenario you are fragment bound, so reduce the complexity of your fragment shader and also add a depth pre-pass (Z pre-pass) to cut down the redundant shading caused by overdraw.
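
A minimal depth pre-pass sketch in C#/OpenTK could look like the following. The handles (depthOnlyProgram, shadingProgram, terrainVao, indexCount) are hypothetical placeholders for the asker's existing objects, not part of the original post:

```csharp
using OpenTK.Graphics.OpenGL4;

// Sketch of a two-pass frame with a depth pre-pass (assumed object names).
void RenderFrame(int depthOnlyProgram, int shadingProgram, int terrainVao, int indexCount)
{
    GL.Enable(EnableCap.DepthTest);
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    // Pass 1: lay down depth only. Color writes are disabled and the
    // program has a trivial (or empty) fragment shader, so this pass is cheap.
    GL.ColorMask(false, false, false, false);
    GL.DepthMask(true);
    GL.DepthFunc(DepthFunction.Less);
    GL.UseProgram(depthOnlyProgram);
    GL.BindVertexArray(terrainVao);
    GL.DrawElements(PrimitiveType.Triangles, indexCount, DrawElementsType.UnsignedInt, 0);

    // Pass 2: full lighting/shadow shading. Lequal lets fragments that match
    // the pre-pass depth through, while the early depth test rejects everything
    // hidden behind them, so occluded fragments never run the expensive shader.
    GL.ColorMask(true, true, true, true);
    GL.DepthMask(false);                 // depth buffer is already complete
    GL.DepthFunc(DepthFunction.Lequal);
    GL.UseProgram(shadingProgram);
    GL.DrawElements(PrimitiveType.Triangles, indexCount, DrawElementsType.UnsignedInt, 0);

    // Restore defaults for any later passes.
    GL.DepthMask(true);
    GL.DepthFunc(DepthFunction.Less);
}
```

Note that both programs must compute exactly the same clip-space position (same vertex transform, ideally with `invariant gl_Position;` in the vertex shaders), otherwise the Lequal test in the second pass will not match the depth written by the first.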


Source: https://habr.com/ru/post/954151/
