OpenGL Compute Shader - Strange Results

I am trying to implement a multi-pass compute shader for image processing. Each pass has an input image and an output image; the input image of each pass is the output image of the previous one.

This is the first time I have used compute shaders in OpenGL, so there may be problems with my setup. I use an OpenCV cv::Mat as the container for read/copy operations.

Some parts of the code are unrelated to the problem and are not included, such as image loading and context initialization.

Initialization:

    // texture init
    glGenTextures(1, &feedbackTexture_);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, feedbackTexture_);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);

    glGenTextures(1, &resultTexture_);
    glActiveTexture(GL_TEXTURE0 + 1);
    glBindTexture(GL_TEXTURE_2D, resultTexture_);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);

    // shader init
    computeShaderID = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(computeShaderID, 1, &computeShaderSourcePtr, &computeShaderLength);
    glCompileShader(computeShaderID);
    programID = glCreateProgram();
    glAttachShader(programID, computeShaderID);
    glLinkProgram(programID);
    glDeleteShader(computeShaderID);

Shader Code:

    // shader code (simple invert)
    #version 430
    layout (local_size_x = 1, local_size_y = 1) in;
    layout (location = 0, binding = 0, /*format*/ rgba32f) uniform readonly image2D inImage;
    layout (location = 1, binding = 1, /*format*/ rgba32f) uniform writeonly image2D resultImage;
    uniform writeonly image2D image;

    void main()
    {
        // Acquire the coordinates of the texel we are to process.
        ivec2 texelCoords = ivec2(gl_GlobalInvocationID.xy);
        // Read the pixel from the first texture.
        vec4 pixel = imageLoad(inImage, texelCoords);
        pixel.rgb = 1. - pixel.rgb;
        imageStore(resultImage, texelCoords, pixel);
    }

Using:

    cv::Mat image = loadImage().clone();
    cv::Mat result(image.rows, image.cols, image.type());

    // These get the appropriate enums used by glTexImage2D
    GLenum internalformat = GLUtils::getMatOpenGLImageFormat(image);
    GLenum format = GLUtils::getMatOpenGLFormat(image);
    GLenum type = GLUtils::getMatOpenGLType(image);

    int dispatchX = 1;
    int dispatchY = 1;

    for (int i = 0; i < shaderPasses_.size(); ++i)
    {
        // Update textures
        glBindTexture(GL_TEXTURE_2D, feedbackTexture_);
        glTexImage2D(GL_TEXTURE_2D, 0, internalformat, result.cols, result.rows, 0, format, type, result.data);
        glBindTexture(GL_TEXTURE_2D, resultTexture_);
        glTexImage2D(GL_TEXTURE_2D, 0, internalformat, image.cols, image.rows, 0, format, type, 0);
        glBindTexture(GL_TEXTURE_2D, 0);

        glClear(GL_COLOR_BUFFER_BIT);

        std::shared_ptr<Shader> shaderPtr = shaderPasses_[i];

        // Enable shader
        shaderPtr->enable();
        {
            // Bind textures
            // location = 0, binding = 0
            glUniform1i(0, 0);
            glBindImageTexture(0, feedbackTexture_, 0, GL_FALSE, 0, GL_READ_ONLY, internalformat);
            // location = 1, binding = 1
            glUniform1i(1, 1);
            glBindImageTexture(1, resultTexture_, 0, GL_FALSE, 0, GL_WRITE_ONLY, internalformat);

            // Dispatch compute
            glDispatchCompute((GLuint)image.cols / dispatchX, (GLuint)image.rows / dispatchY, 1);

            // Barrier will synchronize
            glMemoryBarrier(GL_TEXTURE_UPDATE_BARRIER_BIT);
        }
        // Disable shader
        shaderPtr->disable();

        // Here result is now the result of the last pass.
    }
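A side note on the dispatch call above: with dispatchX = dispatchY = 1 the integer division is harmless, but if the workgroup size (local_size_x/y) is ever increased to improve throughput, plain integer division drops the edge texels whenever the image dimensions are not an exact multiple of the group size. A minimal sketch of the usual ceiling-division helper (the name groupCount is my own, not from the code above):

```cpp
#include <cstdint>

// Round the number of workgroups up so the dispatch grid covers the whole
// image, even when the dimension is not an exact multiple of the workgroup
// size. The extra invocations are then guarded in the shader with a bounds
// check on gl_GlobalInvocationID.
inline std::uint32_t groupCount(std::uint32_t pixels, std::uint32_t localSize) {
    return (pixels + localSize - 1) / localSize;
}
```

With local_size_x = 16, a 100-pixel-wide image would then dispatch groupCount(100, 16) = 7 groups instead of the 6 that truncating division produces.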

Sometimes I get strange results (corrupted textures, partially rendered textures), and the first pixel (at 0,0) is sometimes not written. Is my setup correct, or is something missing? Also, this texture upload method seems really slow; is there an alternative that would improve performance?

Edit 1: Changed the memory barrier flag.

+5
3 answers

I managed to solve this problem completely!

The problem is with the cv::Mat constructor. The following line:

    cv::Mat result(image.rows, image.cols, image.type());

allocates the data but does NOT initialize it, which is why I got those strange results: the texture was uploaded from garbage memory.

Using any function that both allocates AND initializes the data solves the problem:

    cv::Mat::zeros
    cv::Mat::ones
    cv::Mat::create
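The underlying pitfall is easy to reproduce in plain C++ (a sketch by analogy, not OpenCV code): raw allocation leaves the buffer indeterminate, just like the bare cv::Mat(rows, cols, type) constructor, while value-initialization defines every element, like cv::Mat::zeros:

```cpp
#include <cstddef>
#include <vector>

// Analogue of cv::Mat::zeros: allocate AND define every element.
// By contrast, `new float[rows * cols]` (no initializer) is the analogue of
// the bare cv::Mat constructor: memory is allocated, but its contents are
// indeterminate, so uploading it to a texture shows garbage.
std::vector<float> makeZeroedImage(std::size_t rows, std::size_t cols) {
    return std::vector<float>(rows * cols, 0.0f); // every "texel" is 0
}
```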
+3
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

This is the wrong barrier. The barrier specifies how you intend to access the data after the incoherent writes. If you are going to read the texture back with glGetTexImage, you should use GL_TEXTURE_UPDATE_BARRIER_BIT.

+5

I am not 100% sure whether this will fix your problem, and I don't see anything wrong with the flags you use to initialize the texture settings. But when I compared your code with my project, the order of the API calls caught my attention. In your source you have this order:

    glGenTextures(...);   // Generate
    glActiveTexture(...); // Set Active
    glBindTexture(...);   // Bind Texture
    glTexParameteri(...); // Wrap Setting
    glTexParameteri(...); // Wrap Setting
    glTexParameteri(...); // Mipmap Setting
    glTexParameteri(...); // Mipmap Setting
    glBindTexture(...);   // Bind / Unbind

and you repeat this for each texture, just passing a different texture variable and incrementing the texture unit.

I do not know if it will matter, but in my engine I set things up in the following order; try doing it this way and see whether it makes a difference:

    glGenTextures(...);   // Generate
    glBindTexture(...);   // Bind Texture
    glTexParameteri(...); // Wrap Setting
    glTexParameteri(...); // Wrap Setting
    glTexParameteri(...); // Mipmap Setting
    glTexParameteri(...); // Mipmap Setting
    glActiveTexture(...); // Set Active
    glBindTexture(...);   // Bind / Unbind

I do not use compute shaders, but my engine has several classes that manage different things. I have an asset storage class that saves all assets, including image textures, to an in-memory database. I have a ShaderManager class for managing the different shaders (currently only vertex and fragment shaders): it reads and compiles the shader files, creates the shader programs, sets attributes and uniforms, links the programs, and runs the shaders. I also use a batch process, with a Batch class and a batch manager class, to render various types of primitives. So when I walked through my solution and followed its logic, this is what I saw in my code.

It was the AssetStorage class that set up the texture properties, making these API calls in this order inside its add() function for adding textures to memory:

    glGenTextures(...);
    glBindTexture(...);
    glTexParameteri(...);
    glTexParameteri(...);
    glTexParameteri(...);
    glTexParameteri(...);

Then AssetStorage also called:

    glPixelStorei(...);
    glTexImage2D(...);

Finally, the texture-adding function in AssetStorage returns a user-defined TextureInfo structure.

When I checked my Batch class, I found that in its render() function it calls a ShaderManager function to set the uniform for using textures, then calls a ShaderManager function to set the texture, and sets it once more if the texture contains an alpha channel. It is in the ShaderManager's setTexture() function that glActiveTexture() and glBindTexture() are finally called.

So, in brief: try moving your glActiveTexture() call between the last glTexParameteri() and the last glBindTexture() call for both textures. I think it should also come after glPixelStorei() and glTexImage2D(), because you want to make the texture active in the same state in which you are going to render with it.

As I mentioned earlier, I'm not 100% sure whether this is the root cause of your problem, but I think it's worth a try to see whether it helps. Please let me know what happens if you try it; I would like to know myself whether the ordering of these API calls has any effect. I would try it in my own solution, but I don't want to break my classes or project, since it currently works correctly.

As a side note, the only questionable thing about your texture-setting flags is in the wrap/repeat section. You could try GL_REPEAT in the first two glTexParameteri() calls instead of GL_CLAMP_TO_EDGE and see what you get. You should not need to worry about the mipmap settings in the last two glTexParameteri() calls, because from the settings you use it looks like you are not using mipmaps.
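For reference, the two wrap modes behave quite differently at the edges. A CPU-side sketch of their effect on a normalized texture coordinate (simplified: the real GL_CLAMP_TO_EDGE rule clamps to the centers of the edge texels, which this ignores):

```cpp
#include <algorithm>
#include <cmath>

// GL_REPEAT: only the fractional part of the coordinate is used,
// so the texture tiles endlessly.
float wrapRepeat(float u) { return u - std::floor(u); }

// GL_CLAMP_TO_EDGE: the coordinate is clamped into [0, 1],
// so sampling outside the image returns the edge texel.
float wrapClampToEdge(float u) { return std::clamp(u, 0.0f, 1.0f); }
```

Note that this only matters for sampler-based reads; the imageLoad() calls in the shader above take integer texel coordinates and ignore the wrap mode entirely.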

+3

Source: https://habr.com/ru/post/1258947/
