Creating an OpenGL Texture from an SDL2 Surface - Weird Pixel Values

I am trying to use SDL2 to load textures for rendering Wavefront OBJ models in OpenGL (I am currently testing with the fixed-function pipeline, but ultimately plan to switch to shaders). The problem is that the loaded texture, applied to a quad (and to the model, which uses a small part in the lower right corner of the texture), looks like this:

[Image: a sample of the effect (source: image-upload.de)]

This is the texture that I used

The image loads fine and looks completely normal when displayed with the SDL functions, so the conversion to an OpenGL texture is probably what goes wrong. Note that I have alpha blending enabled and the texture is still fully opaque - so the values are not completely random, and probably not uninitialized memory either. This is my surface conversion code (pieced together from various tutorials and questions on this site):

GLuint glMakeTexture(bool mipmap = false, int request_size = 0) { // Only works on 32 bit surfaces
    GLuint texture = 0;
    if ((bool)_surface) {
        int w, h;
        if (request_size) { // NPOT and rectangular textures have been widely supported for at least a decade now; you should never need this...
            w = h = request_size;
            if (w < _surface->w || h < _surface->h) return 0; // No can do.
        } else {
            w = _surface->w;
            h = _surface->h;
        }
        SDL_LockSurface(&*_surface);
        std::cout << "Bits: " << (int)_surface->format->BytesPerPixel << std::endl;
        Uint8 *temp = (Uint8*)malloc(w*h*sizeof(Uint32)); // Yes, I know it's 4...
        if (!temp) return 0;
        // Optimized code
        /*for (int y = 0; y < h; y++) { // Pitch is given in bytes, so we need to cast to 8 bit here!
            memcpy(temp+y*w*sizeof(Uint32), (Uint8*)_surface->pixels+y*_surface->pitch, _surface->w*sizeof(Uint32));
            if (w > _surface->w) memset(temp+y*w*sizeof(Uint32)+_surface->w, 0, (w-_surface->w)*sizeof(Uint32));
        }
        for (int y = _surface->h; y < h; y++) memset(temp+y*w*sizeof(Uint32), 0, w*sizeof(Uint32));
        GLenum format = (_surface->format->Rmask==0xFF) ? GL_RGBA : GL_BGRA;*/
        // Naive code for testing
        for (int y = 0; y < _surface->h; y++)
            for (int x = 0; x < _surface->w; x++) {
                int mempos = (x+y*w)*4;
                SDL_Color pcol = get_pixel(x,y);
                temp[mempos]   = pcol.r;
                temp[mempos+1] = pcol.g;
                temp[mempos+2] = pcol.b;
                temp[mempos+3] = pcol.a;
            }
        GLenum format = GL_RGBA;
        SDL_UnlockSurface(&*_surface);
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        if (mipmap) glTexParameteri(texture, GL_GENERATE_MIPMAP, GL_TRUE);
        glTexImage2D(GL_TEXTURE_2D, 0, format, w, h, 0, format, GL_UNSIGNED_BYTE, temp);
        if (mipmap) glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        else        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        free(temp); // Always clean up...
    }
    return texture;
}

UPDATE: _surface is actually a std::shared_ptr to the SDL_Surface, hence the &* when (un)locking it.
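
For reference, a minimal sketch of how such a wrapper is typically set up (this is an assumption, not the asker's actual code; the function and path are placeholders):

#include <SDL2/SDL.h>
#include <memory>

// SDL_FreeSurface is registered as the deleter, so the surface is freed automatically.
std::shared_ptr<SDL_Surface> _surface;

void loadSurface(const char *path) {
    _surface = std::shared_ptr<SDL_Surface>(SDL_LoadBMP(path), SDL_FreeSurface);
}

// Later on, &*_surface (or _surface.get()) turns the smart pointer back into the
// raw SDL_Surface* that SDL_LockSurface() and friends expect.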

By the way, SDL reports the surface as 32-bit RGBA on my machine; I already checked that.
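
One way such a check can be done (a sketch, not the asker's code; print_format is a placeholder name):

#include <SDL2/SDL.h>
#include <iostream>

// Dump the relevant pixel-format fields of a surface.
void print_format(const SDL_Surface *surf) {
    const SDL_PixelFormat *fmt = surf->format;
    std::cout << SDL_GetPixelFormatName(fmt->format)          // e.g. "SDL_PIXELFORMAT_ABGR8888"
              << "  BytesPerPixel=" << int(fmt->BytesPerPixel)
              << std::hex
              << "  Rmask=0x" << fmt->Rmask << "  Amask=0x" << fmt->Amask
              << std::dec << std::endl;
}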

Here is the code that binds the texture and draws a quad:

glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glBindTexture(GL_TEXTURE_2D, _texture[MAP_KD]);

static bool once = true;
if (once) {
    int tex;
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &tex);
    bool valid = glIsTexture(tex);
    std::cout << tex << " " << valid << std::endl;
    once = false;
}

glBegin(GL_TRIANGLE_STRIP);
//glColor3f(1.f,1.f,1.f);
glNormal3f(0,1,0);
glTexCoord2f(0.f,0.f); glVertex3f(0,0,0);
glTexCoord2f(0.f,1.f); glVertex3f(0,0,1);
glTexCoord2f(1.f,0.f); glVertex3f(1,0,0);
glTexCoord2f(1.f,1.f); glVertex3f(1,0,1);
glEnd();

The axe model is drawn later from an index list; that code is too long to share here (and besides, it works fine except for the texture).

I also tried the naive method found in many tutorials of passing _surface->pixels directly to glTexImage2D(), but that does not help either (and I heard it is wrong anyway, because pitch != width * BytesPerPixel in general). The commented-out "optimized" code produces exactly the same result, by the way (as expected); I wrote the naive version below it for easier testing. Setting all pixels to a specific color or creating a partially transparent texture works as expected, so I assume OpenGL does load the values from temp correctly. It is probably my understanding of the memory layout of SDL2 surfaces that has gone wrong.
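
For completeness, here is a minimal sketch of the row-by-row copy that the commented-out "optimized" path performs, honoring the pitch instead of assuming width * BytesPerPixel (the function name is a placeholder, and a 32-bit surface is assumed):

#include <SDL2/SDL.h>
#include <cstring>
#include <vector>

// Copy a 32-bit SDL_Surface into a tightly packed buffer, one row at a time.
std::vector<Uint8> tightly_packed_copy(const SDL_Surface *s) {
    std::vector<Uint8> tight(s->w * s->h * 4);
    const Uint8 *src = static_cast<const Uint8*>(s->pixels);
    for (int y = 0; y < s->h; ++y)
        std::memcpy(&tight[y * s->w * 4],   // destination row: exactly w*4 bytes
                    src + y * s->pitch,     // source rows are 'pitch' bytes apart
                    s->w * 4);              // copy only the bytes that hold pixels
    return tight;
}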

FINAL EDIT: The solution (based on Peter Clark's answer below; GL_UNPACK_ROW_LENGTH is the key):

GLuint glTexture(bool mipmap = false) {
    GLuint texture = 0;
    if ((bool)_surface) {
        GLenum texture_format, internal_format, tex_type;
        if (_surface->format->BytesPerPixel == 4) {
            if (_surface->format->Rmask == 0x000000ff) {
                texture_format = GL_RGBA;
                tex_type = GL_UNSIGNED_INT_8_8_8_8_REV;
            } else {
                texture_format = GL_BGRA;
                tex_type = GL_UNSIGNED_INT_8_8_8_8;
            }
            internal_format = GL_RGBA8;
        } else {
            if (_surface->format->Rmask == 0x000000ff) {
                texture_format = GL_RGB;
                tex_type = GL_UNSIGNED_BYTE;
            } else {
                texture_format = GL_BGR;
                tex_type = GL_UNSIGNED_BYTE;
            }
            internal_format = GL_RGB8;
        }
        int alignment = 8;
        while (_surface->pitch%alignment) alignment >>= 1; // x%1==0 for any x
        glPixelStorei(GL_UNPACK_ALIGNMENT, alignment);
        int expected_pitch = (_surface->w*_surface->format->BytesPerPixel+alignment-1)/alignment*alignment;
        if (_surface->pitch-expected_pitch >= alignment) // Alignment alone won't solve it now
            glPixelStorei(GL_UNPACK_ROW_LENGTH, _surface->pitch/_surface->format->BytesPerPixel);
        else
            glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, internal_format, _surface->w, _surface->h, 0, texture_format, tex_type, _surface->pixels);
        if (mipmap) {
            glGenerateMipmap(GL_TEXTURE_2D);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        } else {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        }
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
        glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    }
    return texture;
}
1 answer

Pixel Storage Alignment

You must tell OpenGL the alignment of the image with glPixelStorei(GL_UNPACK_ALIGNMENT, [1,2,4,8]). The value is the largest power of 2, up to 8, that divides the pitch. If that is not one of the accepted values, you may additionally need to set GL_UNPACK_ROW_LENGTH - see this answer for more information and advice on the topic. One thing to note: GL_UNPACK_ROW_LENGTH is the row length in pixels, whereas SDL_Surface::pitch is the row length in bytes. You must also make sure that internal_format, format, and pixel_type are set to match what the SDL_Surface contains. Here is another resource on the topic.
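
A compact sketch of both pieces together (assuming a surface whose pitch is a whole number of pixels; sdl_surface stands in for your loaded surface):

// Pick the largest power of two (up to 8) that divides the pitch...
int align = 8;
while (sdl_surface->pitch % align) align >>= 1;   // ends at 1 in the worst case
glPixelStorei(GL_UNPACK_ALIGNMENT, align);

// ...and, if rows still contain extra padding, tell GL the row length in *pixels*
// (SDL_Surface::pitch is in *bytes*, hence the division):
glPixelStorei(GL_UNPACK_ROW_LENGTH, sdl_surface->pitch / sdl_surface->format->BytesPerPixel);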

Texture Completeness

You are also not creating a complete texture when not using mipmaps. To create a "complete" texture (one that is ready to be read from or written to) without mipmaps, you must specify that the maximum mipmap level is 0 (the base image) using glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0), since the default is 1000.
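
In code, that amounts to the following (the same calls appear in the sketch under "Possible Solution" below):

// No mipmaps: restrict sampling to the base level so the texture is "complete".
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);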

A side note: you are using glTexParameteri(texture, GL_GENERATE_MIPMAP, GL_TRUE) to automatically generate mipmaps (note that the first argument of glTexParameteri should be a target such as GL_TEXTURE_2D, not the texture name). While this should work (though I am not familiar with this method), be aware that it has been deprecated in favor of glGenerateMipmap in modern OpenGL.
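
A minimal sketch of the modern route (glGenerateMipmap needs OpenGL 3.0+ or GL_ARB_framebuffer_object; w, h and pixels are placeholders):

glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);   // builds the whole mip chain from level 0
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);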

Possible Solution

// Load texture into surface...
// Error check..
// Bind GL texture...
// Calculate required alignment from the pitch (largest power of 2 that is a divisor of the pitch)
glPixelStorei(GL_UNPACK_ALIGNMENT, align);
//glPixelStorei(GL_UNPACK_ROW_LENGTH, row_length); // row_length = pitch / bytes_per_pixel
glTexImage2D(
    GL_TEXTURE_2D,
    0,
    internal_format,
    sdl_surface->w,
    sdl_surface->h,
    0,
    format,
    pixel_type,
    sdl_surface->pixels);
// Check for errors
if (use_mipmaps)
{
    glGenerateMipmap(GL_TEXTURE_2D);
    // Check for errors
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, /* filter mode */);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, /* filter mode */);
    // Check for errors
}
else
{
    // This makes the texture complete
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, /* filter mode */);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, /* filter mode */);
    // Check for errors
}
// Potentially reset GL_UNPACK_ALIGNMENT and/or GL_UNPACK_ROW_LENGTH to their default values
// Cleanup

Error checking

Note that it would be a good idea to add some error checking with glGetError() where I have noted Check for errors. You could print the error, if there is one, and then trigger a breakpoint/assert. I usually use a macro for this so that most of the error checking can be compiled out in release builds - something to this effect:

#ifdef MYPROJ_GRAPHICS_DEBUG
#define ASSERT_IF_GL_ERROR \
{ \
    GLenum last_error = glGetError(); \
    if (last_error != GL_NO_ERROR) \
    { \
        printf("GL Error: %d", last_error); \
        __debugbreak(); /* Visual Studio intrinsic - other compilers have similar intrinsics */ \
    } \
}
#else
#define ASSERT_IF_GL_ERROR
#endif
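
Usage would then look something like this (a hypothetical call site, reusing the placeholder names from the sketch above):

glTexImage2D(GL_TEXTURE_2D, 0, internal_format, w, h, 0, format, pixel_type, pixels);
ASSERT_IF_GL_ERROR
glGenerateMipmap(GL_TEXTURE_2D);
ASSERT_IF_GL_ERROR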

It is never a bad idea to have error checking in place, and it may reveal some information about what is going on - though since it sounds like the driver chokes on some undefined behavior here, it may not in this case.

Possible alternative

I think it is worth mentioning that I was not aware of this issue before answering this question. I had not run into it because I usually use stb_image to load textures. The reason I bring it up is that the stb_image documentation states that "There is no padding between image scanlines or between pixels, regardless of format.", which means stb_image handles this for you. If you can control the images you have to load (say, if you are making a game and control asset creation), stb_image is another option for loading images.
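
A sketch of that route (stbi_load and stbi_image_free are the actual stb_image calls; the file path is a placeholder):

#define STB_IMAGE_IMPLEMENTATION   // in exactly one .cpp file
#include "stb_image.h"

int w, h, channels;
// Request 4 channels; the returned buffer is tightly packed RGBA with no row padding.
unsigned char *pixels = stbi_load("assets/texture.png", &w, &h, &channels, 4);
if (pixels) {
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // safe for any width
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    stbi_image_free(pixels);
}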
