Why does OpenGL draw my 8-bit texture with black pixels instead of transparent ones?

Setting up OpenGL:

    glEnable( GL_BLEND );
    glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
    glEnable( GL_DEPTH_TEST );
    glEnable( GL_TEXTURE_2D );
    glEnable( GL_CULL_FACE );
    glCullFace( GL_BACK );
    glClearColor( 0.0, 1.0, 1.0, 1.0 );

Texture initialization:

    // 16x16 X pattern
    uint8_t buffer[ 16*16 ] = {
        255,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0, 255,
          0, 255,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0, 255,   0,
          0,   0, 255,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0, 255,   0,   0,
          0,   0,   0, 255,   0,   0,   0,   0,   0,   0,   0,   0, 255,   0,   0,   0,
          0,   0,   0,   0, 255,   0,   0,   0,   0,   0,   0, 255,   0,   0,   0,   0,
          0,   0,   0,   0,   0, 255,   0,   0,   0,   0, 255,   0,   0,   0,   0,   0,
          0,   0,   0,   0,   0,   0, 255,   0,   0, 255,   0,   0,   0,   0,   0,   0,
          0,   0,   0,   0,   0,   0,   0, 255, 255,   0,   0,   0,   0,   0,   0,   0,
          0,   0,   0,   0,   0,   0,   0, 255, 255,   0,   0,   0,   0,   0,   0,   0,
          0,   0,   0,   0,   0,   0, 255,   0,   0, 255,   0,   0,   0,   0,   0,   0,
          0,   0,   0,   0,   0, 255,   0,   0,   0,   0, 255,   0,   0,   0,   0,   0,
          0,   0,   0,   0, 255,   0,   0,   0,   0,   0,   0, 255,   0,   0,   0,   0,
          0,   0,   0, 255,   0,   0,   0,   0,   0,   0,   0,   0, 255,   0,   0,   0,
          0,   0, 255,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0, 255,   0,   0,
          0, 255,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0, 255,   0,
        255,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0, 255,
    };

    GLuint texture_id;
    glGenTextures( 1, &texture_id );
    glBindTexture( GL_TEXTURE_2D, texture_id );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, 16, 16, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

The texture is drawn on quads using glBegin / glEnd. The color of each vertex is white with full alpha: { r = 255, g = 255, b = 255, a = 255 }.
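The question does not include the actual drawing code; a minimal sketch of what it might look like, with illustrative coordinates:

    glBindTexture( GL_TEXTURE_2D, texture_id );
    glColor4ub( 255, 255, 255, 255 );  // white, full alpha
    glBegin( GL_QUADS );
        glTexCoord2f( 0.0f, 0.0f ); glVertex3f( -1.0f, -1.0f, 0.0f );
        glTexCoord2f( 1.0f, 0.0f ); glVertex3f(  1.0f, -1.0f, 0.0f );
        glTexCoord2f( 1.0f, 1.0f ); glVertex3f(  1.0f,  1.0f, 0.0f );
        glTexCoord2f( 0.0f, 1.0f ); glVertex3f( -1.0f,  1.0f, 0.0f );
    glEnd();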

Here is an example scene. The photograph and the cheese are loaded from PNG images. The cheese has transparent holes through which the photograph and the background show. I would like the X texture to be transparent as well:

[Screenshot of the example scene]

Why is the square black instead of transparent, and how can I fix my code to draw what I expected?

This question may be similar, but so far I have not been able to apply its short answer to my problem.

Update: I think I solved this, thanks to the answers below. I changed...

 glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, 16, 16, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer ); 

to

 glTexImage2D( GL_TEXTURE_2D, 0, GL_ALPHA, 16, 16, 0, GL_ALPHA, GL_UNSIGNED_BYTE, buffer ); 

... which gives the desired result.

+5
4 answers

@Dietrich Epp's answer is almost correct; it's just that the format you are looking for is GL_ALPHA:

 glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, 16, 16, 0, GL_ALPHA, GL_UNSIGNED_BYTE, buffer); 

Then set the blend function to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); or glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);, depending on whether your values are premultiplied. Last but not least, set the texture environment to GL_MODULATE:

 glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); 

Now you can set the color of the "X" with glColor.
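For example, to tint the X red (an illustrative color choice, not part of the answer itself):

    // With GL_MODULATE, the fragment's RGB comes from this color and its
    // alpha is this alpha multiplied by the texel's alpha.
    glColor4f( 1.0f, 0.0f, 0.0f, 1.0f );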

Another approach is to use alpha testing instead of blending.
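A minimal sketch of that variant (the 0.5 cutoff is an arbitrary choice):

    glDisable( GL_BLEND );
    glEnable( GL_ALPHA_TEST );
    glAlphaFunc( GL_GREATER, 0.5f );  // discard fragments whose alpha is <= 0.5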

+6

You are using luminance, and luminance is color; transparency (alpha) is not color.

If you want transparency, use a different format, such as GL_LUMINANCE_ALPHA, and keep the transparency in a separate channel.
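A sketch of what that could look like for the 16x16 buffer above; duplicating the single channel into both luminance and alpha is an assumption here:

    // Two bytes per texel: luminance, then alpha.
    uint8_t la_buffer[ 16*16*2 ];
    for ( int i = 0; i < 16*16; ++i ) {
        la_buffer[ i*2 + 0 ] = buffer[ i ];  // luminance
        la_buffer[ i*2 + 1 ] = buffer[ i ];  // alpha
    }
    glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, 16, 16, 0,
                  GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, la_buffer );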

In addition, this is already explained in the documentation.

GL_LUMINANCE

Each element is a single luminance value. The GL converts it to floating point, then assembles it into an RGBA element by replicating the luminance value three times for red, green, and blue and attaching 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1] (see glPixelTransfer).

- edit -

Any way to keep 8 bits per pixel and achieve the same effect?

I think you could "tell" OpenGL that the source image is color-indexed and set up an appropriate RGBA palette (where each entry has R == G == B == A == index). See glPixelTransfer and glPixelMap.
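A sketch of that idea (untested; the palette maps every index i to R = G = B = A = i/255):

    GLfloat map[ 256 ];
    for ( int i = 0; i < 256; ++i )
        map[ i ] = i / 255.0f;

    // Index-to-RGBA lookups applied while the texture is uploaded.
    glPixelMapfv( GL_PIXEL_MAP_I_TO_R, 256, map );
    glPixelMapfv( GL_PIXEL_MAP_I_TO_G, 256, map );
    glPixelMapfv( GL_PIXEL_MAP_I_TO_B, 256, map );
    glPixelMapfv( GL_PIXEL_MAP_I_TO_A, 256, map );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, 16, 16, 0,
                  GL_COLOR_INDEX, GL_UNSIGNED_BYTE, buffer );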

+3

Change GL_RGBA8 (which gives you a grayscale image with no alpha) to GL_INTENSITY:

 glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY, 16, 16, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer); 

A GL_RGBA8 texture created from GL_LUMINANCE data gives pixels of the form (Y, Y, Y, 1), but GL_INTENSITY with GL_LUMINANCE gives (Y, Y, Y, Y).

You will also want to change your blend mode to assume premultiplied alpha, i.e. change

 glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); 

to

 glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); 

Alternative:

You can also use GL_ALPHA and keep the normal blend mode for non-premultiplied alpha:

 glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 16, 16, 0, GL_ALPHA, GL_UNSIGNED_BYTE, buffer); 

Alternative No. 2:

You can keep GL_LUMINANCE and change the blend mode:

 glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR); 

This has the disadvantage that you cannot tint the texture without using something like glBlendColor (which is not in the OpenGL headers that ship with MSVC, so you would need GLEW or something similar):

    glBlendColor( ... );
    glBlendFunc( GL_CONSTANT_COLOR, GL_ONE_MINUS_SRC_COLOR );

Alternative No. 3:

Use OpenGL 3 and write a fragment shader that handles the single-channel texture however you want.
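A minimal sketch of such a fragment shader (GLSL 1.30, assuming the texture is uploaded as GL_R8; all names here are illustrative):

    // Use the texture's single red channel as alpha, with a white base color.
    const char *fragment_src =
        "#version 130\n"
        "uniform sampler2D tex;\n"
        "in vec2 uv;\n"
        "out vec4 frag_color;\n"
        "void main() {\n"
        "    float a = texture(tex, uv).r;\n"
        "    frag_color = vec4(1.0, 1.0, 1.0, a);\n"
        "}\n";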

+3

8-bit bitmaps do not store colors directly; they use a color palette. A zero is index 0 into the image's palette, which can hold 256 colors. 8-bit bitmaps have no alpha channel for transparency. The PNG images you used do have an alpha channel; such an image is 32-bit, with 8 bits for red, 8 for green, 8 for blue (24 bits of color) and 8 bits for alpha. You are mixing the two formats.

0

Source: https://habr.com/ru/post/1390382/

