The difference between format and internal format

I searched and read material about this, but could not understand it.

What is the difference between the internal format and the format of a texture in a call like

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, data); 

Suppose data is an array of 32 x 32 pixel values, where each pixel consists of four bytes (unsigned char, 0-255) for red, green, blue and alpha.

What is the difference between the first GL_RGBA and the second? Why is GL_RGBA_INTEGER invalid in this context?

+5
2 answers

The format (7th argument), along with the type argument, describes the data you pass in as the last argument. So the format/type combination defines the memory layout of the data you provide.

internalFormat (2nd argument) defines the format that OpenGL should use for internal data storage.

Often, the two will be very similar. And indeed, it is beneficial to make the two formats directly compatible; otherwise a conversion takes place while the data is loaded, which can hurt performance. Full (desktop) OpenGL allows combinations that require a conversion, while OpenGL ES restricts the supported combinations so that conversions are not needed in most cases.
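A minimal sketch of the difference (the buffer names byteData and floatData are placeholders, not from the question): the first call matches the client data to the internal format, so the driver can upload it directly; the second forces a float-to-8-bit-normalized conversion during the upload.

unsigned char byteData[32 * 32 * 4];   /* hypothetical 8-bit RGBA pixels */
float floatData[32 * 32 * 4];          /* hypothetical 32-bit float RGBA pixels */

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 32, 32, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, byteData);   /* matched: no conversion needed */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 32, 32, 0,
             GL_RGBA, GL_FLOAT, floatData);          /* mismatched: converted on upload */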

The reason GL_RGBA_INTEGER is not legal in this case is that there are rules about which conversions between format and internalFormat are supported. Here, GL_RGBA for internalFormat specifies a normalized format, while GL_RGBA_INTEGER for format specifies that the input consists of values that should be used as unnormalized integers. There is no conversion defined between the two.

While GL_RGBA is still supported as internalFormat for backward compatibility, sized formats are commonly used for the internal format in modern versions of OpenGL. For example, if you want the data stored as an RGBA image with 8 bits per component, the value for internalFormat is GL_RGBA8 .
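As a sketch, the call from the question with a sized internal format: GL_RGBA8 explicitly requests 8 bits per RGBA channel instead of leaving the precision to the unsized GL_RGBA.

glTexImage2D(GL_TEXTURE_2D, 0,
             GL_RGBA8,                      /* sized internal format: 8-bit RGBA on the GPU */
             32, 32, 0,
             GL_RGBA, GL_UNSIGNED_BYTE,     /* layout of the client data */
             data);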

Honestly, I think there would have been cleaner ways to define these APIs. But this is the way it is. Partly, it evolved this way to stay backward compatible with versions of OpenGL where the functionality was much more limited. Newer versions of OpenGL add the glTexStorage*() entry points, which make some of this nicer because they separate the allocation of the internal storage from the specification of the data.
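A brief sketch of that separation (requires OpenGL 4.2+ or ARB_texture_storage; data is the same buffer as in the question): glTexStorage2D allocates immutable storage using only a sized internal format, and glTexSubImage2D then fills level 0, with format/type describing only the client data.

glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 32, 32);        /* allocate 1 mip level, 32x32, RGBA8 */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 32, 32,
                GL_RGBA, GL_UNSIGNED_BYTE, data);           /* upload the pixel data */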

+8

The internal format describes how the texture is stored on the GPU. The format (together with the type parameter) describes the layout of your pixel data in client memory.

Note that the internal format specifies both the number of channels (1 to 4) and the data type, while for the pixel data in client memory both are given via two separate parameters.

The GL will convert your pixel data into the internal format. If you want efficient texture uploads, you should use matching formats so that no conversion is required. But keep in mind that most GPUs store texture data in BGRA order; this is still represented by the GL_RGBA internal format, because the internal format only specifies the number of channels and the data type, and the actual internal layout is entirely GPU-specific. It does mean, however, that using GL_BGRA as the format of your pixel data in client memory is often recommended for maximum performance.
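A sketch of that fast path (the bgraData buffer is a placeholder for pixel data already laid out as B, G, R, A in client memory); the internal format stays GL_RGBA8, since it only names the channels and their precision:

unsigned char bgraData[32 * 32 * 4];   /* hypothetical pixels in B, G, R, A byte order */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 32, 32, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, bgraData);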

Suppose the data is an array of 32 x 32 pixel values, where there are four bytes per pixel (unsigned char, 0-255) for red, green, blue, and alpha. What is the difference between the first GL_RGBA and the second?

The first (internalFormat) tells the GL that it should store the texture as 4 channels (RGBA) in a normalized integer format with its preferred precision (typically 8 bits per channel). The second (format) tells the GL that you provide 4 channels per pixel in the order R, G, B, A.

For example, you could provide the data as 3-channel RGB data, and the GL would automatically expand it to RGBA (setting A to 1) while keeping RGBA as the internal format. You could also supply only the red channel.

The other way around, if you use GL_RED as the internalFormat, the GL will ignore the G, B and A channels of your input. Both cases are sketched below.
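A minimal sketch of both directions (the rgbData and rgbaData buffers are placeholders): first RGB client data expanded into an RGBA internal format, then RGBA client data reduced to a red-only internal format.

unsigned char rgbData[32 * 32 * 3];    /* hypothetical 3-channel RGB pixels */
unsigned char rgbaData[32 * 32 * 4];   /* hypothetical 4-channel RGBA pixels */

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 32, 32, 0,
             GL_RGB, GL_UNSIGNED_BYTE, rgbData);     /* A is filled in as 1 */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 32, 32, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgbaData);   /* G, B and A are dropped on upload */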

Also note that the data types will be converted. If you provide RGB pixels with a 32-bit float per channel, you can use GL_FLOAT as the type. However, if you still use the GL_RGBA internal format, the GL will convert the values to normalized integers with 8 bits per channel, so the extra precision is lost. If you want the GL to keep the floating-point precision, you also have to use a floating-point internal format like GL_RGBA32F .
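A sketch of keeping full precision (the floatPixels buffer is a placeholder): a floating-point sized internal format paired with GL_FLOAT client data, so nothing is squeezed into 8-bit normalized values.

float floatPixels[32 * 32 * 4];        /* hypothetical 32-bit float RGBA pixels */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 32, 32, 0,
             GL_RGBA, GL_FLOAT, floatPixels);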

Why is GL_RGBA_INTEGER invalid in this context?

The _INTEGER formats are for unnormalized integer textures. There is no automatic conversion for integer textures in the GL. You have to use an integer internal format, and you have to specify your pixel data with one of the _INTEGER formats; any other combination results in an error.
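For comparison, a sketch of a legal integer-texture upload (the ids buffer is a placeholder): both sides are "integer", a sized *UI internal format paired with a *_INTEGER client format.

unsigned char ids[32 * 32 * 4];        /* hypothetical per-pixel unsigned integer values */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 32, 32, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, ids);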

+4

Source: https://habr.com/ru/post/1239364/

