The internal format describes how the texture is stored on the GPU. The format (together with the type parameter) describes the layout of your pixel data in client memory.
Note that the internal format specifies both the number of channels (1 to 4) and the data type, while for the pixel data in client memory both are given by two separate parameters: format and type.
The GL will convert your pixel data to the internal format. If you want efficient texture uploads, you should use matching formats so that no conversion is needed. Keep in mind, though, that most GPUs store texture data in BGRA order; this is still represented by the GL_RGBA internal format, since the internal format only defines the number of channels and the data type, while the actual internal layout is entirely GPU-specific. It does mean, however, that for maximum performance it is often recommended to use GL_BGRA as the format of your pixel data in client memory.
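As a minimal sketch of such an upload (assuming an existing GL context; the function name and the bgra_pixels pointer are hypothetical placeholders):

```c
#include <GL/gl.h>

/* Sketch: upload 8-bit pixel data that is already laid out in BGRA order,
 * so that on many GPUs the driver can copy it without reordering channels. */
GLuint create_texture_bgra(GLsizei width, GLsizei height,
                           const unsigned char *bgra_pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_RGBA,            /* internal format: 4 channels */
                 width, height, 0,
                 GL_BGRA,            /* client-memory layout: B, G, R, A */
                 GL_UNSIGNED_BYTE,   /* one unsigned byte per channel */
                 bgra_pixels);
    return tex;
}
```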
Suppose the data is an array of 32 x 32 pixel values with four bytes per pixel (unsigned char values in the range 0-255) for red, green, blue, and alpha. What is the difference between the first GL_RGBA and the second?
The first GL_RGBA (the internalFormat) tells the GL that it should store the texture as 4 channels (RGBA) with normalized integers at the preferred precision (usually 8 bits per channel). The second GL_RGBA (the format) tells the GL that you are supplying 4 channels per pixel in the order R, G, B, A.
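In code, assuming a texture bound to GL_TEXTURE_2D and a hypothetical data array holding the 32 x 32 RGBA bytes, the call would look roughly like this:

```c
unsigned char data[32 * 32 * 4];  /* filled elsewhere: R, G, B, A bytes per pixel */

glTexImage2D(GL_TEXTURE_2D,
             0,                 /* mipmap level */
             GL_RGBA,           /* 1st GL_RGBA: how the GL stores the texture */
             32, 32,            /* width, height */
             0,                 /* border, must be 0 */
             GL_RGBA,           /* 2nd GL_RGBA: channel order of "data" in client memory */
             GL_UNSIGNED_BYTE,  /* data type of each channel in client memory */
             data);
```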
For example, you could supply 3-channel RGB data, and the GL would automatically expand it to RGBA (setting A to 1) as long as the internal format stays RGBA. You could also supply only the red channel.
The other way around, if you use GL_RED as the internalFormat, the GL ignores the G, B, and A channels of your input data. Both cases are sketched below.
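A rough sketch of both directions (the rgb_data and rgba_data buffers are hypothetical):

```c
/* 3-channel client data, 4-channel internal format:
 * the GL expands each pixel to RGBA and sets A to 1. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0,
             GL_RGB, GL_UNSIGNED_BYTE, rgb_data);

/* 4-channel client data, 1-channel internal format:
 * the GL keeps only R and ignores G, B, and A. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 32, 32, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba_data);
```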
Note also that the data types will be converted. If you supply RGB pixels with a 32-bit float per channel, you can do so using GL_FLOAT as the type. However, if you still use the GL_RGBA internal format, the GL converts the data to normalized integers with 8 bits per channel, so the extra precision is lost. If you want the GL to keep the floating-point precision, you also have to use a floating-point internal format such as GL_RGBA32F.
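A sketch of the difference, assuming a hypothetical float_data buffer with three floats per pixel:

```c
/* Internal format GL_RGBA: the floats are converted to 8-bit
 * normalized integers on upload, so the extra precision is lost. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0,
             GL_RGB, GL_FLOAT, float_data);

/* Internal format GL_RGBA32F: the texture keeps 32-bit floats per channel. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 32, 32, 0,
             GL_RGB, GL_FLOAT, float_data);
```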
Why is GL_RGBA_INTEGER invalid in this context?
The _INTEGER formats are for unnormalized integer textures. There is no automatic conversion in the GL for integer textures. You must use an integer internal format, and you must specify your pixel data with one of the _INTEGER formats; anything else results in an error.
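For example, a sketch of an unnormalized integer texture upload (the uint_data buffer is hypothetical); pairing an integer internal format with a non-_INTEGER pixel format, or the reverse, generates GL_INVALID_OPERATION:

```c
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_RGBA8UI,        /* integer internal format, read as usampler2D in shaders */
             32, 32, 0,
             GL_RGBA_INTEGER,   /* _INTEGER client format is required here */
             GL_UNSIGNED_BYTE,
             uint_data);
```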