Let's say you work with RGB colors: each color is represented by three intensities or brightnesses. You have to choose between linear RGB and sRGB. For now, let's simplify by ignoring the three different intensities and supposing that you have just one intensity: that is, you are dealing only with shades of gray.
In a linear color space, the relationship between the numbers you store and the intensities they represent is linear. In practice, this means that if you double the number, you double the intensity (the lightness of the gray). If you want to add two intensities together (because you are computing the intensity based on the contributions of two light sources, or because you are adding a transparent object on top of an opaque object), you can do this by just adding the two numbers together. If you are doing any kind of 2D blending or 3D shading, or almost any image processing, then you want your intensities in a linear color space, so you can just add, subtract, multiply, and divide the numbers and have the same effect on the intensities. Most color-processing and rendering algorithms only give correct results with linear RGB, unless you add extra weights to everything.
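For instance, here is a minimal sketch of what that buys you (in C; read_light_a and read_light_b are hypothetical helpers standing in for wherever your linear values come from):

/* In a linear color space, arithmetic on the stored numbers has the
   same effect on the actual intensities. */
float a = read_light_a();          /* linear intensity from light source A */
float b = read_light_b();          /* linear intensity from light source B */
float sum   = a + b;               /* two lights hitting the same point    */
float blend = 0.5f * a + 0.5f * b; /* a 50% transparent overlay            */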
That sounds very simple, but there is a problem: the human eye's sensitivity to light is finer at low intensities than at high intensities. That is to say, if you make a list of all the intensities you can distinguish, there are more dark ones than light ones. To put it another way, you can tell dark shades of gray apart better than you can tell light shades of gray apart. In particular, if you use 8 bits to represent your intensity, and you do this in a linear color space, you end up with too many light shades and not enough dark shades. You get banding in your dark areas, while in your light areas you waste bits on different shades of near-white that the user cannot distinguish from each other.
To avoid this problem and make the best use of those 8 bits, we tend to use sRGB. The sRGB standard specifies a curve that makes your colors non-linear. The curve is shallower at the bottom, so you can have more dark grays, and steeper at the top, so you have fewer light grays. If you double the number, you more than double the intensity. This means that if you add sRGB colors together, you get a result that is brighter than it should be. These days, most monitors interpret their input colors as sRGB. So, when you put a color on the screen, or store it in a texture with 8 bits per channel, store it as sRGB so that you use those 8 bits as efficiently as possible.
You will notice that we now have a problem: we want our colors processed in linear space but stored in sRGB, so you end up doing an sRGB-to-linear conversion on read and a linear-to-sRGB conversion on write. As we have already said, linear 8-bit intensities do not have enough dark shades, which would cause problems, so there is another practical rule: do not use 8-bit linear colors if you can avoid it. It is becoming conventional to follow the rule that 8-bit colors are always sRGB, so you do your sRGB-to-linear conversion at the same time as you widen your intensity from 8 to 16 bits, or from integer to floating point; similarly, when you finish your floating-point processing, you narrow back to 8 bits at the same time as you convert to sRGB. If you follow these rules, you never have to worry about gamma correction.
When you read an sRGB image and want linear intensities, apply this formula to each intensity:
float s = read_channel();
float linear;
if (s <= 0.04045)
    linear = s / 12.92;
else
    linear = pow((s + 0.055) / 1.055, 2.4);
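To see the non-linearity in action: feeding s = 0.25 and s = 0.5 through this formula gives linear intensities of roughly 0.051 and 0.214, so doubling the stored sRGB number roughly quadruples the intensity. That is why adding sRGB values directly gives results that are too bright.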
Going the other way, when you want to write an image as sRGB, apply this formula to each linear intensity:
float linear = do_processing();
float s;
if (linear <= 0.0031308)
    s = linear * 12.92;
else
    s = 1.055 * pow(linear, 1.0/2.4) - 0.055;
In both cases, the floating-point value ranges from 0 to 1, so if you are reading 8-bit integers you want to divide by 255 first, and if you are writing 8-bit integers you want to multiply by 255 last, just as you usually would. That is all you need to know to work with sRGB.
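Putting the two formulas together with the 8-bit scaling, a minimal pair of helper functions might look like this (a sketch in C; the function names are mine, not part of any standard API):

#include <math.h>
#include <stdint.h>

/* 8-bit sRGB channel -> linear intensity in [0, 1] */
float srgb8_to_linear(uint8_t v)
{
    float s = v / 255.0f;                    /* divide by 255 first */
    if (s <= 0.04045f)
        return s / 12.92f;
    return powf((s + 0.055f) / 1.055f, 2.4f);
}

/* linear intensity in [0, 1] -> 8-bit sRGB channel */
uint8_t linear_to_srgb8(float linear)
{
    float s;
    if (linear <= 0.0031308f)
        s = linear * 12.92f;
    else
        s = 1.055f * powf(linear, 1.0f / 2.4f) - 0.055f;
    return (uint8_t)(s * 255.0f + 0.5f);     /* multiply by 255 last, then round */
}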
So far I have only been dealing with a single intensity, but there are cleverer things you can do with colors. The human eye can distinguish different brightnesses better than different hues (more technically, it has better luminance resolution than chrominance resolution), so you can use your 24 bits more efficiently by storing the brightness separately from the hue. This is what the YUV, YCrCb, etc. representations do. The Y channel is the overall lightness of the color and uses more bits (or has a higher spatial resolution) than the other two channels. That way, you don't (always) need to apply a curve like you do with RGB intensities. YUV is a linear color space, so if you double the number in the Y channel, you double the brightness of the color, but you cannot add or multiply YUV colors together as you can with RGB colors, so it is not used for image processing, only for storage and transmission.
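As a rough illustration of what the Y channel is, here is a sketch assuming the Rec. 709 / sRGB luminance weights (other standards such as Rec. 601 use slightly different coefficients):

/* Overall lightness (Y) from linear RGB, using Rec. 709 weights.
   Green dominates because the eye is most sensitive to it. */
float rgb_to_luminance(float r, float g, float b)
{
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}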
I think this answers your question, so I will end with a quick historical note. Before sRGB, older CRTs had a non-linearity built into them: if you doubled the voltage for a pixel, you more than doubled the intensity. How much more varied from monitor to monitor, and that parameter was called the gamma. This behavior was useful because it meant you got more darks than lights, but it also meant you could not tell how bright your colors would be on a user's CRT unless you calibrated it first. Gamma correction means taking the colors you start with (probably in a linear color space) and adjusting them for the gamma of the user's CRT. OpenGL comes from this era, which is why its sRGB behavior is sometimes a little confusing. But now GPU vendors tend to work with the convention described above: when you store an 8-bit intensity in a texture or framebuffer, it is sRGB, and when you process colors, they are linear. For example, in OpenGL ES 3.0, each framebuffer and texture has an "sRGB flag" that you can turn on to enable automatic conversion when reading and writing, so you do not need to perform sRGB conversion or gamma correction explicitly.
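In practice this usually just means picking an sRGB internal format when you create the texture. A sketch, assuming an OpenGL ES 3.0 (or desktop GL 3.x) context and that width, height, and pixels describe your 8-bit sRGB image data:

/* Create a texture whose 8-bit contents are interpreted as sRGB.
   When a shader samples it, the GPU converts the values to linear for you. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8,   /* sRGB internal format */
             width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);  /* 8-bit sRGB input data */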