The camera chip converts incoming light into a signal by placing red, green, and blue color filters over subpixel sensors that are each sensitive to a wide range of wavelengths. The camera therefore does not perceive wavelength directly; it only records the relative luminous intensities behind a few filters with different peak wavelengths. As described in this answer, you can approximate the dominant wavelength of a given RGB color by converting it to HSV (hue / saturation / value) and then interpolating from the violet to the red wavelength over the hue component. This approach has limitations: fuchsia (between red and violet), for example, has no single wavelength associated with it; it is a color we perceive when we observe red and blue light at the same time.
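A minimal Python sketch of that hue-to-wavelength interpolation is below; the 270° hue cutoff and the 380–750 nm endpoints are rough assumptions for illustration, not calibrated colorimetric values.

```python
import colorsys

def rgb_to_wavelength(r, g, b):
    """Roughly approximate the dominant wavelength (nm) of an RGB color.

    Assumes hue 0 deg = red (~750 nm) and hue 270 deg = violet (~380 nm);
    hues beyond 270 deg are non-spectral and return None.
    """
    # colorsys returns hue in [0, 1): 0 = red, ~0.75 = violet.
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0

    # Hues past ~270 deg (magenta / fuchsia) have no single wavelength.
    if hue_deg > 270.0:
        return None

    # Linearly map hue 0..270 deg onto 750..380 nm.
    return 750.0 - (hue_deg / 270.0) * (750.0 - 380.0)


print(rgb_to_wavelength(255, 0, 0))    # pure red  -> 750.0 nm
print(rgb_to_wavelength(0, 0, 255))    # pure blue -> ~421 nm with this mapping
print(rgb_to_wavelength(255, 0, 255))  # fuchsia   -> None (non-spectral)
```

As the last call shows, the mapping simply refuses to assign a wavelength to non-spectral colors such as fuchsia, which matches the limitation described above.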