You have two options, depending on what features you want to detect and what you want to do with the video.
- Ignore the illumination information entirely, because (as you found) it contains useless or even misleading information for detecting your features.
- Try to correct the uneven lighting (this is what you are asking about).
1) The first is very easy to do: convert the image to a colorspace that separates the luminance into its own channel, for example HSV (ignore the V channel), Lab (ignore L) or YUV (ignore Y), and detect your features on the two remaining channels. Of these, HSV is the best (as Yves Daust notes in the comments); YUV and Lab leave some lighting information in the UV / ab channels. In my experience any of them can work depending on your situation, but HSV works best. See the sketch after this list.
2) The second is more difficult. I would start by converting the image to HSV, then perform the correction on the V channel only:
- Apply a Gaussian blur to the V channel with a very large sigma. This gives you the local average of the lighting. Also compute the global average of V for the image (a single number). Then, for each pixel, subtract the local average from the actual V value and add the global average. This gives a very rough equalization of the lighting. Play around with the sigma value to find what works best for your footage (this step is also shown in the sketch after this list).
- If this fails, take a look at the option zenopy gives in their answer.
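Here is a minimal sketch of both options, assuming OpenCV (`cv2`) and NumPy in Python; the input filename and the sigma value are placeholders you would replace with your own video frames and tuning:

```python
import cv2
import numpy as np

# Hypothetical input: one frame of your video.
frame = cv2.imread("frame.png")

# Convert to HSV and split the channels.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Option 1: run your feature detection on H and S only, ignoring V.
# (Equivalently: Lab ignoring L, or YUV ignoring Y.)

# Option 2: roughly flatten the illumination in the V channel.
sigma = 51                                        # large sigma; tune this
v = v.astype(np.float32)
local_mean = cv2.GaussianBlur(v, (0, 0), sigma)   # local lighting estimate
global_mean = v.mean()                            # one number for the frame
v_flat = v - local_mean + global_mean             # remove local variation
v_flat = np.clip(v_flat, 0, 255).astype(np.uint8)

# Recombine and convert back for inspection.
corrected = cv2.cvtColor(cv2.merge([h, s, v_flat]), cv2.COLOR_HSV2BGR)
cv2.imwrite("frame_corrected.png", corrected)
```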
Whatever method you choose, I advise you to stay focused on your actual goal (detecting the features) and keep the intermediate steps only as good as your needs require. So try something quickly and see how well it helps you detect your features.