How to fix uneven lighting in images using MATLAB?

I am performing feature detection in a video using MATLAB. The lighting conditions vary across different parts of the video, which causes some regions to be missed when I convert the RGB images to binary images.

The lighting in a given part of the scene also changes over the course of the video.

Can you suggest a better method in MATLAB to balance the lighting, both within each frame and across the video?

+6
3 answers

You have two options, depending on what features you want to detect and what you want to do with the video.

  • Ignore the illumination information altogether, because (as you found) it contains useless or even misleading information for detecting your features.
  • Try to correct for the uneven lighting (this is what you are asking for).

1) This is very easy to do: convert the image to a color space that separates the illumination into its own channel, for example HSV (ignore the V channel), Lab (ignore L), or YUV (ignore Y), and detect your features on the two remaining channels. Of these, HSV is the best (as Yves Daust notes in the comments, YUV and Lab leave some of the illumination information in the UV / ab channels). In my experience the latter two can still work depending on your situation, but HSV is better.
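As an illustration of option 1, here is a minimal MATLAB sketch, assuming the Image Processing Toolbox is available and that `frame` holds an RGB frame from the video; the variable names and the hue/saturation thresholds are only placeholders for whatever detection step you actually use.

    % Option 1: detect on the illumination-independent channels of HSV.
    hsv = rgb2hsv(frame);        % frame is an RGB image (uint8 or double)
    H   = hsv(:, :, 1);          % hue
    S   = hsv(:, :, 2);          % saturation
    % The V channel hsv(:, :, 3) is ignored, since it carries the illumination.

    % Placeholder detection: threshold on hue and saturation only.
    mask = (H > 0.05 & H < 0.15) & (S > 0.4);

    % Optionally remove small speckles caused by noise.
    mask = bwareaopen(mask, 20);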

2) This is more difficult. I would start by converting the image to HSV, and then perform the correction on the V channel only:

  • Apply a Gaussian blur to the V-channel image with a very large sigma. This gives you the local average of the illumination. Compute the global average V for the whole image (a single number). Then, for each pixel, subtract the local average from the actual V value and add the global average. You now have a very rough equalization of the lighting. Play around with the sigma value a bit to find the one that works best (a sketch of this step follows after this list).
  • If that is not good enough, take a look at the option zenopy gives in their answer.
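Here is a minimal MATLAB sketch of the blur-and-subtract correction from the first bullet, assuming `frame` is an RGB frame and the Image Processing Toolbox is available; sigma = 30 and the variable names are just assumptions to experiment with.

    % Option 2: rough illumination equalization on the V channel.
    hsv = rgb2hsv(frame);
    V   = hsv(:, :, 3);

    sigma     = 30;                       % very large sigma; tune for your footage
    localAvg  = imgaussfilt(V, sigma);    % local average illumination
    globalAvg = mean(V(:));               % global average V (a single number)

    Vcorr = V - localAvg + globalAvg;     % subtract local average, add back global
    Vcorr = min(max(Vcorr, 0), 1);        % clamp to the valid [0, 1] range

    hsv(:, :, 3) = Vcorr;
    frameCorr = hsv2rgb(hsv);             % corrected frame, back in RGB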

Whichever method you choose, I advise you to focus on your end goal (that is, detecting your features) and to choose intermediate steps that are just good enough for your needs. So try something quickly, check how much it helps your feature detection, and refine from there.

+7

This is not a trivial task, but there are many ways to tackle it. I recommend starting by implementing the Retinex algorithm yourself, or using one of the existing implementations.

The basic idea is that brightness (the observed image intensity) = illumination (the incident light) x reflectance (the fraction of light reflected):

L(x,y) = I(x,y) x R(x,y) 

It is the R part that you are interested in.

To work with color images, for each frame first convert to the HSV color space and apply Retinex to the V (value) channel.
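For illustration, here is a minimal single-scale Retinex sketch on the V channel in MATLAB, assuming `frame` is an RGB frame and the Image Processing Toolbox is available; the Gaussian sigma of 40 and the final rescaling are assumptions, and this is only one of many Retinex variants.

    % Single-scale Retinex on the V (value) channel.
    hsv = rgb2hsv(frame);
    V   = hsv(:, :, 3) + eps;             % avoid log(0)

    illum = imgaussfilt(V, 40) + eps;     % smooth estimate of the illumination I(x,y)
    logR  = log(V) - log(illum);          % log reflectance: log R = log L - log I

    R = mat2gray(logR);                   % rescale to [0, 1] for display / thresholding

    hsv(:, :, 3) = R;
    frameRetinex = hsv2rgb(hsv);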

Hope this makes sense.

+5

In addition to correcting the uneven lighting within individual frames, which Retinex or high-pass filtering addresses, you could add an automatic gain correction for the video as a whole.

The idea is to normalize the image intensities by applying a linear transformation to the color components so that the mean and standard deviation of all three channels are mapped to predefined values (mean → 128, standard deviation → 64).
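A minimal MATLAB sketch of such a gain correction, assuming `frame` is an RGB frame; the targets of mean 128 and standard deviation 64 from the text are expressed here on a [0, 1] scale, and the variable names are placeholders.

    % Per-channel linear normalization: map each channel to mean 128, std 64
    % (expressed on a 0-255 scale).
    frame = im2double(frame);             % work in double, range [0, 1]
    targetMean = 128 / 255;
    targetStd  = 64  / 255;

    frameNorm = zeros(size(frame), 'like', frame);
    for c = 1:3
        ch = frame(:, :, c);
        m  = mean(ch(:));
        s  = std(ch(:));
        frameNorm(:, :, c) = (ch - m) / max(s, eps) * targetStd + targetMean;
    end
    frameNorm = min(max(frameNorm, 0), 1);  % clamp to the valid range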

Histogram equalization will have a similar effect of "standardizing" the intensity levels.
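For comparison, a histogram equalization of the V channel could look like this (assuming the Image Processing Toolbox; `frame` and `frameEq` are placeholder names):

    % Histogram equalization of the value channel as a crude "standardization".
    hsv = rgb2hsv(frame);
    hsv(:, :, 3) = histeq(hsv(:, :, 3));  % equalize only the V channel
    frameEq = hsv2rgb(hsv);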

Unfortunately, large changes in the scene content will affect either normalization, so the background intensities will not remain as constant as you might expect.

+5

Source: https://habr.com/ru/post/908504/

