Comparing Images and Labeling Differences in C#

I am currently working on a project where I need to write software that compares two images of the same area and draws a rectangle around the differences. I wrote the program in C#/.NET in a few hours, but soon realized that it is FAR too expensive. Here are the steps I followed.

  • I created a Pixel class that stores the x, y coordinates of a pixel, and a PixelRectangle class that stores a list of pixels along with width, height, x, and y properties.

  • I loop over every pixel of both images, comparing the colour of each corresponding pair of pixels. If the colours differ, I create a new Pixel object with that pixel's x, y coordinates and add it to a list of pixel differences.

  • Next, I wrote a method that recursively checks each pixel in the pixelDifference list and builds PixelRectangle objects containing only pixels that are directly adjacent to each other. (I'm pretty sure this bad boy does most of the damage, since it's the one that gave me an error.)

  • Finally, I computed the x, y coordinates and dimensions of each rectangle from the pixels stored in each PixelRectangle object, and drew the rectangles over the original image to show where the differences were.

My questions are: am I going about this the right way? Would a quad tree be of any use for this project? If you could give me the basic steps for how this is usually done, I would be grateful. Thanks in advance.

  • Dave
3 answers

It looks like you want to implement blob detection. My suggestion is not to reinvent the wheel: just use OpenCVSharp or Emgu CV for this. Google 'blob detection' and OpenCV.

If you want to do it yourself, my 2 cents:

First of all, specify what you actually want to do. There are really two different tasks:

  • calculate the difference between two images (I assume they are the same size);

  • draw a rectangle around the "areas" that are "different" as measured by step 1. The open questions here are what counts as an "area" and what counts as "different".

My suggestions for each step:

(I assume both are greyscale images. If not, compute the sum of the colour channels for each pixel to get a grey value.)

1) Loop over all the pixels in both images and subtract them. Apply a threshold to the absolute difference to decide whether there is enough difference to represent an actual change in the scene (as opposed to sensor noise, etc., if the images come from a camera). Then store the result in a third image: 0 for no difference, 255 for a difference. Done correctly, this should be REALLY fast. In C#, however, you need to use pointers to get decent performance. Here is an example of how to do this (note: the code is not tested!):

    /// <summary>
    /// Computes the difference between two images and stores the result in a third image.
    /// The input images must have the same dimensions and colour depth.
    /// </summary>
    /// <param name="imageA">first image</param>
    /// <param name="imageB">second image</param>
    /// <param name="imageDiff">output: 0 if same, 255 if different</param>
    /// <param name="width">width of the images</param>
    /// <param name="height">height of the images</param>
    /// <param name="channels">number of colour channels in the input images</param>
    /// <param name="threshold">minimum absolute difference that counts as a change</param>
    unsafe void ComputeDifference(byte[] imageA, byte[] imageB, byte[] imageDiff,
                                  int width, int height, int channels, int threshold)
    {
        int ch = channels;
        fixed (byte* piA = imageA, piB = imageB, piD = imageDiff)
        {
            if (ch > 1) // a colour image (assuming ch == 3 for RGB and ch == 4 for RGBA)
            {
                for (int r = 0; r < height; r++)
                {
                    byte* pA = piA + r * width * ch;
                    byte* pB = piB + r * width * ch;
                    byte* pD = piD + r * width; // the output has only one channel!
                    for (int c = 0; c < width; c++)
                    {
                        // assuming three colour channels; if there are more,
                        // ignore the extras (likely alpha)
                        int LA = pA[c * ch] + pA[c * ch + 1] + pA[c * ch + 2];
                        int LB = pB[c * ch] + pB[c * ch + 1] + pB[c * ch + 2];
                        if (Math.Abs(LA - LB) > threshold) { pD[c] = 255; } else { pD[c] = 0; }
                    }
                }
            }
            else // a single grey-scale channel
            {
                for (int r = 0; r < height; r++)
                {
                    byte* pA = piA + r * width;
                    byte* pB = piB + r * width;
                    byte* pD = piD + r * width;
                    for (int c = 0; c < width; c++)
                    {
                        if (Math.Abs(pA[c] - pB[c]) > threshold) { pD[c] = 255; } else { pD[c] = 0; }
                    }
                }
            }
        }
    }

2)

Not sure what you mean by "area" here. There are several solutions depending on what you mean, from the simplest to the most complex:

a) The value of each output pixel is simply the difference at that pixel.

b) Assuming you have only one area of difference (unlikely), calculate the bounding box of all 255-valued pixels in your output image. This can be done with a simple min/max over the x and y positions of all 255-valued pixels. It is one pass through the image and should be very fast.
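Step (b) can be sketched in plain (safe) C#; the method name and the row-major byte[] layout are my assumptions, not from the answer:

```csharp
using System;

// Bounding box of all 255-valued pixels in a single-channel difference
// image stored row-major in a byte[]. Returns null if nothing differs.
(int X, int Y, int W, int H)? BoundingBox(byte[] diff, int width, int height)
{
    int minX = int.MaxValue, minY = int.MaxValue, maxX = -1, maxY = -1;
    for (int r = 0; r < height; r++)
        for (int c = 0; c < width; c++)
            if (diff[r * width + c] == 255)
            {
                if (c < minX) minX = c;
                if (c > maxX) maxX = c;
                if (r < minY) minY = r;
                if (r > maxY) maxY = r;
            }
    if (maxX < 0) return null; // no differences at all
    return (minX, minY, maxX - minX + 1, maxY - minY + 1);
}
```

As the answer says, this is a single pass over the image.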

c) If you have many separate areas that change, compute the "connected components": collections of pixels that are connected to each other. This only works on a binary image (i.e. on or off, 0 or 255, as in our case). You can implement this in C# (I have done it before), but I won't do it for you here; it is a bit involved. Again, look at OpenCV or google 'connected components algorithm'.

Once you have the list of connected components, draw a box around each one. Done.
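For completeness, here is roughly what connected-component labelling looks like over the binary diff image (4-connectivity, iterative flood fill; all names are mine, and this is a sketch, not a tuned implementation):

```csharp
using System;
using System.Collections.Generic;

// Finds 4-connected regions of 255-valued pixels in a row-major binary
// image and returns one bounding box (x, y, w, h) per region. The
// explicit stack avoids deep recursion.
List<(int X, int Y, int W, int H)> ConnectedComponentBoxes(byte[] bin, int width, int height)
{
    var boxes = new List<(int, int, int, int)>();
    var visited = new bool[bin.Length];
    var stack = new Stack<int>();
    for (int start = 0; start < bin.Length; start++)
    {
        if (bin[start] != 255 || visited[start]) continue;
        int minX = width, minY = height, maxX = 0, maxY = 0;
        visited[start] = true;
        stack.Push(start);
        while (stack.Count > 0)
        {
            int i = stack.Pop();
            int x = i % width, y = i / width;
            if (x < minX) minX = x;
            if (x > maxX) maxX = x;
            if (y < minY) minY = y;
            if (y > maxY) maxY = y;
            // visit the unvisited 255-valued neighbours: left, right, up, down
            foreach (int n in new[] { i - 1, i + 1, i - width, i + width })
            {
                if (n < 0 || n >= bin.Length) continue;
                if ((n == i - 1 && x == 0) || (n == i + 1 && x == width - 1)) continue;
                if (bin[n] == 255 && !visited[n]) { visited[n] = true; stack.Push(n); }
            }
        }
        boxes.Add((minX, minY, maxX - minX + 1, maxY - minY + 1));
    }
    return boxes;
}
```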


You're pretty much on the right track. Step 3 should not throw a StackOverflowException if it is implemented correctly, so I'd take a closer look at that method.

What is most likely happening is that your recursive check of each PixelDifference member runs endlessly. Make sure you keep track of which pixels have already been checked: once a pixel has been checked, it no longer needs to be considered when checking neighbouring pixels. Before recursing into any neighbouring pixel, make sure it has not already been visited.

As an alternative to tracking which pixels have been checked, you can remove an element from PixelDifference after checking it. Of course, this may require changing how you implement your algorithm, since removing items from a list while iterating over it can introduce a whole new set of problems.
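A minimal sketch of the visited-tracking idea, assuming the difference pixels are available as (x, y) pairs (the names here are hypothetical, not the OP's classes). A HashSet makes the "already checked?" test cheap, and an explicit stack replaces the recursion so that a large region cannot overflow the call stack:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Groups difference pixels into 4-connected clusters. Each pixel is
// removed from 'remaining' the moment it is discovered, so no pixel is
// ever examined twice and the loop always terminates.
List<List<(int, int)>> GroupAdjacent(IEnumerable<(int, int)> diffs)
{
    var remaining = new HashSet<(int, int)>(diffs);
    var groups = new List<List<(int, int)>>();
    while (remaining.Count > 0)
    {
        var seed = remaining.First();
        remaining.Remove(seed);
        var group = new List<(int, int)>();
        var stack = new Stack<(int, int)>();
        stack.Push(seed);
        while (stack.Count > 0)
        {
            var (x, y) = stack.Pop();
            group.Add((x, y));
            // 4-connected neighbours that have not been visited yet
            foreach (var n in new[] { (x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1) })
                if (remaining.Remove(n)) // true only if n was still unvisited
                    stack.Push(n);
        }
        groups.Add(group);
    }
    return groups;
}
```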


There is a much easier way to find the difference between two images.

So, if you have two images:

    Image<Gray, Byte> A;
    Image<Gray, Byte> B;

you can quickly find the differences:

 A - B 

Of course, images do not store negative values, so to capture the differences where the pixels in image B are larger than those in image A, you also need:

 B - A 

Putting the two together:

 (A - B) + (B - A) 
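To see why both directions are needed, here is a tiny illustration with saturating byte arithmetic, which is how OpenCV-style byte images behave (the helper names are mine, not Emgu's API):

```csharp
using System;

// Per-pixel saturating subtraction: a negative result clamps to 0,
// just like subtracting byte images. Adding both directions then
// recovers the absolute difference |a - b|.
byte SatSub(byte a, byte b) => (byte)Math.Max(0, a - b);
byte AbsDiffBothWays(byte a, byte b) => (byte)(SatSub(a, b) + SatSub(b, a));
```

So (A - B) + (B - A) on byte images amounts to a per-pixel absolute difference.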

This is fine, but we can do even better.

This can also be computed using Fourier transforms:

    CvInvoke.cvDFT(A.Convert<Gray, Single>().Ptr, DFTA.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
    CvInvoke.cvDFT(B.Convert<Gray, Single>().Ptr, DFTB.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
    CvInvoke.cvDFT((DFTB - DFTA).Convert<Gray, Single>().Ptr, AB.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);
    CvInvoke.cvDFT((DFTA - DFTB).Ptr, BA.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);

I believe the results of this method are much better. You can then make a binary image from it, i.e. threshold the image so that pixels with no change get 0 and pixels with changes get 255.
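The thresholding step might look like this on raw pixel data (a sketch with assumed names; in Emgu you would threshold the Image<Gray, Byte> directly):

```csharp
using System;

// Maps every pixel above the threshold to 255 and the rest to 0,
// producing the binary change mask described above.
byte[] Threshold(byte[] img, int t)
{
    var result = new byte[img.Length];
    for (int i = 0; i < img.Length; i++)
        result[i] = img[i] > t ? (byte)255 : (byte)0;
    return result;
}
```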

Now, for the second part of the problem, I would suggest a simple, rough solution:

Divide the image into rectangular regions. Quad trees may not even be necessary; say, an 8x8 grid... (You can experiment with different grid sizes for different results.)

Then take the convex hull of the changed pixels in each region. These convex hulls can be turned into rectangles by finding the minimum and maximum x and y coordinates of their vertices.

Quick and easy.
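The grid part of this suggestion can be sketched as follows, skipping the convex-hull step and simply flagging the grid cells that contain any changed pixel (all names are my assumptions):

```csharp
using System;

// Divides a row-major binary image into a grid x grid layout of cells
// and marks each cell that contains at least one 255-valued pixel.
// Drawing a rectangle per marked cell approximates the changed areas.
bool[,] MarkChangedCells(byte[] bin, int width, int height, int grid)
{
    int cellW = (width + grid - 1) / grid;  // ceiling division
    int cellH = (height + grid - 1) / grid;
    var changed = new bool[grid, grid];
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            if (bin[y * width + x] == 255)
                changed[y / cellH, x / cellW] = true;
    return changed;
}
```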


Source: https://habr.com/ru/post/1491007/

