it looks like you want to implement blob detection. my suggestion is not to reinvent the wheel and to just use OpenCvSharp or Emgu CV for this. google 'blob detection' and OpenCV.
if you want to do it yourself, my 2 cents are:
First of all, specify what you want to do. there are really two different things:

1. calculate the difference between two images (I assume they are the same size)
2. draw a square around the "areas" that are "different" as measured by 1. the open questions here are what counts as an 'area' and what is considered "different".
my suggestion for each step:
(my guess is that both are grayscale images. if not, sum the colour channels of each pixel to get a grey value)
1) cycle through all the pixels in both images and subtract them. set a threshold on the absolute difference to decide whether there is enough of a difference to represent an actual change in the scene (as opposed to sensor noise etc., if the images come from a camera). then save the result in a third image: 0 for no difference, 255 for a difference. done correctly, this should be REALLY fast. in C#, though, you should use pointers to get decent performance. here is an example of how to do this (note: the code is not verified!):
/// <summary>
/// computes the difference between two images and stores the result in a third image
/// input images must be of the same dimensions and colour depth
/// </summary>
/// <param name="imageA">first image</param>
/// <param name="imageB">second image</param>
/// <param name="imageDiff">output 0 if same, 255 if different</param>
/// <param name="width">width of images</param>
/// <param name="height">height of images</param>
/// <param name="channels">number of colour channels for the input images</param>
unsafe void ComputeDifference(byte[] imageA, byte[] imageB, byte[] imageDiff, int width, int height, int channels, int threshold)
{
    int ch = channels;
    fixed (byte* piA = imageA, piB = imageB, piD = imageDiff)
    {
        if (ch > 1) // this is a colour image (assuming ch == 3 for RGB and ch == 4 for RGBA)
        {
            for (int r = 0; r < height; r++)
            {
                byte* pA = piA + r * width * ch;
                byte* pB = piB + r * width * ch;
                byte* pD = piD + r * width; // the output has only one channel!
                for (int c = 0; c < width; c++)
                {
                    // assuming three colour channels. if channels is larger, ignore the extras (likely alpha)
                    int LA = pA[c * ch] + pA[c * ch + 1] + pA[c * ch + 2];
                    int LB = pB[c * ch] + pB[c * ch + 1] + pB[c * ch + 2];
                    pD[c] = (byte)(Math.Abs(LA - LB) > threshold ? 255 : 0);
                }
            }
        }
        else // a single greyscale channel
        {
            for (int r = 0; r < height; r++)
            {
                byte* pA = piA + r * width;
                byte* pB = piB + r * width;
                byte* pD = piD + r * width;
                for (int c = 0; c < width; c++)
                {
                    pD[c] = (byte)(Math.Abs(pA[c] - pB[c]) > threshold ? 255 : 0);
                }
            }
        }
    }
}
2)
not sure what you mean by 'area' here. there are several solutions depending on what you mean, from the simplest to the most complex:
a) the value of each pixel in your output image already is the difference
b) provided that you have only one area of difference (unlikely), calculate the bounding box of all 255 pixels in your output image. this can be done with a simple max/min of the x and y positions over all 255 pixels. one pass through the image, so it should be very fast.
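to make option b) concrete, here is a minimal self-contained sketch of the one-pass min/max scan (class and method names are mine, not from any library):

```csharp
using System;

static class BoundingBox
{
    // scans a binary mask (0 or 255, row-major, width*height bytes) and returns
    // the tightest box containing all 255 pixels, or null if none are set.
    public static (int MinX, int MinY, int MaxX, int MaxY)? FromMask(byte[] mask, int width, int height)
    {
        int minX = int.MaxValue, minY = int.MaxValue, maxX = -1, maxY = -1;
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                if (mask[y * width + x] == 255)
                {
                    if (x < minX) minX = x;
                    if (x > maxX) maxX = x;
                    if (y < minY) minY = y;
                    if (y > maxY) maxY = y;
                }
            }
        }
        return maxX < 0 ? null : (minX, minY, maxX, maxY);
    }
}
```

one pass over the mask, four comparisons per set pixel, so the cost is dominated by the scan itself.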
c) if you have many separate areas that are changing, calculate the "connected components". a connected component is a collection of pixels that are connected to each other. of course, this only works on a binary image (i.e. on or off, or 0 and 255, as in our case). you can implement this in C# (i have done it before), but i will not do it for you here; it is a bit involved. again, see OpenCV or google 'connected components algorithm'.
after you have the list of connected components, draw a box around each one. done.
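just to show the idea behind option c), here is a minimal, unoptimised flood-fill labelling sketch for 4-connectivity (all names are mine; a real two-pass algorithm or OpenCV will be faster):

```csharp
using System;
using System.Collections.Generic;

static class ConnectedComponents
{
    // labels 4-connected regions of 255 pixels in a binary mask.
    // returns one int label per pixel: 0 = background, 1..n = component id.
    public static int[] Label(byte[] mask, int width, int height)
    {
        var labels = new int[width * height];
        int next = 0;
        var stack = new Stack<int>();
        for (int i = 0; i < mask.Length; i++)
        {
            if (mask[i] != 255 || labels[i] != 0) continue;
            next++;               // start a new component at this seed pixel
            labels[i] = next;
            stack.Push(i);
            while (stack.Count > 0)
            {
                int p = stack.Pop();
                int x = p % width, y = p / width;
                // visit the 4 neighbours that are foreground and unlabelled
                foreach (int q in Neighbours(x, y, width, height))
                {
                    if (mask[q] == 255 && labels[q] == 0)
                    {
                        labels[q] = next;
                        stack.Push(q);
                    }
                }
            }
        }
        return labels;
    }

    static IEnumerable<int> Neighbours(int x, int y, int width, int height)
    {
        if (x > 0) yield return y * width + (x - 1);
        if (x < width - 1) yield return y * width + (x + 1);
        if (y > 0) yield return (y - 1) * width + x;
        if (y < height - 1) yield return (y + 1) * width + x;
    }
}
```

once you have the labels, one min/max pass per label gives you the box for each component.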