I have been experimenting with Huffman compression to reduce image size without any loss, but I also read that you can apply a smarter pre-encoding step that lowers the entropy of the image data, so that it compresses further.
From what I understand, in the lossless JPEG standard each pixel is predicted as a (weighted) average of the 4 adjacent pixels already encountered in raster order (three above and one to the left). For example, the value of pixel a is predicted from the previous pixels x to its left and above it:
xxx
xa
Then calculate and encode the residual (the difference between the predicted and the actual value).
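For one interior pixel, my understanding of the arithmetic is roughly this (the numbers are made up; this is only an illustration of what I think happens):

// Made-up values for one pixel and its four already-seen neighbours
int a = 100, b = 102, c = 101;       // the three pixels above
int d = 99;                          // the pixel to the left
int pixel = 103;                     // the actual value being encoded

int sum = a + b + c + d;             // 402 - not a multiple of 4
int prediction = sum / 4;            // integer division gives 100, the remainder is lost
int residual = prediction - pixel;   // -3, this is what gets entropy-coded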
But here is what I am not getting: if the sum of the 4 neighbouring pixels is not a multiple of 4, the average is not an integer, so don't you end up with a fraction? Should that fraction simply be discarded? If so, the encoding of an 8-bit image (stored in a byte[]) should look something like this:
public static void Encode(byte[] buffer, int width, int height)
{
    // Work from a copy so that predictions always use the original pixel values.
    var tempBuff = new byte[buffer.Length];
    for (int i = 0; i < buffer.Length; i++)
    {
        tempBuff[i] = buffer[i];
    }

    // The first row and the first/last columns are left untouched,
    // so the decoder has known values to start from.
    for (int i = 1; i < height; i++)
    {
        for (int j = 1; j < width - 1; j++)
        {
            int offsetUp = ((i - 1) * width) + (j - 1);
            int offset = (i * width) + (j - 1);

            int a = tempBuff[offsetUp];        // above-left
            int b = tempBuff[offsetUp + 1];    // above
            int c = tempBuff[offsetUp + 2];    // above-right
            int d = tempBuff[offset];          // left
            int pixel = tempBuff[offset + 1];  // current pixel

            var ave = (a + b + c + d) / 4;     // integer division, the fraction is dropped
            var val = (byte)(ave - pixel);     // residual, wraps modulo 256
            buffer[offset + 1] = val;
        }
    }
}

public static void Decode(byte[] buffer, int width, int height)
{
    // Decoding runs in place: by the time a pixel is reached, its neighbours
    // have already been restored, so the same prediction can be rebuilt.
    for (int i = 1; i < height; i++)
    {
        for (int j = 1; j < width - 1; j++)
        {
            int offsetUp = ((i - 1) * width) + (j - 1);
            int offset = (i * width) + (j - 1);

            int a = buffer[offsetUp];          // above-left (already decoded)
            int b = buffer[offsetUp + 1];      // above
            int c = buffer[offsetUp + 2];      // above-right
            int d = buffer[offset];            // left
            int pixel = buffer[offset + 1];    // stored residual

            var ave = (a + b + c + d) / 4;
            var val = (byte)(ave - pixel);     // prediction minus residual restores the pixel
            buffer[offset + 1] = val;
        }
    }
}
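To convince myself that the scheme is actually lossless, I round-trip a copy and compare (just my own check; image, width and height are whatever the caller has, and SequenceEqual needs using System.Linq;):

byte[] original = (byte[])image.Clone();        // keep the untouched pixels
Encode(image, width, height);                   // image now holds residuals (borders unchanged)
Decode(image, width, height);                   // should restore the original pixels in place
bool lossless = original.SequenceEqual(image);  // expected to be true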
I don't see how this really reduces the entropy, though. How does it help me compress my images further, without loss?
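To try to see the effect for myself, I compare the Shannon entropy of the byte histogram before and after Encode. This helper is just my own measurement sketch (it needs using System;), not something taken from the JPEG standard:

public static double Entropy(byte[] buffer)
{
    // Count how often each byte value occurs.
    var counts = new long[256];
    foreach (var b in buffer)
        counts[b]++;

    // Shannon entropy in bits per symbol: -sum(p * log2(p)).
    double entropy = 0.0;
    foreach (var count in counts)
    {
        if (count == 0) continue;
        double p = (double)count / buffer.Length;
        entropy -= p * Math.Log(p, 2);
    }
    return entropy;
}

// Usage, assuming 'image' holds the raw 8-bit pixels:
// double before = Entropy(image);
// Encode(image, width, height);
// double after = Entropy(image);   // lower when neighbouring pixels are strongly correlated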
Thanks for any enlightenment.
EDIT:
So, after playing with predictive coding on a few images, I noticed that the histogram of the residuals is heavily concentrated around 0, +1 and -1 for different kinds of pictures. In some cases this does reduce the entropy. Here is a screenshot (the way I build the histogram is sketched below it):

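For reference, this is roughly how I build that histogram. I reinterpret the stored bytes as signed values so that -1 and +1 land next to 0; the helper is my own, not part of the codec:

public static int[] ResidualHistogram(byte[] encoded)
{
    // Index 0..255 corresponds to residual values -128..127.
    var histogram = new int[256];
    foreach (var b in encoded)
    {
        int signed = (sbyte)b;     // e.g. 0xFF is counted as -1
        histogram[signed + 128]++;
    }
    return histogram;
}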