I'm currently working on a project involving a screen with black, white and transparent pixels. (This is an open-source project: http://code.google.com/p/super-osd ; the 256x192 pixel set/clear on-screen menu is what's currently in development, but I'll be porting it to a white/black/transparent screen.)
Since each pixel is either black, white or transparent, I can use a simple 2-bit/4-state encoding where I store a black/white selection and a transparency selection. So I would have a truth table like this (x = don't care):
B/W  T
 x   0   pixel is transparent
 0   1   pixel is black
 1   1   pixel is white
However, as can be clearly seen, this wastes one bit when the pixel is transparent. I'm designing for a memory-limited microcontroller, so whenever I can save memory, that's good.
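For reference, this is roughly what I'm doing now (just a sketch in C, not the actual project code; the PIXEL_* names and bit assignments are my own illustration):

    /* Simple 2-bit encoding: bit 0 = transparency flag, bit 1 = black/white.
       Four pixels per byte, i.e. 2 bits per pixel. */
    enum { PIXEL_TRANSPARENT = 0, PIXEL_BLACK = 1, PIXEL_WHITE = 2 };

    unsigned char encode2(unsigned char pixel)   /* one of the enum values */
    {
        switch (pixel) {
        case PIXEL_BLACK: return 0x1;  /* T = 1, B/W = 0 */
        case PIXEL_WHITE: return 0x3;  /* T = 1, B/W = 1 */
        default:          return 0x0;  /* T = 0, B/W is "don't care" */
        }
    }

    unsigned char decode2(unsigned char bits)    /* the 2-bit code */
    {
        if ((bits & 0x1) == 0) return PIXEL_TRANSPARENT;
        return (bits & 0x2) ? PIXEL_WHITE : PIXEL_BLACK;
    }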
So I'm trying to think of a way to pack these three states into some larger unit (say, a byte.) I'm open to using lookup tables to decode and encode the data, so a complex algorithm can be used, but it cannot depend on the pixel states before or after the current byte/block (this rules out any proper data compression algorithm), and the size must be consistent; that is, a scene of all transparent pixels must take the same amount of space as a scene of random noise. I was imagining something along the lines of densely packed decimal, which packs three 4-bit (0-9) BCD digits into only 10 bits with something like 24 of the 1024 states left over, which is great. Does anyone have any ideas?
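To make that concrete, the sort of thing I'm picturing (untested, just to illustrate the idea) is packing five 3-state pixels into one byte as a base-3 number, since 3^5 = 243 <= 256. The % and / could presumably be replaced by a 243-entry lookup table on the microcontroller:

    /* Pack five 3-state pixels (values 0..2) into one byte as a base-3 number.
       3^5 = 243 <= 256, so every group of 5 pixels always fits in one byte. */
    unsigned char pack5(const unsigned char p[5])
    {
        return (unsigned char)(p[0] + 3 * (p[1] + 3 * (p[2] + 3 * (p[3] + 3 * p[4]))));
    }

    /* Unpack one byte back into five 3-state pixel values. */
    void unpack5(unsigned char packed, unsigned char p[5])
    {
        int i;
        for (i = 0; i < 5; i++) {
            p[i] = packed % 3;   /* least significant base-3 digit first */
            packed /= 3;
        }
    }

That would be 1.6 bits per pixel instead of 2, so 256x192 = 49152 pixels would take about 9831 bytes instead of 12288. But maybe there's something better, or a cheaper way to encode/decode.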
Any suggestions? Thanks!