Well, the image data you get back from the canvas via "getImageData" is just raw RGBA pixel information: red, green, blue and alpha (transparency) values. So you can grab the image data and iterate over it, looking at four values (one pixel) at a time. Whenever you see pure white, simply zero it out, alpha included, so that pixel becomes transparent.
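For example, a minimal sketch of that loop could look like this (the removeWhite name and the exact 255/255/255 test are illustrative, not from the original):

```js
// Minimal sketch: make pure-white pixels in a canvas transparent.
function removeWhite(canvas) {
  const ctx = canvas.getContext('2d');
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const data = imageData.data; // flat array: [r, g, b, a, r, g, b, a, ...]

  // Step through the buffer four values at a time -- one pixel per step.
  for (let i = 0; i < data.length; i += 4) {
    if (data[i] === 255 && data[i + 1] === 255 && data[i + 2] === 255) {
      // Zero out the pixel, alpha included, so it becomes transparent.
      data[i] = data[i + 1] = data[i + 2] = data[i + 3] = 0;
    }
  }

  ctx.putImageData(imageData, 0, 0);
}
```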
The catch is that you probably won't be satisfied with the result, because there will still be a "halo" around the non-white elements. The original image is (most likely) anti-aliased, i.e. slightly blurred along the edges of the colored areas. So the edge pixels are lighter than the main image but not pure white, and they stay visible even after you remove all the white pixels.
Really cleaning up those edges is fairly hard, and how hard depends on what kind of source images you have. I wouldn't call it advanced image processing, but it's not trivial either.
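As a rough illustration of why it only gets you part of the way there, here is a tolerance-based tweak (the fadeNearWhite name and the 240 threshold are assumptions, not something the answer prescribes) that fades near-white edge pixels instead of leaving them opaque; it softens the halo but does not remove it cleanly:

```js
// Rough sketch: fade "almost white" pixels toward transparency.
// The closer a pixel is to pure white, the more transparent it becomes.
function fadeNearWhite(imageData, threshold = 240) {
  const data = imageData.data;
  for (let i = 0; i < data.length; i += 4) {
    const min = Math.min(data[i], data[i + 1], data[i + 2]);
    if (min >= threshold) {
      // min === 255 gives alpha 0; min === threshold keeps alpha 255.
      data[i + 3] = Math.round(255 * (255 - min) / (255 - threshold));
    }
  }
  return imageData;
}
```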