There are two sides to my application. On one side, I use C++ to read frames from the camera using the Pleora eBUS SDK. When the stream is first received, before I convert the buffer to an image, I can read it 16 bits at a time to do some calculations for each pixel, i.e. there is a 16-bit piece of data for each pixel.
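For context, that read pattern can be mirrored in Python: a raw mono16 buffer is just consecutive unsigned 16-bit integers, read two bytes at a time. A minimal sketch, assuming a made-up sample buffer and little-endian byte order:

```python
import struct

# Hypothetical 2x2 mono16 frame: four little-endian 16-bit pixel values.
raw = struct.pack('<4H', 100, 2000, 30000, 65535)

# Read the stream 16 bits (2 bytes) at a time, one value per pixel.
pixels = [struct.unpack_from('<H', raw, offset)[0]
          for offset in range(0, len(raw), 2)]
```

Here `pixels` comes back as `[100, 2000, 30000, 65535]`, one integer per 16-bit pixel.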
The second half is a Django web application, where I also present this video output, this time through an ffmpeg/nginx HLS stream. When the user clicks on the video, I want to take the current frame and the coordinates of the click and perform the same calculation as in the C++ part above.
Right now I am using an HTML5 canvas to capture the frame, and canvas.toDataURL() to convert the frame to a base64-encoded image; I then send the base64 image, click coordinates, and dimensions to Python via AJAX.
In Python, I am trying to manipulate this base64-encoded string in such a way as to get 16 bits per pixel. At the moment, I am doing the following:
import base64
import json

# Click position, base64 frame, and frame dimensions sent via AJAX.
pos = json.loads(request.GET['pos'])
str_frame = json.loads(request.GET['frame'])
dimensions = json.loads(request.GET['dimensions'])
# Row-major index of the clicked pixel.
pixel_index = (dimensions['width'] * pos['y']) + pos['x'] + 1
# Decode the base64 payload into raw bytes.
b64decoded_frame = base64.b64decode(str_frame.encode('utf-8'))
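A side note on the indexing above: for a raw 16-bit stream, a 0-based row-major index needs no `+ 1`, and each pixel occupies two bytes, so a click maps to a byte offset as sketched below (the `pixel_value` helper and the 3x2 sample frame are hypothetical):

```python
import struct

def pixel_value(raw, width, x, y):
    """Return the 16-bit value of pixel (x, y) in a raw row-major mono16 buffer."""
    pixel_index = y * width + x      # 0-based row-major index, no "+ 1"
    byte_offset = pixel_index * 2    # 16 bits = 2 bytes per pixel
    return struct.unpack_from('<H', raw, byte_offset)[0]

# Hypothetical 3x2 frame with distinct values per pixel.
frame = struct.pack('<6H', 10, 20, 30, 40, 50, 60)
```

For example, `pixel_value(frame, 3, 1, 1)` lands on index 4 and returns 50.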
However, b64decoded_frame has far fewer indices than there are pixels in the image, and the integer values are not as high as expected. I checked and the image is not corrupted, since I can save it as a PNG.
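One thing worth checking here: canvas.toDataURL() defaults to image/png, so the base64 payload decodes to a compressed PNG file rather than raw pixel values, which would explain both the short length and the unexpected values. A quick sanity check on the decoded bytes (the payload below is a stand-in for the real AJAX string):

```python
import base64

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

# Stand-in for the canvas.toDataURL() payload; a real one is a full PNG file.
str_frame = base64.b64encode(PNG_SIGNATURE + b'\x00' * 16).decode('ascii')

decoded = base64.b64decode(str_frame)
is_png = decoded.startswith(PNG_SIGNATURE)  # True for any toDataURL() PNG payload
```

If `is_png` comes back True for the real payload, the decoded bytes are a PNG container, not a 16-bit-per-pixel stream.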
To summarize: how can I convert a base64 image to a serialized binary stream where each pixel is represented by 16 bits?
UPDATE
I forgot to mention that I am using Python 3.2.
The camera stream's pixel format is mono16, so I expect the frame captured on the web side to contain mono16 data as well.