I wrote an OpenGL game and I want to enable remote play of it through a canvas element. The input side is simple; the video side is the hard part.
What I'm doing right now is launching the game through Node.js; in my rendering cycle I write a base64-encoded stream of raster data for the current frame to stdout. Each base64 frame is sent over a WebSocket to the client page and rendered, painstakingly slowly, pixel by pixel. Obviously, this doesn't scale.
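For reference, a minimal sketch of that pipeline, assuming the `ws` package, newline-delimited base64 frames on the game's stdout, raw RGBA raster data, and placeholder names (`./game`, `screen`, the frame dimensions):

```js
// server.js -- relay base64 frames from the game's stdout to all clients.
const { spawn } = require('child_process');
const WebSocket = require('ws');

const game = spawn('./game');                 // hypothetical game binary
const wss = new WebSocket.Server({ port: 8080 });

let pending = '';
game.stdout.on('data', (chunk) => {
  pending += chunk.toString();
  let nl;
  while ((nl = pending.indexOf('\n')) !== -1) {
    const frame = pending.slice(0, nl);       // one base64-encoded frame
    pending = pending.slice(nl + 1);
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) client.send(frame);
    }
  }
});
```

```js
// client.js -- decode each base64 frame to raw RGBA pixels and blit it
// with putImageData (this is the slow path described above).
const WIDTH = 640, HEIGHT = 480;              // assumed frame dimensions
const ctx = document.getElementById('screen').getContext('2d');
const ws = new WebSocket('ws://localhost:8080');

ws.onmessage = (evt) => {
  const raw = atob(evt.data);                 // base64 -> binary string
  const pixels = new Uint8ClampedArray(raw.length);
  for (let i = 0; i < raw.length; i++) pixels[i] = raw.charCodeAt(i);
  const imageData = ctx.createImageData(WIDTH, HEIGHT); // RGBA buffer
  imageData.data.set(pixels);
  ctx.putImageData(imageData, 0, 0);
};
```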
I'm starting to think about creating a proper video stream instead, which I could then easily display on the canvas via a `<video>` element (à la http://mrdoob.github.com/three.js/examples/materials_video.html ).
The problem with this idea is that I don't know enough about codecs/streams to tell, even at a high level, whether this is feasible. I'm not even sure the codec is the right layer to worry about: I need support for dynamically generated content and very low latency, ideally only a few frames of buffering.
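The display half of this idea, at least, is simple: paint a playing `<video>` element onto the canvas every animation frame, as the linked three.js example does with a video texture. A sketch (the stream URL is a placeholder; producing a live, low-latency stream the browser will actually play is exactly the part I'm unsure about):

```js
const video = document.createElement('video');
video.src = 'stream.webm';                    // placeholder stream URL
video.autoplay = true;
video.muted = true;

const canvas = document.getElementById('screen');
const ctx = canvas.getContext('2d');

(function draw() {
  // Only draw once the video has a decodable frame available.
  if (video.readyState >= video.HAVE_CURRENT_DATA) {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  }
  requestAnimationFrame(draw);
})();
```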
Other ideas that I had:
- Creating an HTMLImageElement from each base64 frame (first sketch below)
- Optimizing compression and redrawing only changed regions so that pixel throughput drops substantially (second sketch below), though it seems unrealistic to reach the 20+ fps I need this way.
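The first idea would look roughly like this, assuming each frame arrives as a base64-encoded compressed image (PNG/JPEG) rather than raw raster data, so the browser's native decoder does the heavy lifting:

```js
// ws and ctx as in the earlier client sketch.
ws.onmessage = (evt) => {
  const img = new Image();
  img.onload = () => ctx.drawImage(img, 0, 0);
  img.src = 'data:image/png;base64,' + evt.data;
};
```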
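And a sketch of the second idea, with a made-up message format in which the server sends only the rectangles that changed since the previous frame, each as `{ x, y, w, h, data }` with base64 RGBA pixels for that rectangle alone:

```js
ws.onmessage = (evt) => {
  const rect = JSON.parse(evt.data);          // hypothetical dirty-rect message
  const raw = atob(rect.data);
  const pixels = new Uint8ClampedArray(raw.length);
  for (let i = 0; i < raw.length; i++) pixels[i] = raw.charCodeAt(i);
  const imageData = ctx.createImageData(rect.w, rect.h);
  imageData.data.set(pixels);
  ctx.putImageData(imageData, rect.x, rect.y); // blit only the changed region
};
```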
Then there is always the option of switching to Flash... but I'd prefer to avoid that. I'm looking for opinions on which technologies are worth pursuing here. Any ideas?