I want to use a canvas element as the media source for a WebRTC video feed; any directions would be useful. Searching the web hasn't turned up many resources discussing this topic.
*Long background story*
Problem: I can't send the video from the camera directly; part of the requirements is that I process the video (with some image-processing tools, out of scope for this question) before displaying it.
Previously, on the other peer's browser, instead of directly displaying the video using a <video> element, I did some processing on a hidden canvas element and then copied the result to another canvas (I used setTimeout to keep drawing, which gave the illusion of live video).
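Roughly, that receive-side loop looked like this (a simplified sketch; the element ids and the processFrame function are placeholders for my actual setup):

```javascript
// Hidden canvas receives the raw <video> frame, processing happens there,
// then the result is copied to the visible canvas.
const video = document.getElementById('remoteVideo');      // placeholder id
const hidden = document.getElementById('hiddenCanvas');    // off-screen work canvas
const visible = document.getElementById('visibleCanvas');  // what the user sees
const hctx = hidden.getContext('2d');
const vctx = visible.getContext('2d');

function drawLoop() {
  // draw the current video frame onto the hidden canvas
  hctx.drawImage(video, 0, 0, hidden.width, hidden.height);
  // placeholder for the image-processing step
  processFrame(hctx);
  // copy the processed result to the visible canvas
  vctx.drawImage(hidden, 0, 0);
  // redraw at roughly 30 fps: the "illusion of live video"
  setTimeout(drawLoop, 33);
}
drawLoop();
```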
Now the client wants the processing to happen before the video is transferred, so I used WebRTC to transfer only the audio stream directly (previously both audio and video were sent via WebRTC). For the video stream, I had two solutions:
Steps:
Process the video on the local peer and draw it to a hidden canvas. The easy part.
Use a timeout to repeatedly capture the image data and transfer it (a minimal sketch of this capture-and-send loop follows the list): a) using WebSockets (yes, it goes through the server), which came with terrible lag and possible browser crashes;
b) using RTCDataChannel, which had much better performance but sometimes failed for no apparent reason. I also had a few other problems (e.g. extra bandwidth used because I sent JPEG instead of WebP).
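For reference, the capture-and-send side looked roughly like this (a simplified sketch; dataChannel is assumed to be an already-open RTCDataChannel, and the quality/interval values are only illustrative):

```javascript
const hidden = document.getElementById('hiddenCanvas'); // processed frames live here
// dataChannel: an RTCDataChannel opened elsewhere on the RTCPeerConnection

function sendFrame() {
  // serialize the current canvas contents; JPEG here, which is where the
  // extra bandwidth compared to WebP came from
  const frame = hidden.toDataURL('image/jpeg', 0.6);
  if (dataChannel.readyState === 'open') {
    dataChannel.send(frame);
  }
  // setTimeout is throttled in background tabs, hence the frame-rate drop
  setTimeout(sendFrame, 100); // ~10 fps
}
sendFrame();
```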
Another important issue: since I use a timeout, when I switch tabs the frame rate drops on the receiving side (browsers throttle timers in background tabs).
So, is there a way to use a hidden canvas directly as a media source, instead of doing all of this manually?
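Ideally I am hoping for something along these lines, where the canvas itself feeds a video track on the peer connection (whether an API like this exists and is widely supported is exactly what I am asking; captureStream and the 30 fps value below are just my guess at how it might look):

```javascript
// Hypothetical usage: treat the processed hidden canvas as a live video source.
const hidden = document.getElementById('hiddenCanvas');
const pc = new RTCPeerConnection();

// capture the canvas as a MediaStream at ~30 fps and send its video track
const canvasStream = hidden.captureStream(30);
canvasStream.getVideoTracks().forEach(track => pc.addTrack(track, canvasStream));
```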