Create and transfer a custom media stream with WebRTC

I want to use a canvas element as the media source for a WebRTC video stream; any directions would be useful. I've been browsing the web without finding many resources that discuss this topic.

* Long background story *

Problem: I can't send the video from the camera directly; part of the requirements is that I process the video (with some image-processing tools, out of scope for this question) before displaying it.

Previously, in another peer-to-peer browser application, instead of displaying the video directly with a <video> element, I did some processing on a hidden canvas element and then copied the result to another canvas (using setTimeout to keep drawing, which gave the illusion of live video).
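Roughly, that drawing loop looked like the sketch below (names like process() and the element ids are placeholders for the actual code):

    // Assumption: `video` is a playing <video> element whose metadata has loaded,
    // and a visible canvas with id "display" exists in the page.
    var video = document.getElementById('camera');
    var hidden = document.createElement('canvas');
    hidden.width = video.videoWidth;
    hidden.height = video.videoHeight;
    var hiddenCtx = hidden.getContext('2d');
    var visibleCtx = document.getElementById('display').getContext('2d');

    function draw() {
      hiddenCtx.drawImage(video, 0, 0);                 // grab the current frame
      var frame = hiddenCtx.getImageData(0, 0, hidden.width, hidden.height);
      process(frame);                                   // hypothetical processing step
      visibleCtx.putImageData(frame, 0, 0);             // copy the result out
      setTimeout(draw, 1000 / 30);                      // ~30 redraws/s: the "live" illusion
    }
    draw();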

Now the client wants the processing done before the video is transferred, so I used WebRTC to transfer the audio stream directly (previously both audio and video were sent over WebRTC). For the video stream I had two solutions:

Steps:

  • Process the video on the local peer and draw it to a hidden canvas. The easy part.

  • Use a timeout to repeatedly capture the image data and transfer it (see the sketch after this list):
    a) using WebSockets (yes, it goes through the server), which came with terrible lag and occasional browser crashes;
    b) using RTCDataChannel, which performed much better but still sometimes failed for no apparent reason. I also had a few other problems (e.g. extra bandwidth used by sending JPEG instead of WebP).
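Option (b) boils down to a loop like this sketch, assuming hidden is the processed canvas from the first step and dataChannel is an RTCDataChannel created elsewhere:

    // Capture the processed canvas as JPEG and push it over the data channel.
    function sendFrame() {
      // 'image/webp' would be smaller, but is not supported in every browser.
      var frame = hidden.toDataURL('image/jpeg', 0.6);
      if (dataChannel.readyState === 'open') {
        dataChannel.send(frame);
      }
      setTimeout(sendFrame, 1000 / 15);                 // aim for ~15 FPS
    }
    sendFrame();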

Another important issue: since I use a timeout, the frame rate on the other side drops whenever I switch tabs.
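Background tabs clamp setTimeout to roughly one tick per second, which is what causes that drop. One possible mitigation (untested here; worker throttling varies by browser) is to pace the loop from a Web Worker timer instead:

    // worker.js — a plain ticker; worker timers are throttled far less in hidden tabs
    setInterval(function () {
      postMessage('tick');
    }, 1000 / 15);

    // main.js — let the worker pace the capture loop sketched above
    var ticker = new Worker('worker.js');
    ticker.onmessage = function () {
      sendFrame();  // the capture-and-send step, minus its own setTimeout
    };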

So, is there a way to use a hidden canvas as a media source instead of doing all of this manually?

2 answers

mozCaptureStreamUntilEnded will be the basis for a proposal Martin Thomson is taking through the WG to connect canvases directly to MediaStreams. The Firefox workaround mentioned in the comments here is mozCaptureStreamUntilEnded on a <video> element fed from a canvas-captured MediaStream. An ugly sequence, which is part of why we plan to allow direct output of a <canvas> to a MediaStream (as well as standardizing captureStream).

Note that feeding mozCaptureStream(UntilEnded) into a PeerConnection has been broken for some time (in part because it is still non-standard); it is fixed in Firefox 36 (expected on the release channel in about 6 weeks; it reaches Beta next week). See Bug 1097224 and Bug 1081409.

An incredibly hacky approach that works in Chrome and Firefox is to put the video in a window and then screen-capture that window. I don't recommend it, since it requires screen-sharing permission, window selection, etc.

The only other option for Chrome (or Firefox) is to encode the video frames as JPEG (as you mention) and send them over a DataChannel. Effectively Motion-JPEG, but done in JS. Quality and bandwidth (and delay) will suffer. You may want to use an unreliable channel, since on an error you can just drop the frame and decode the next one (it is MJPEG, after all). Also, if the delay gets too high, reduce the frame size! You'll want to estimate end-to-end delay; the best way is to feed the decode time back to the sender over the data channel and use the receive time of that packet to estimate the delay. You care more about changes in delay than absolute values!
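A sketch of those suggestions (all names illustrative; pc is the RTCPeerConnection on each side, and decodeAndDraw() is a hypothetical JPEG-decoding helper):

    // --- Sender side ---
    // Unreliable, unordered channel: a lost frame is simply skipped, MJPEG-style.
    var frameChannel = pc.createDataChannel('mjpeg', {
      ordered: false,
      maxRetransmits: 0          // never retransmit a stale frame
    });

    // Tag every frame with its send time.
    function sendTaggedFrame(jpegDataUrl) {
      frameChannel.send(JSON.stringify({ sentAt: Date.now(), frame: jpegDataUrl }));
    }

    // The receiver echoes each tag back; the echo's arrival time gives a
    // round-trip estimate using only the sender's clock.
    frameChannel.onmessage = function (event) {
      var delay = Date.now() - JSON.parse(event.data).sentAt;
      // If `delay` keeps climbing, reduce the frame size or rate.
    };

    // --- Receiver side ---
    pc.ondatachannel = function (event) {
      var channel = event.channel;
      channel.onmessage = function (e) {
        var msg = JSON.parse(e.data);
        decodeAndDraw(msg.frame);                              // hypothetical helper
        channel.send(JSON.stringify({ sentAt: msg.sentAt }));  // echo the tag
      };
    };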


A probable solution was found, at least for Firefox: capture a stream directly from the canvas with canvas.captureStream() and send it over the peer connection.

    // Find the canvas element to capture
    var canvasElt = document.getElementsByTagName("canvas")[0];

    // Get the stream
    var stream = canvasElt.captureStream(25); // 25 FPS

    // Do things to the stream
    // E.g. send it to another computer using an RTCPeerConnection
    //      pc is an RTCPeerConnection created elsewhere
    pc.addStream(stream);
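Note that addStream() has since been deprecated in the WebRTC spec; in a modern browser you would add the stream's tracks individually:

    // Modern replacement for the deprecated pc.addStream(stream):
    stream.getTracks().forEach(function (track) {
      pc.addTrack(track, stream);
    });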
