I am trying to stream webcam video to other clients in real time, but I am having problems when a viewer starts watching in the middle of the stream.
To do this, I capture the webcam stream using getUserMedia (and all its prefixed siblings).
Then, when a button is clicked, I start recording the stream and send each segment/fragment/whatever you call it to the server over the broadcaster's WebSocket:
var mediaRecorder = new MediaRecorder(stream);
mediaRecorder.ondataavailable = function (event) {
    uploadVideoSegment(event);
};
mediaRecorder.start(1000);
On the server side (a Web API using Microsoft.Web.WebSockets) I receive the byte[], which works fine.
Then I send the byte[] to the viewers currently connected to the broadcaster, read it in the socket's onmessage event using a FileReader, and append the resulting Uint8Array to the SourceBuffer of a MediaSource object, which is the src of an HTML5 video element.
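For reference, the buffering part of the viewer side can be sketched like this. SourceBuffer.appendBuffer throws an InvalidStateError if it is called while a previous append is still being processed, so incoming segments have to be queued and flushed one at a time (this is a minimal hypothetical helper, not the code from the question; the wiring to MediaSource and the WebSocket is assumed):

```javascript
// Minimal append queue for a MediaSource SourceBuffer.
// appendBuffer() must not be called while sourceBuffer.updating is
// true, so segments arriving from the socket are queued and the
// queue is drained on every 'updateend' event.
function AppendQueue(sourceBuffer) {
  this.sourceBuffer = sourceBuffer;
  this.queue = [];
  var self = this;
  sourceBuffer.addEventListener('updateend', function () {
    self.flush();
  });
}

// Called from the WebSocket onmessage handler with a Uint8Array.
AppendQueue.prototype.push = function (uint8) {
  this.queue.push(uint8);
  this.flush();
};

AppendQueue.prototype.flush = function () {
  if (this.queue.length > 0 && !this.sourceBuffer.updating) {
    this.sourceBuffer.appendBuffer(this.queue.shift());
  }
};
```

In the browser you would construct it with the real SourceBuffer obtained from mediaSource.addSourceBuffer(...) and call push() from onmessage.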
When a viewer receives the byte[]s from the very beginning — in particular the first 126 bytes, which start with the EBML header (0x1A45DFA3) and run up to the start of the first Cluster (0x1F43B675) — followed by the rest of the media, it plays fine.
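The split described above — everything from the EBML header up to the first Cluster is the initialization data a late joiner is missing — can be sketched as a simple byte scan (a hypothetical helper for illustration, not the parser mentioned later in the question):

```javascript
// Scan a WebM byte stream for the first Cluster element ID
// (0x1F 0x43 0xB6 0x75) and return the bytes before it: the EBML
// header plus Segment info and Tracks. A viewer joining mid-stream
// needs these bytes appended to its SourceBuffer before any media.
function extractInitSegment(bytes) {
  for (var i = 0; i + 3 < bytes.length; i++) {
    if (bytes[i] === 0x1F && bytes[i + 1] === 0x43 &&
        bytes[i + 2] === 0xB6 && bytes[i + 3] === 0x75) {
      return bytes.subarray(0, i);
    }
  }
  return null; // no Cluster found in this fragment
}
```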
The problem occurs when a new viewer joins in the middle and starts receiving from the second fragment or later.
I tried to research this and poke at it in several ways. I understand that a header is needed ( http://www.slideshare.net/mganeko/media-recorder-and-webm ) and that there are issues around keyframes and so on, but I got very confused.
So far I have tried writing my own simple WebM parser in C# (ported from the node.js project on GitHub — https://github.com/mganeko/wmls ). I split the header off the first fragment, saved it, and tried to prepend it to each later fragment before sending. Of course, that did not work.
I think MediaRecorder may be cutting a Cluster in the middle when the ondataavailable event fires (I noticed that the second fragment does not start with a Cluster header).
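That suspicion is easy to check: the timeslice passed to mediaRecorder.start(1000) only controls when ondataavailable fires, not where the container is cut, so a chunk can begin mid-Cluster. A hypothetical check:

```javascript
// Returns true if a MediaRecorder chunk begins exactly on a WebM
// Cluster boundary (element ID 0x1F 0x43 0xB6 0x75). Chunks are cut
// by time, not by element, so for chunks after the first this is
// typically false - they start in the middle of a Cluster.
function startsWithCluster(bytes) {
  return bytes.length >= 4 &&
         bytes[0] === 0x1F && bytes[1] === 0x43 &&
         bytes[2] === 0xB6 && bytes[3] === 0x75;
}
```

Running this over each chunk (after converting the Blob with a FileReader) would confirm whether the fragments are Cluster-aligned.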
At that point I was stuck, not knowing how to use the parser to make it work.
Then I read about using ffmpeg to convert the WebM stream so that every frame is a keyframe — as suggested in "Encoding FFMPEG to MPEG-DASH — or WebM with Keyframe Clusters — for MediaSource API" (in Chris Nolet's answer).
I tried using FFMpegConverter (for .NET) like this:
var conv = new FFMpegConverter();
var outputStream = new MemoryStream();
var liveMedia = conv.ConvertLiveMedia("webm", outputStream, "webm",
    new ConvertSettings { VideoCodec = "vp8", CustomOutputArgs = "-g 1" });
liveMedia.Start();
liveMedia.Write(vs.RawByteArr, 0, vs.RawByteArr.Length);
I am not familiar with FFMPEG, so I am probably not passing the parameters correctly, although that is what I saw in the answer (they wrote it rather tersely).
Of course, I ran into plenty of problems: when using WebSockets, starting the FFMpegConverter simply forced the WebSocket channel to close. (I'd be glad if anyone could explain why.)
I didn't give up: I rewrote everything without WebSockets, using HTTP GET (to fetch segments from the server) and HTTP POST (with multipart blobs and all the rest, to upload the recorded fragments), and tried to use FFMpegConverter as described above.
For the first segment it worked, but it produced a byte[] half the length of the original (I'd be glad if someone could explain that too), and for the other fragments it threw an exception (each time, only once) saying that the pipe had ended.
I'm lost.
Please help me, anyone. My 4 main questions:
How can I play the fragments that follow the first one from MediaRecorder? (Currently the SourceBuffer's error/abort events fire and the sourceBuffer gets detached from its parent MediaSource (throwing an exception like "sourceBuffer was removed from its parent") because the byte[] passed to it is not valid. Maybe I am not correctly using the WebM parser I wrote to find the important parts of the second fragment — which, by the way, does not start with a Cluster; that is why I wrote that MediaRecorder seems to cut Clusters in the middle.)
Why does running FFMpeg cause the WebSocket to close?
Am I using FFMpegConverter.ConvertLiveMedia with the correct parameters to get a new WebM segment with all the information it needs to play as a standalone fragment, independent of the previous fragments (as Chris Nolet said in his answer in the SO link above)?
Why does FFMpegConverter throw the "pipe ended" exception?
Any help would be greatly appreciated.