Can I transfer microphone sound from client to client using Node.js?

I am trying to create a live voice chat. While the client holds a button and speaks, I want the sound to be transmitted over a socket to the Node.js server, and then relayed to another client.

here is the sender client code:

    socket.on('connect', function() {
        var session = { audio: true, video: false };
        navigator.getUserMedia(session, function(stream) {
            var audioInput = context.createMediaStreamSource(stream);
            var bufferSize = 2048;
            recorder = context.createScriptProcessor(bufferSize, 1, 1);
            recorder.onaudioprocess = onAudio;
            audioInput.connect(recorder);
            recorder.connect(context.destination);
        }, function(e) { });

        function onAudio(e) {
            if (!broadcast) return; // `broadcast` is true while the button is held
            var mic = e.inputBuffer.getChannelData(0);
            var converted = convertFloat32ToInt16(mic);
            socket.emit('broadcast', converted);
        }
    });
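The snippet relies on a `convertFloat32ToInt16` helper (and the receiver below on its inverse) that is not shown. A minimal sketch of how these helpers could look, assuming plain sample-by-sample scaling between the Web Audio float range [-1, 1] and signed 16-bit integers:

```javascript
// Hypothetical helpers, not from the original post: convert Web Audio
// Float32 samples ([-1, 1]) to Int16 ([-32768, 32767]) and back.
function convertFloat32ToInt16(buffer) {
  var out = new Int16Array(buffer.length);
  for (var i = 0; i < buffer.length; i++) {
    // Clamp to [-1, 1], then scale to the signed 16-bit range.
    var s = Math.max(-1, Math.min(1, buffer[i]));
    out[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
  }
  return out;
}

function convertInt16ToFloat32(buffer) {
  var out = new Float32Array(buffer.length);
  for (var i = 0; i < buffer.length; i++) {
    var s = buffer[i];
    out[i] = s < 0 ? s / 0x8000 : s / 0x7FFF;
  }
  return out;
}
```

Halving the payload this way is lossy but usually inaudible for speech; sending the raw Float32 data would double the bandwidth.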

Then the server receives this buffer and passes it on to another client (in this example, back to the same client).

Server code

    socket.on('broadcast', function(buffer) {
        // echo the audio chunk back (same client in this example)
        socket.emit('broadcast', new Int16Array(buffer));
    });
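To actually reach *other* clients rather than echo to the sender, the server has to route each chunk to everyone except its origin (with socket.io this is exactly what `socket.broadcast.emit` does). A stand-alone sketch of that routing logic, using a plain client list with illustrative names so the behavior is easy to see:

```javascript
// Sketch (not the original server code): relay each audio chunk to every
// connected client except the sender. With socket.io the equivalent is:
//   socket.broadcast.emit('broadcast', buffer);
var clients = [];

function addClient(client) {
  clients.push(client);
}

function relay(sender, buffer) {
  clients.forEach(function(client) {
    if (client !== sender) {
      client.emit('broadcast', buffer); // skip the originating client
    }
  });
}
```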

And then, to play the sound from the other side (receiver), the client code looks like this:

    socket.on('broadcast', function(raw) {
        var buffer = convertInt16ToFloat32(raw);
        var src = context.createBufferSource();
        // NOTE: use buffer.length (sample frames), not buffer.byteLength
        // (bytes); byteLength is 4x too large for a Float32Array, so the
        // AudioBuffer would be mostly silence.
        var audioBuffer = context.createBuffer(1, buffer.length, context.sampleRate);
        audioBuffer.getChannelData(0).set(buffer);
        src.buffer = audioBuffer;
        src.connect(context.destination);
        src.start(0);
    });
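Even once each chunk plays, starting every buffer with `src.start(0)` causes clicks and gaps, because chunks either overlap or start late. A common fix is to schedule buffers back-to-back on the AudioContext clock. A sketch of that scheduling, with a pure helper (names like `playbackTime` are illustrative, not from the original):

```javascript
// Sketch: schedule each incoming chunk to start exactly where the
// previous one ends, using the AudioContext clock.
var playbackTime = 0; // AudioContext time at which the next chunk should start

// Never schedule in the past; otherwise continue where the last chunk ended.
function nextStartTime(currentTime, scheduledEnd) {
  return Math.max(currentTime, scheduledEnd);
}

// Inside the receiver handler, instead of src.start(0):
//   var when = nextStartTime(context.currentTime, playbackTime);
//   src.start(when);
//   playbackTime = when + audioBuffer.duration;
```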

My expected result is that sound from client A will be heard by client B. I can see the buffer on the server, and I see the buffer arrive back in the client, but I don't hear anything.

I know that socket.io 1.x supports binary data, but I cannot find any example of voice chat. I also tried BinaryJS, with the same result. I also know this would be a simple task with WebRTC, but I don't want to use WebRTC. Can someone point me to a good resource or tell me what I am missing?

1 answer

I built something like this myself a few weeks ago. Problems I ran into (and you will too at some point):

  • Too much data to send over the internet without reducing the bitrate and sample rate.
  • Poor sound quality without interpolation or better audio compression.
  • Even if it is not obvious at first, different sound cards deliver different sample rates (my desktop = 48 kHz, my laptop = 32 kHz), which means you need to write a resampler.
  • WebSocket runs over TCP: every audio packet will reach its destination, but under poor network conditions packets queue up and arrive several at a time. (You would need a UDP proxy on each client if you want UDP.)
  • WebRTC lowers the sound quality when a bad internet connection is detected. You cannot do that here, because that is exactly the low-level machinery you would have to build yourself!
  • You need an efficient implementation, because otherwise JS will block your frontend → use Web Workers.
  • Audio codecs compiled to JS are very slow, and you will get unexpected results (see an audio-codec question of mine: here). I tried Opus, but no good results so far.
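The resampler mentioned in the list above can start out as simple linear interpolation between neighboring samples. A naive sketch (function name and rates are illustrative; real-world code would use a proper low-pass filter to avoid aliasing):

```javascript
// Naive linear-interpolation resampler: converts a Float32Array of audio
// samples from one sample rate to another (e.g. 32 kHz laptop input to a
// 48 kHz receiver). Illustrative sketch, not production quality.
function resample(input, fromRate, toRate) {
  var ratio = fromRate / toRate;
  var outLength = Math.round(input.length / ratio);
  var output = new Float32Array(outLength);
  for (var i = 0; i < outLength; i++) {
    var pos = i * ratio;          // fractional position in the input
    var idx = Math.floor(pos);
    var frac = pos - idx;
    var a = input[idx];
    var b = idx + 1 < input.length ? input[idx + 1] : a; // clamp at the end
    output[i] = a + (b - a) * frac; // interpolate between neighbors
  }
  return output;
}
```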

I am no longer working on this project, but you can get the code here: https://github.com/cracker0dks/nodeJsVoip

and a working example: (link removed) of multi-user VoIP audio. (It no longer works; the WebSocket server is down!) If you go to Settings > Audio on the page, you can select a higher bitrate and sample rate for better sound quality.

EDIT: Can you tell me why you don't want to use WebRTC?


Source: https://habr.com/ru/post/989448/

