Playing a WAV file as it streams over a network connection in iOS

I work with a third-party API, which behaves as follows:

  • I need to connect to its URL and execute my request, which involves POSTing request data;
  • the remote server then sends back the corresponding WAV data a “chunk” at a time (which I receive in my NSURLConnectionDataDelegate didReceiveData callback).

By “chunk” I mean, for the sake of argument, an arbitrary “next batch” of data, with no guarantee that it corresponds to any meaningful boundary in the audio (for example, it may not be aligned to a whole number of audio frames; the number of bytes in each chunk is just an arbitrary value that can differ from chunk to chunk; and so on).

Now ... correct me if I am mistaken, but I can’t simply use AVAudioPlayer, because I need to POST to my URL, so I have to pull the data down “manually” via NSURLConnection.

So ... given the foregoing, what is the most painless way for me to play this audio as it comes over the wire? (I appreciate that I could concatenate all the byte arrays and hand the whole thing to AVAudioPlayer at the end, but that would delay the start of playback, since I would have to wait for all of the data.)

+6
1 answer

I will give a bird’s-eye view of the solution. I think this will help you on your way to a concrete, coded solution.

iOS provides a zoo of audio APIs, several of which can be used to play sound. Which one you choose depends on your specific requirements. As you already worked out yourself, the AVAudioPlayer class is not suitable for your case, because with it you would need to have all the audio data at the moment playback starts. Obviously that is not the case with streaming, so we should look for an alternative.

A good compromise between ease of use and versatility is Audio Queue Services, which is what I recommend. The other alternative would be Audio Units, but those are low-level APIs, correspondingly less intuitive to use, and they have many pitfalls. So stick with audio queues.

Audio queues let you define a callback function that the API calls whenever it needs more audio data to play — similar to the callback in your networking code, which is called when data arrives.
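Since Audio Queue Services is itself a C API, the pull model can be sketched in C. To be clear, this is a hedged sketch, not Apple’s code: the typedefs are minimal stand-ins for the real AudioToolbox types (only so the fragment is self-contained off-device), and `fifo_read` is a hypothetical buffer-read helper with a toy body.

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Stand-ins so this sketch compiles off-device; on iOS these types
 * come from <AudioToolbox/AudioQueue.h> instead. */
typedef struct {
    uint32_t mAudioDataBytesCapacity;
    void    *mAudioData;
    uint32_t mAudioDataByteSize;
} AudioQueueBuffer;
typedef AudioQueueBuffer *AudioQueueBufferRef;
typedef void *AudioQueueRef;

/* Hypothetical buffer-queue read: copies up to `max` bytes of buffered
 * network audio into `dst` and returns the count copied. The toy body
 * just hands out a fixed 4-byte test pattern. */
static const uint8_t demo_pcm[] = {1, 2, 3, 4};
size_t fifo_read(void *dst, size_t max) {
    size_t n = sizeof demo_pcm < max ? sizeof demo_pcm : max;
    memcpy(dst, demo_pcm, n);
    return n;
}

/* The playback callback: Audio Queue Services invokes a function of
 * this shape whenever it needs the next buffer refilled. */
void output_callback(void *user_data, AudioQueueRef aq,
                     AudioQueueBufferRef buf) {
    size_t got = fifo_read(buf->mAudioData, buf->mAudioDataBytesCapacity);
    buf->mAudioDataByteSize = (uint32_t)got;
    /* On iOS you would now hand the buffer back to the queue with
     * AudioQueueEnqueueBuffer(aq, buf, 0, NULL); */
    (void)user_data;
    (void)aq;
}
```

On iOS, you would register a real callback of this shape via AudioQueueNewOutput and keep a small pool of buffers cycling through it.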

The difficulty now lies in connecting the two callbacks: the one that delivers data and the one that requests it. For that you need a buffer. More precisely, a queue (don’t confuse this queue with the Audio Queue Services thing: that is the name of an API, whereas what I’m talking about here is a container object). For clarity, I will call it the buffer queue.

You fill the buffer queue from the network callback, which delivers data to you from the network, and you drain it from the audio callback, which Audio Queue Services calls whenever it needs more data.

You should find a buffer queue implementation that supports concurrent access (i.e. is thread-safe), because it will be touched by two different threads: the audio thread and the network thread. As an alternative to finding an existing thread-safe buffer implementation, you can take care of thread safety yourself, for instance by executing all code that touches the buffer queue on a dedicated dispatch queue (a third kind of queue here, yes; Apple and the IT world love them).
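As one possible shape for such a thread-safe buffer queue, here is a minimal sketch using portable pthreads rather than GCD; the names (`byte_fifo`, `fifo_push`, `fifo_pop`) are my own, not from any library. The network side blocks in `fifo_push` when the queue is full; `fifo_pop` never blocks, so on underrun the audio side simply receives fewer bytes than it asked for.

```c
#include <stdint.h>
#include <stdlib.h>
#include <pthread.h>
#include <assert.h>

/* A minimal thread-safe byte FIFO (ring buffer): the network callback
 * enqueues, the audio callback dequeues. Fixed capacity for brevity. */
typedef struct {
    uint8_t *buf;
    size_t cap, head, len;          /* head = index of oldest byte */
    pthread_mutex_t lock;
    pthread_cond_t not_full;
} byte_fifo;

void fifo_init(byte_fifo *f, size_t cap) {
    f->buf = malloc(cap);           /* sketch: no error handling */
    f->cap = cap;
    f->head = 0;
    f->len = 0;
    pthread_mutex_init(&f->lock, NULL);
    pthread_cond_init(&f->not_full, NULL);
}

/* Network side: block while the FIFO is full, then append. */
void fifo_push(byte_fifo *f, const uint8_t *src, size_t n) {
    pthread_mutex_lock(&f->lock);
    for (size_t i = 0; i < n; i++) {
        while (f->len == f->cap)
            pthread_cond_wait(&f->not_full, &f->lock);
        f->buf[(f->head + f->len) % f->cap] = src[i];
        f->len++;
    }
    pthread_mutex_unlock(&f->lock);
}

/* Audio side: take up to `max` bytes without blocking; returns the
 * number of bytes actually copied (possibly 0 on underrun). */
size_t fifo_pop(byte_fifo *f, uint8_t *dst, size_t max) {
    pthread_mutex_lock(&f->lock);
    size_t n = f->len < max ? f->len : max;
    for (size_t i = 0; i < n; i++) {
        dst[i] = f->buf[f->head];
        f->head = (f->head + 1) % f->cap;
    }
    f->len -= n;
    if (n)
        pthread_cond_broadcast(&f->not_full);  /* wake a blocked pusher */
    pthread_mutex_unlock(&f->lock);
    return n;
}
```

In an Objective-C app you would more likely reach for a serial GCD queue, but the contract is the same: every read and write of the container goes through one synchronization point.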

Now what happens if either

  • the audio callback is called and your buffer queue is empty, or

  • the network callback is called and your buffer queue is already full?

In both cases, the respective callback cannot do its job normally: the audio callback cannot deliver audio data if there is none, and the network callback cannot store incoming data if the buffer queue is full.

In these cases I would first try blocking further execution until more data arrives or, respectively, space becomes available to store the data. On the network side this will most likely work. On the audio side it may cause problems, and if it does there is an easy way out: when you have no data, just play silence instead. This means providing Audio Queue Services with zero frames, which it will play back as silence, filling the gap until more data arrives from the network. By the way, this is the concept virtually every streaming player falls back on when the audio suddenly stops and it reports “buffering” next to some spinning icon, telling you to wait for nobody knows how long.
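A minimal sketch of that silence-filling policy, assuming 16-bit linear PCM (where zero bytes decode to silence). `fifo_pop_some` here is a hypothetical non-blocking dequeue from the buffer queue, given a toy body that simulates an underrun by having only 2 bytes ready.

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Hypothetical non-blocking dequeue: copies up to `max` buffered bytes
 * into `dst` and returns how many it actually had. Toy body: only
 * 2 bytes are available, simulating a network underrun. */
size_t fifo_pop_some(uint8_t *dst, size_t max) {
    size_t n = max < 2 ? max : 2;
    memset(dst, 7, n);              /* pretend these are real PCM bytes */
    return n;
}

/* Underrun policy: always hand the audio queue a full buffer. Whatever
 * the FIFO cannot supply is padded with zeros, which 16-bit linear PCM
 * plays back as silence. */
size_t read_or_silence(uint8_t *dst, size_t want) {
    size_t got = fifo_pop_some(dst, want);
    memset(dst + got, 0, want - got);   /* zero frames == silence */
    return want;                        /* the buffer is always "full" */
}
```

Note the PCM assumption: for 8-bit unsigned WAV data the silence value would be 0x80, not 0, so the padding value has to match your actual sample format.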

+5

Source: https://habr.com/ru/post/971578/

