Problem:
Well, the problem is that you fetch player.lastRenderTime in every iteration of the for loop, right before each playAt: call.
So you actually get a different now-time for each player!
This way you might just as well start all the players in the loop with play: or playAtTime:nil - you would get the same result: loss of synchronization ...
For the same reason your players will drift apart differently on different devices, depending on the speed of the machine ;-) Those are random magic numbers now, so don't assume they will always work just because they happen to work right now in your setup. Even the smallest extra delay caused by a busy run loop or CPU will throw you out of sync again ...
Solution:
What you really have to do is take ONE discrete snapshot of now = player.lastRenderTime before the loop and use that very same anchor to get a batch-synchronized start of all your players.
This way you don't even need to delay the start of your players. Admittedly, the system will clip off some leading frames - (but of course the same amount for every player ;-) - to compensate for the difference between your freshly captured now (which is actually already in the past and gone by) and the actual playTime (which still lies in the very near future), but it will ultimately start all your players exactly in sync, as if you really had started them at now, in the past. These clipped frames are almost never noticeable, and you keep full responsiveness ...
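As a minimal sketch of that batch start (assuming playerA and playerB are AVAudioPlayerNodes already attached, connected and scheduled on a running AVAudioEngine; the names are purely illustrative):

```objectivec
// ONE discrete snapshot of the output render clock, taken BEFORE the loop ...
AVAudioFramePosition nowSampleTime = playerA.lastRenderTime.sampleTime;
double sampleRate = [playerA outputFormatForBus:0].sampleRate;
AVAudioTime *now = [AVAudioTime timeWithSampleTime:nowSampleTime
                                            atRate:sampleRate];

// ... and the SAME anchor for every playAtTime: call.
for (AVAudioPlayerNode *player in @[playerA, playerB]) {
    [player playAtTime:now];
}
```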
If you do happen to need those frames - because of audible clicks or artifacts at the start of a file/segment/buffer - well, shift now into the future before starting your players. But of course you then get that small lag after hitting the start button, although still in perfect sync, of course ...
Conclusion:
The takeaway here is to capture one single now-time reference for all players and to call the playAtTime:now methods as soon as possible after capturing that now reference. The bigger the gap, the bigger the portion of clipped leading frames will be - unless you provide a reasonable start delay and add it to now, which of course causes unresponsiveness in the form of a delayed start after pressing the start button.
And always keep in mind: whatever latency the audio buffering mechanisms may produce on whatever device - it does NOT affect the synchronicity of any number of players if done in the right, above-described way! It does not delay your audio, either! Just the window that actually lets you hear your audio opens at a slightly later point in time ...
Note:
- If you go for the non-delayed (super-responsive) start option and for whatever reason there is a large gap (between capturing now and the actual start of your players), you will clip off a large leading portion (up to ~300 ms / 0.3 sec) of your audio. That means: when you start your players, playback begins immediately, but not from the position you recently paused at, rather from a position (up to ~300 ms) further into your audio. So the acoustic perception is that pause/play cuts out a part of your audio on the fly, even though everything stays perfectly in sync.
- As the start delay you provide in the playAtTime:now + myProvidedDelay method call is a fixed constant value (which does not get dynamically adjusted to account for buffering latency or other varying parameters under heavy system load), even the delayed option - with a provided delay time smaller than roughly 300 ms - may result in clipping of leading audio samples if the device-dependent preparation time exceeds your provided delay time.
- The maximum amount of clipping does not (by design) exceed these ~300 ms. To get proof, just force controlled (sample-accurately selectable) clipping of leading frames, e.g. by adding a negative delay time to now - you will perceive a growing clipped audio portion as you increase that negative value. Every negative value larger than ~300 ms gets clamped to ~300 ms. So a provided negative delay of 30 seconds leads to the same behavior as negative values of 10, 6, 3 or 1 seconds, and of course also negative 0.8 or 0.5 seconds, down to ~0.3.
These examples serve well for demonstration purposes, but negative delay values should not be used in production code.
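To observe that clamping yourself, a hedged demonstration sketch (not for production use; playerA is again just an illustrative AVAudioPlayerNode on a running engine):

```objectivec
// Demonstration only: a negative delay pushes 'now' into the past and forces
// leading frames to be clipped -- audibly capped at ~300 ms, no matter how
// large the negative value gets.
double sampleRate = [playerA outputFormatForBus:0].sampleRate;
NSTimeInterval negativeDelay = -0.5; // try -0.1, -0.3, -1.0, -30.0 ...
AVAudioFramePosition startSampleTime =
    playerA.lastRenderTime.sampleTime
    + (AVAudioFramePosition)(negativeDelay * sampleRate);
AVAudioTime *past = [AVAudioTime timeWithSampleTime:startSampleTime
                                             atRate:sampleRate];
[playerA playAtTime:past];
```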
ATTENTION:
The most important thing when setting up multiple players is to keep your player.pause calls in sync as well. As of June 2016, AVAudioPlayerNode still has no synchronized stopping strategy.
Just a little method lookup or printing something to the console between two player.pause calls can cause the latter to execute one or more frames/samples later than the former. So your players would not actually stop at the same relative position in time. And above all, different devices will behave differently ...
If you then start them again in the aforementioned (synchronized) way, these out-of-sync current positions from your last pause will of course get forcibly synchronized to your new now-position on every playAtTime: - which essentially means you carry the lost sample(s)/frame(s) into the future with every new start of your players. This of course adds up with every new start/pause cycle and widens the gap. Do this fifty or a hundred times and you already get a nice delay effect without using an effect audio unit ;-)
Since we have no (system-provided) control over this factor, the only remedy is to place all the player.pause calls directly one after the other in a tight sequence, with nothing in between, as you can see in the examples below. Don't throw them into a loop or anything like that - that would be a guarantee of being out of sync at the next pause/start of your players ...
Whether keeping these calls together is a 100%-perfect solution, or whether the run loop under heavy CPU load might by chance interfere, force the pause calls apart and cause frame drops - I don't know. At least in the few weeks I've spent with the AVAudioNode API, I was never able to get my multi-players out of sync - but still, I don't feel very comfortable or safe with this unsynchronized, random-magic-number pause solution ...
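In practice that simply means (sketch; the player names are illustrative):

```objectivec
// Nothing between the calls -- no logging, no lookups, no loop.
[playerA pause];
[playerB pause];
```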
Sample code and alternative:
If your engine is already running, you've got a @property lastRenderTime in AVAudioNode - your players' superclass - and this is your ticket to 100% sample-frame-accurate synchronization ...
```objectivec
AVAudioFormat *outputFormat = [playerA outputFormatForBus:0];
const float kStartDelayTime = 0.0; // seconds
```
By the way, you can achieve the same 100% sample-frame-accurate result with the AVAudioPlayer/AVAudioRecorder classes as well ...
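A hedged sketch of that alternative (audioPlayerA and audioPlayerB are illustrative AVAudioPlayer instances): AVAudioPlayer's playAtTime: takes an NSTimeInterval measured against the shared deviceCurrentTime output clock, so one captured anchor synchronizes all players:

```objectivec
[audioPlayerA prepareToPlay];
[audioPlayerB prepareToPlay];

// One shared anchor on the common device output clock, slightly in the future.
NSTimeInterval startTime = audioPlayerA.deviceCurrentTime + 0.25;
[audioPlayerA playAtTime:startTime];
[audioPlayerB playAtTime:startTime];
```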
```objectivec
NSTimeInterval startDelayTime = 0.0;
```
Without a startDelayTime, the first 100-200 ms of all players will get clipped off, because the start command actually takes its time through the run loop, although the players have already started (well, been scheduled) 100% in sync at now. But with startDelayTime = 0.25 you are ready to go. And never forget to prepareToPlay your players in advance, so that at start time no additional buffering or setup has to be done - just starting them, guys ;-)
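Putting it all together, a sketch assuming playerA and playerB on a running engine, with the 0.25 s start delay described above:

```objectivec
NSTimeInterval startDelayTime = 0.25; // avoids clipping of the leading frames

AVAudioFormat *outputFormat = [playerA outputFormatForBus:0];
AVAudioFramePosition startSampleTime =
    playerA.lastRenderTime.sampleTime
    + (AVAudioFramePosition)(startDelayTime * outputFormat.sampleRate);
AVAudioTime *startTime =
    [AVAudioTime timeWithSampleTime:startSampleTime
                             atRate:outputFormat.sampleRate];

[playerA playAtTime:startTime];
[playerB playAtTime:startTime];
```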