Being a Good Citizen

Beyond being an audio transmission protocol or platform, Audiobus is a community of applications.

The experience users have depends strongly on how well these apps work together. So, here are some guidelines your app should follow in order to be a good Audiobus citizen.

Receivers, Use Audio Timestamps

When audio passes through a pipeline of multiple effects, latency is unavoidable, and it becomes very important to address wherever timing matters.

Audiobus deals with latency by providing you, the developer, with timestamps that correspond to the creation time of each block of audio.

If you are recording audio and mixing it with other live signals, or if timing is otherwise important, then it is vital that you make full use of these timestamps in order to compensate for system latency. How you use them depends on your app - you may already be using timestamps from Core Audio, in which case there's nothing special you need to do.
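As a rough illustration, a recorder might use the host-time field of each buffer's timestamp to place incoming audio at the moment it was created, rather than the moment it arrived. This is only a sketch, not part of the Audiobus API: the function name and the recordingStartHostTime parameter are assumptions, and any extra hardware-latency compensation is omitted.

```swift
import AudioToolbox
import Darwin

// Hypothetical sketch: convert an incoming buffer's host-time timestamp into
// a frame offset relative to the start of a recording, so the buffer can be
// written where the audio was created rather than where it arrived.
// recordingStartHostTime is an assumed value captured when recording began.
func frameOffset(for timestamp: AudioTimeStamp,
                 recordingStartHostTime: UInt64,
                 sampleRate: Double) -> Int {
    // mach host ticks -> nanoseconds (a real app would cache the timebase
    // rather than querying it for every buffer)
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)
    let elapsedTicks = timestamp.mHostTime &- recordingStartHostTime
    let elapsedNanos = Double(elapsedTicks) * Double(timebase.numer) / Double(timebase.denom)
    // nanoseconds -> frames at the recording's sample rate
    return Int(elapsedNanos * 1.0e-9 * sampleRate)
}
```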

See Dealing with Latency for more info.

Support Low IO Buffer Durations, If You Can

Core Audio allows apps to set a preferred IO buffer duration via the audio session (see AVAudioSession's preferredIOBufferDuration property, documented in AVFoundation). This setting configures the length of the buffers the audio system manages. Shorter buffers mean lower latency. By the time you receive a 5ms buffer from the system input, for example, roughly 5ms have elapsed since the audio reached the microphone. Similarly, by the time a 5ms buffer has been played by the system's speaker, 5ms or so have elapsed since the audio was generated.
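For instance, requesting roughly 5ms buffers might look like the sketch below. The duration and the .playAndRecord category are example values only, and the system treats the request as a preference - read back ioBufferDuration for the duration actually granted.

```swift
import AVFoundation

// A minimal sketch: ask for ~5ms IO buffers. The system may grant a longer
// duration than requested, so check ioBufferDuration after activation.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setPreferredIOBufferDuration(0.005) // 5ms preferred
    try session.setActive(true)
    print("Granted IO buffer duration: \(session.ioBufferDuration) s")
} catch {
    print("Audio session configuration failed: \(error)")
}
```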

The tradeoff of small IO buffer durations is that your app has to work harder per unit of time, as it's processing smaller blocks of audio more frequently. So, it's up to you to figure out how low your app's latency can go - but remember to save some CPU cycles for other apps as well!

Be Efficient!

Audiobus leans heavily on iOS multitasking! You could be running three synth apps and two filter apps while recording into a live looper or a DAW. That requires a lot of juice.

So, be kind to your fellow developers. Profile your app and find places where you can back off the CPU a bit. Never, ever wait on locks, allocate memory, or call Objective-C functions from the Core Audio render thread. Use plain old C in time-critical places (or even drop to assembly). Take a look at the Accelerate framework if you're not familiar with it, and use its vector operations instead of scalar operations within loops - it makes a huge difference.
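As a rough example of the Accelerate suggestion (shown in Swift for brevity; the same vDSP calls are available from plain C), here is a mix-with-gain operation written both as a scalar loop and as a single vectorised call. The function names are made up for illustration; vDSP_vsma computes output = input * gain + output in one pass.

```swift
import Accelerate

// Scalar version: one multiply-add per loop iteration.
// Assumes input and output have the same length.
func mixScalar(_ input: [Float], into output: inout [Float], gain: Float) {
    for i in 0..<input.count {
        output[i] += input[i] * gain
    }
}

// Vectorised version: the same result via a single vDSP call, which is
// typically far cheaper per sample than the loop above.
func mixVectorised(_ input: [Float], into output: inout [Float], gain: Float) {
    var gain = gain
    output.withUnsafeMutableBufferPointer { out in
        // out = input * gain + out (vDSP_vsma operating in place)
        vDSP_vsma(input, 1, &gain, out.baseAddress!, 1,
                  out.baseAddress!, 1, vDSP_Length(input.count))
    }
}
```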