Being a Good Citizen

Beyond being an audio transmission protocol or platform, Audiobus is a community of applications.

The experience users have depends strongly on how well these apps work together. What follows is a set of guidelines your app should observe in order to be a good Audiobus citizen.

Receivers, Use Audio Timestamps

When audio passes through multiple effect pipelines, latency is unavoidable, and it's vital to address it wherever timing matters.

Audiobus deals with latency by providing you, the developer, with timestamps that correspond to the creation time of each block of audio.

If you are recording audio and mixing it with other live signals, or if timing is otherwise important, then it is vital that you make full use of these timestamps to compensate for system latency. How you use them depends on your app: you may already be using timestamps from Core Audio, in which case there's nothing special you need to do.

See Dealing with Latency for more info.

Use Low IO Buffer Durations, If You Can

Core Audio allows apps to set a preferred IO buffer duration via the audio session (see AVAudioSession's preferredIOBufferDuration property). This setting configures the length of the buffers the audio system manages, and shorter buffers mean lower latency. By the time you receive a 5ms buffer from the system input, for example, roughly 5ms have elapsed since the audio reached the microphone. Similarly, by the time a 5ms buffer has been played by the system's speaker, 5ms or so have elapsed since the audio was generated.

The tradeoff of small IO buffer durations is that your app has to work harder per unit of time, as it's processing smaller blocks of audio more frequently. So, it's up to you to figure out how low your app's latency can go - but remember to save some CPU cycles for other apps as well!

In the Background, Suspend When Possible, But Not While Audiobus Is Running

It's up to you whether it's appropriate to suspend your app in the background, but there are a few things to keep in mind.

Most importantly: you should never, ever suspend your app while it's connected via Audiobus. You can tell whether your app is connected at any time via the connected property of the Audiobus controller. If the value is YES, then you mustn't suspend.

Secondly, we strongly recommend that your app remain active in the background while the Audiobus app is running. This keeps your app available to be re-added to a connection graph (or reloaded from a preset) without needing to be launched manually again. Once the Audiobus app closes, your app can then suspend in the background.

See the Lifecycle section of the integration guide, or the associated recipe for further details.

Note that during development, if your app has not yet been registered with Audiobus, Audiobus will only be able to see the app while it is actively running in the background. Consequently, we strongly recommend that you register your app at the beginning of development.

Be Efficient!

Audiobus leans heavily on iOS multitasking! You could be running three synth apps, two filter apps, and be recording into a live-looper or a DAW. That requires a lot of juice.

So, be kind to your fellow developers. Profile your app and find places where you can back off the CPU a bit. Never, ever wait on locks, allocate memory, or call Objective-C functions from Core Audio. Use plain old C in time-critical places (or even drop to assembly). Take a look at the Accelerate framework if you're not familiar with it, and use its vector operations instead of scalar operations within loops - it makes a huge difference.