Receiving: The Audiobus Receiver Port

The Audiobus receiver port class ABAudioReceiverPort provides an interface for receiving audio, either as separate audio streams (one per connected sender), or as a single audio stream with all sources mixed together.

Receiving audio tends to be a little more involved than sending or filtering audio, so this section aims to discuss some of the finer points of using ABAudioReceiverPort.

See the Receiver Port section of the integration guide for an initial overview.

Dealing with Latency

Audiobus receivers are given timestamps along with every piece of audio they receive. These timestamps are vital for compensating for latency when recording in a time-sensitive context.

These timestamps work in exactly the same way as the timestamps Core Audio provides.

If your app records the audio it receives over Audiobus and the timing is important (for example, you record audio in time with some other track, such as a looper or a multi-track recorder), then use these timestamps when saving the received audio to negate the effects of latency.

If your app already records from the microphone, then you are probably already using the AudioTimeStamp values given to you by Core Audio, in order to compensate for audio hardware latency. If this is the case, then there's probably nothing more you need to do, other than making sure this mechanism is using the timestamps generated by Audiobus.

For example, a looper app might record audio while other loops are playing. The audio must be recorded in time so that the beats in the new recording match the beats in the already-playing loop tracks. If such an app has a time base (such as the time the app was started) which is used to determine the playback position of the loops, then this same time base can be used with the timestamps from the incoming audio in order to determine when the newly-recorded track should be played back.
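For instance, here is a minimal sketch of that time-base arithmetic, not taken from the SDK samples: _timeBaseHostTime (the host time at which the existing loops started playing) and _sampleRate are hypothetical ivars.

    #include <mach/mach_time.h>

    // Convert mach host ticks to seconds (the standard Core Audio technique).
    static double ABSecondsFromHostTicks(uint64_t ticks) {
        static mach_timebase_info_data_t timebase;
        if ( timebase.denom == 0 ) mach_timebase_info(&timebase);
        return (double)ticks * timebase.numer / timebase.denom / 1.0e9;
    }

    // In the audio callback, after receiving a buffer and its timestamp
    // (assuming the timestamp is later than the time base):
    double offsetSeconds = ABSecondsFromHostTicks(timestamp.mHostTime - _timeBaseHostTime);
    UInt32 offsetFrames = (UInt32)(offsetSeconds * _sampleRate + 0.5);
    // "offsetFrames" is where the newly-recorded audio belongs on the app's
    // timeline, with source and hardware latency already accounted for.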

Note that for system audio inputs, Audiobus already compensates for the reported hardware input latency, so you should not further modify the timestamp returned from ABAudioReceiverPortReceive.

Receiving Separate Streams

You can receive audio as separate stereo streams - one per source - or as a single mixed stereo audio stream. By default, Audiobus will return the audio as a single, mixed stream.

The behavior for receiving separate streams has changed in Audiobus 3. In Audiobus 2, a separate stream was received for each source connected to a pipeline. Audiobus 3 merges all sources in the input slot of a pipeline into a single stream, so users who want to record several sources as separate streams need to add them to different pipelines.

If you wish to receive a separate stream for each pipeline, however, you can set receiveMixedAudio to NO. Each pipeline will then have its own audio stream, accessed by passing a pointer to the corresponding source port to ABAudioReceiverPortReceive.

After calling ABAudioReceiverPortReceive for each source, you must then call ABAudioReceiverPortEndReceiveTimeInterval to mark the end of the current interval.

Please see the 'Receive Audio as Separate Streams' sample recipe, the documentation for ABAudioReceiverPortReceive and ABAudioReceiverPortEndReceiveTimeInterval, and the AB Multitrack sample app for more info.

Note that you should not access the sources property, or any other Objective-C methods, from a Core Audio thread, as this may cause the thread to block, resulting in audio glitches. Instead, obtain pointers to the ABPort objects in advance and use those pointers directly, as demonstrated in the 'Receive Audio as Separate Streams' sample recipe and within the "AB Multitrack Receiver" sample.
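To illustrate both points, here is a hedged sketch of the receive sequence, not taken verbatim from the SDK. The exact function signatures may differ between SDK versions, and _sourcePorts, _sourcePortCount, kMaxSources, bufferList and inNumberFrames are hypothetical names standing in for your app's own state and render-callback arguments.

    // Main thread: opt out of mixing, and cache unretained port pointers
    // for use on the audio thread (refresh these whenever connections change).
    self.receiverPort.receiveMixedAudio = NO;
    NSArray *sources = self.receiverPort.sources;
    _sourcePortCount = MIN((int)sources.count, kMaxSources);
    for ( int i = 0; i < _sourcePortCount; i++ ) {
        _sourcePorts[i] = sources[i];   // __unsafe_unretained ABPort* C array
    }

    // Core Audio thread: pull one buffer per stream for this time interval.
    for ( int i = 0; i < _sourcePortCount; i++ ) {
        AudioTimeStamp timestamp;
        ABAudioReceiverPortReceive(_receiverPort, _sourcePorts[i],
                                   bufferList, inNumberFrames, &timestamp);
        // ...record this stream, using "timestamp" for latency compensation...
    }
    // Mark the end of the interval once every stream has been received:
    ABAudioReceiverPortEndReceiveTimeInterval(_receiverPort);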

Receiving Separate Streams Alongside Core Audio Input

If you wish to simultaneously incorporate audio from other sources as well as Audiobus - namely, the device's audio input - then depending on your app, it may be very important that all sources are synchronised and delivered in a consistent fashion. This will be true if you provide live audio monitoring, or if you apply effects in a synchronised way across all audio streams.

The Audiobus SDK provides the ABMultiStreamBuffer class for buffering and synchronising multiple audio streams, so that you can do this. You enqueue separate, un-synchronised audio streams on one side, and then dequeue synchronised streams from the other side, ready for further processing.

Typical usage is as follows:

  1. You receive audio from the system audio input, typically via a Remote IO input callback and AudioUnitRender, then enqueue it on the ABMultiStreamBuffer.
  2. You receive audio from each connected Audiobus source, also enqueuing the audio on the ABMultiStreamBuffer (ABMultiStreamBufferEnqueue).
  3. You then dequeue each source from ABMultiStreamBuffer (ABMultiStreamBufferDequeueSingleSource). Audio will be buffered and synchronised via the timestamps of the enqueued audio.
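Sketched in code, under the assumption that the parameter order matches the SDK headers of the time: _multiStreamBuffer, _scratchBufferList and kSystemAudioInput are hypothetical (a source is identified by an arbitrary distinct pointer value), and ioData, inNumberFrames and inTimeStamp are the usual render-callback arguments.

    // 1. Enqueue the system audio input (previously obtained via AudioUnitRender):
    ABMultiStreamBufferEnqueue(_multiStreamBuffer, kSystemAudioInput,
                               ioData, inNumberFrames, inTimeStamp);

    // 2. Enqueue each Audiobus stream, using the source port as the identifier:
    for ( int i = 0; i < _sourcePortCount; i++ ) {
        AudioTimeStamp timestamp;
        ABAudioReceiverPortReceive(_receiverPort, _sourcePorts[i],
                                   _scratchBufferList, inNumberFrames, &timestamp);
        ABMultiStreamBufferEnqueue(_multiStreamBuffer, _sourcePorts[i],
                                   _scratchBufferList, inNumberFrames, &timestamp);
    }

    // 3. Dequeue each stream; the buffer aligns them via their timestamps:
    UInt32 frames = inNumberFrames;
    AudioTimeStamp timestamp;
    ABMultiStreamBufferDequeueSingleSource(_multiStreamBuffer, kSystemAudioInput,
                                           _scratchBufferList, &frames, &timestamp);
    // ...repeat for each Audiobus source, then process the synchronised audio...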

Differences between Audiobus 2 and Audiobus 3

There are some important differences between Audiobus 2 and Audiobus 3 once receiver ports come into play. The following table lists the most important differences:

                               Audiobus 2               Audiobus 3
  Hosting inputs and filters   The app in the output    Audiobus itself
  Receiving multiple streams   One stream per source    One stream per pipeline
  Assigning streams to tracks  Unique ID of the source  Pipeline ID

All of the changes described below are backward compatible with Audiobus 2, so you need not worry about breaking Audiobus 2 compatibility by implementing them.

Intermediate Routings

Audiobus 3 introduces so-called intermediate routings. Imagine the following input-filter-output connection chain:

Animoog -> Bias -> Cubasis

In Audiobus 2, Cubasis would host and connect Animoog and Bias. Because only hosts can launch other apps into the background, Audiobus 3 itself needs to host Animoog and Bias. We achieved this by inserting a so-called intermediate routing just before the output, so that the connection graph internally looks like this:

Animoog -> Bias -> ABIRIn - ABIROut -> Cubasis
  • ABIRIn is an audio receiver port within Audiobus.
  • ABIROut is an audio sender port within Audiobus.
  • The chain ABIRIn - ABIROut is the intermediate routing.
  • Instead of being directly connected to Cubasis, Bias is now connected to ABIRIn and therefore hosted by Audiobus.
  • Cubasis is connected to ABIROut. Thus, instead of hosting Animoog and Bias, Cubasis only hosts Audiobus' sender port ABIROut.

Internally, Audiobus manages a set of sixteen intermediate routings, which are assigned dynamically.

Accessing Sources Connected to Audio Receiver Ports

Due to the introduction of the intermediate routing between Bias and Cubasis, Cubasis is no longer able to access the names of connected sources by iterating the sources connected to its audio receiver port.

Instead, you now need to read the property ABPort::sourcesRecursive. Rather than returning the physically connected source, which is Audiobus' intermediate sender port, this property returns the logically connected sources: Animoog and Bias. Additionally, ABPort provides the selectors ABPort::sourcesIcon and ABPort::sourcesTitle.
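For instance, a minimal sketch (main thread only; it assumes the receiver port inherits these members from ABPort, and that each source exposes a title property):

    // Enumerate the logically connected sources (Animoog and Bias in the
    // example above), rather than the intermediate sender port:
    for ( ABPort *source in self.receiverPort.sourcesRecursive ) {
        NSLog(@"Logical source: %@", source.title);
    }

    // Or use the combined convenience accessors:
    NSString *allSourcesTitle = self.receiverPort.sourcesTitle;
    UIImage *allSourcesIcon = self.receiverPort.sourcesIcon;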

Multitrack Audio Recorders: Assigning Sources to Tracks

In Audiobus 2, multitrack recorders were able to record one track per source, and the unique ID of a source could be used to reassign it to the right track. Because intermediate routings are assigned dynamically, this is no longer possible.

To solve this issue, Audiobus 3 introduces a new ABPort property, ABPort::pipelineIDs. This property returns an array containing the IDs of all pipelines the port belongs to. Audio sender and audio filter ports can only be assigned to one pipeline, so by reading the first pipeline ID of a source connected to your audio receiver port, you can determine which track the source belongs to.
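A hedged sketch of that track-assignment logic follows; _trackIndexForPipelineID (an NSMutableDictionary) and _nextFreeTrack are hypothetical ivars, and the element type of pipelineIDs is assumed to be usable as a dictionary key.

    for ( ABPort *source in self.receiverPort.sourcesRecursive ) {
        id pipelineID = source.pipelineIDs.firstObject;
        if ( !pipelineID ) continue;
        NSNumber *track = _trackIndexForPipelineID[pipelineID];
        if ( !track ) {
            // First time this pipeline appears: assign the next free track.
            track = @(_nextFreeTrack++);
            _trackIndexForPipelineID[pipelineID] = track;
        }
        // ...record audio received from "source" into track "track"...
    }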

The pipeline ID of a source is stored within Audiobus presets, so you can make sure that all assignments are restored after a preset is loaded.