Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under the Audio subtopic


AudioQueueNewOutput blocks indefinitely on iOS 18.3 (hangs during creation)
Hi everyone,

We're encountering an issue where AudioQueueNewOutput blocks indefinitely and never returns, and we're hoping to get some insight or confirmation of whether this is a known behavior/regression on newer iOS versions.

Issue description: when triggering audio playback, we create an output AudioQueue using AudioQueueNewOutput. On some devices, the call hangs inside AudioQueueNewOutput and never returns, with no OSStatus error and no subsequent logs. This behavior is reproducible mainly on iOS 18.3; earlier iOS versions do not show this issue under the same code path.

```
if (audioDes) {
    mAudioDes.mSampleRate       = audioDes->mSampleRate;
    mAudioDes.mBitsPerChannel   = audioDes->mBitsPerChannel;
    mAudioDes.mChannelsPerFrame = audioDes->mChannelsPerFrame;
    mAudioDes.mFormatID         = audioDes->mFormatID;
    mAudioDes.mFormatFlags      = audioDes->mFormatFlags;
    mAudioDes.mFramesPerPacket  = audioDes->mFramesPerPacket;
    mAudioDes.mBytesPerFrame    = audioDes->mBytesPerFrame;
    mAudioDes.mBytesPerPacket   = audioDes->mBytesPerFrame;
    mAudioDes.mReserved         = 0;
}

// Create AudioQueue for output
OSStatus status = AudioQueueNewOutput(&mAudioDes,
                                      AQOutputCallback,
                                      this,
                                      NULL,
                                      NULL,
                                      0,
                                      &audioQueue);
```

The thread blocks inside AudioQueueNewOutput, and execution never reaches the next line.

Additional notes / observations:
- The ASBD is confirmed to be valid: standard PCM output; sample rate, channels, and bytes per frame/packet are all consistent; the same ASBD works correctly on earlier iOS versions.
- The AudioQueue is created on a background thread, not on the main thread and not inside the AudioQueue callback.
- On first creation, AVAudioSession may not yet be active: setCategory and setActive:YES may be called shortly before creating the AudioQueue, so there may be a timing window where the session is still activating.
- The issue is reported mainly on iOS 18.3: multiple user reports point to iOS 18.3 devices, and the same code path works on iOS 17.x and earlier.
- No OSStatus error is returned; the call simply never returns.

Questions:
- Is it expected that AudioQueueNewOutput can block indefinitely while waiting for AVAudioSession / audio route / HAL readiness?
- Have there been any behavior changes in iOS 18.3 regarding AudioQueue creation or AudioSession synchronization?
- Is it unsafe to call AudioQueueNewOutput before AVAudioSession is fully active on recent iOS versions?
- Are there recommended patterns (or delays / callbacks) to ensure AudioQueue creation does not hang?

Any insight or confirmation would be greatly appreciated. Thanks in advance!
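For anyone hitting the same hang, here is a minimal sketch of one mitigation (an assumption based on the activation timing window described above, not a confirmed fix): fully activate the session before any Core Audio work, so AudioQueueNewOutput is never called while the session is still activating.

```
import AVFoundation

// Minimal sketch: fully activate the audio session first, so that
// AudioQueueNewOutput is never called inside the activation window.
func prepareSessionForPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback)
    try session.setActive(true) // synchronous; returns once activation completes
}

// Call prepareSessionForPlayback() before calling AudioQueueNewOutput
// on the background thread.
```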
Replies: 0 · Boosts: 0 · Views: 60 · Activity: 1w
In the Speech framework, is SFTranscriptionSegment timing supposed to be off and speechRecognitionMetadata nil until isFinal?
I'm working in Swift/SwiftUI, running Xcode 16.3 on macOS 15.4, and I've seen this when running in the iOS simulator and in a macOS app run from Xcode. I've also seen this behaviour with 3 different audio files. Nothing in the documentation says that the speechRecognitionMetadata property on an SFSpeechRecognitionResult will be nil until isFinal, but that's the behaviour I'm seeing. I've stripped my class down to the following:

```
private var isAuthed = false

// I call this in a .task {} in my SwiftUI View
public func requestSpeechRecognizerPermission() {
    SFSpeechRecognizer.requestAuthorization { authStatus in
        Task {
            self.isAuthed = authStatus == .authorized
        }
    }
}

public func transcribe(from url: URL) {
    guard isAuthed else { return }
    let locale = Locale(identifier: "en-US")
    let recognizer = SFSpeechRecognizer(locale: locale)
    let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)
    // the behaviour occurs whether I set this to true or not; I recently set
    // it to true to see if it made a difference
    recognizer?.supportsOnDeviceRecognition = true
    recognitionRequest.shouldReportPartialResults = true
    recognitionRequest.addsPunctuation = true
    recognizer?.recognitionTask(with: recognitionRequest) { (result, error) in
        guard let result else { return }
        if result.isFinal {
            // speechRecognitionMetadata is not nil
        } else {
            // speechRecognitionMetadata is nil
        }
    }
}
```

Further, and this isn't documented either, the SFTranscriptionSegment values don't have correct timestamp and duration values until isFinal. The values aren't all zero, but they don't align with the timing in the audio, and they change to accurate values when isFinal is true. The transcription otherwise "works", in that I get transcription text before isFinal, and if I wait for isFinal the segments are correct and speechRecognitionMetadata is filled with values.

The context here is that I'm trying to generate a transcription that I can then highlight the spoken sections of as audio plays, and I'm thinking I must be trying to use the Speech framework in a way it does not work. I got my concept working if I pre-process the audio (i.e. run it through until isFinal and save the results I need to json), but being able to do even a rougher version of it 'on the fly' - which requires segments to have the right timestamp/duration before isFinal - is perhaps impossible?
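For reference, a minimal sketch of the pre-processing approach mentioned at the end: wait for isFinal, then persist the (by then accurate) segment timings for later highlighting. SegmentTiming and saveSegments are hypothetical names, not Speech API:

```
import Speech

// Hypothetical container for the values needed to highlight text during playback.
struct SegmentTiming: Codable {
    let text: String
    let timestamp: TimeInterval
    let duration: TimeInterval
}

// Persist segment timings once the result is final, since that is when
// timestamp/duration become reliable.
func saveSegments(from result: SFSpeechRecognitionResult, to url: URL) throws {
    guard result.isFinal else { return }
    let segments = result.bestTranscription.segments.map {
        SegmentTiming(text: $0.substring, timestamp: $0.timestamp, duration: $0.duration)
    }
    try JSONEncoder().encode(segments).write(to: url)
}
```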
Replies: 1 · Boosts: 0 · Views: 165 · Activity: Jul ’25
Obtaining channel audio data with AVAudioEngine
Currently, I have successfully used ChannelMap to map hardware input channels, and I can obtain audio data from the hardware device's MIC and OTG inputs. I have also used ChannelMap to map output channels, so I can freely feed data to each output channel for playback. However, I now have a problem: I have a hardware device that only has output channels (no input channels), and the system has set this hardware device as the default playback device. In this case, how can I obtain the audio data being played to the output channels so I can modify it?
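If the playback in question goes through your own AVAudioEngine, a minimal sketch of observing the mixed output is below. Note that taps are read-only; to modify the audio, you would instead insert an effect node (e.g. AVAudioUnitEQ) between your sources and the mixer:

```
import AVFoundation

// Minimal sketch (assumes the audio being played goes through your own
// AVAudioEngine): a tap on the main mixer observes the mixed output buffers.
func observeMixedOutput(of engine: AVAudioEngine) {
    let mixer = engine.mainMixerNode
    let format = mixer.outputFormat(forBus: 0)
    mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, time in
        // Inspect the buffers headed to the output channels here.
        print("got \(buffer.frameLength) frames at sample time \(time.sampleTime)")
    }
}
```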
Replies: 0 · Boosts: 0 · Views: 282 · Activity: Dec ’25
How do I destroy a MIDIUMPMutableEndpoint?
Is there a way to destroy a MIDIUMPMutableEndpoint once it has been created? In my app, the user has a setting to enable and disable MIDI 2.0. If MIDI 2.0 should not be supported (or if the iOS version is below 18), the app creates a virtual destination and a virtual source. If MIDI 2.0 should be enabled, it instead creates a MIDIUMPMutableEndpoint, which itself creates the virtual destination and source automatically. Here is my problem: I haven't found any way to destroy the MIDIUMPMutableEndpoint afterwards. There is a method to disable it (setEnabled:NO), but that doesn't destroy or hide the virtual destination and source. So when the user turns MIDI 2.0 support off, I end up with two virtual destinations and sources, and cannot get rid of the 2.0 ones. What is the correct way to get rid of a MIDIUMPMutableEndpoint once it is created?
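For comparison, plain MIDI 1.0 virtual endpoints (the fallback path described above) are disposed explicitly with MIDIEndpointDispose. Whether the endpoints owned by a MIDIUMPMutableEndpoint can be torn down the same way is exactly the open question; this sketch only shows the classic API:

```
import CoreMIDI

// Classic CoreMIDI: virtual sources/destinations created with
// MIDISourceCreate/MIDIDestinationCreate are destroyed explicitly.
func destroyVirtualEndpoint(_ endpoint: MIDIEndpointRef) {
    let status = MIDIEndpointDispose(endpoint)
    if status != noErr {
        print("MIDIEndpointDispose failed: \(status)")
    }
}
```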
Replies: 0 · Boosts: 0 · Views: 156 · Activity: Sep ’25
watchOS longFormAudio session cannot deactivate
My workout watch app supports audio playback during exercise sessions. When users carry an Apple Watch, an iPhone, and AirPods, with the AirPods connected to the iPhone, I want to route audio from the Apple Watch to the AirPods for playback. I've implemented this functionality using the following code:

```
try? session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
try await session.activate()
```

When users are playing music on the iPhone and trigger my code in the watch app, the Apple Watch correctly guides them to select the AirPods, pauses the iPhone's music, and plays my audio. However, when playback finishes and I end the session using the code below:

```
try session.setActive(false, options: [.notifyOthersOnDeactivation])
```

the iPhone doesn't automatically resume the previously interrupted music playback; it requires manual intervention. Is this expected behavior, or am I missing other important steps in my code?
Replies: 1 · Boosts: 0 · Views: 324 · Activity: Nov ’25
How to synchronize the clock sources of two audio devices
I created a virtual audio device to capture system audio with a sample rate of 44.1 kHz. After capturing the audio, I forward it to the hardware sound card using AVAudioEngine, also with a sample rate of 44.1 kHz. However, due to the clock sources being unsynchronized, problems occur after a period of playback. How can I retrieve the clock source of the hardware device and set it for the virtual device?
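For the diagnostic half of this, a minimal Core Audio sketch (assuming the macOS HAL): kAudioDevicePropertyClockDomain reports the clock domain a device belongs to, and devices reporting the same nonzero domain share a clock source. The property is read-only from the client side, so the virtual device would have to publish a matching domain from its own driver; this only tells you whether two devices already share a clock.

```
import CoreAudio

// Read the clock domain of an audio device (0 means "no stated domain").
func clockDomain(of deviceID: AudioObjectID) -> UInt32? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyClockDomain,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    var domain: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &domain)
    return status == noErr ? domain : nil
}
```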
Replies: 2 · Boosts: 0 · Views: 305 · Activity: May ’25
When to set AVAudioSession's preferredInput?
I want the audio session to always use the built-in microphone. However, when using the setPreferredInput() method as in this example:

```
private func enableBuiltInMic() {
    // Get the shared audio session.
    let session = AVAudioSession.sharedInstance()

    // Find the built-in microphone input.
    guard let availableInputs = session.availableInputs,
          let builtInMicInput = availableInputs.first(where: { $0.portType == .builtInMic }) else {
        print("The device must have a built-in microphone.")
        return
    }

    // Make the built-in microphone input the preferred input.
    do {
        try session.setPreferredInput(builtInMicInput)
    } catch {
        print("Unable to set the built-in mic as the preferred input.")
    }
}
```

and calling that function once in the initializer, the audio session still switches to the external microphone once one is plugged in. The session's preferredInput is nil again at that point, even if the built-in microphone is still listed in availableInputs. So, why is the preferredInput suddenly reset? When would be the appropriate time to set the preferredInput again? Observing the session's availableInputs did not work, and setting the preferredInput again in the routeChangeNotification handler seems a bad choice, as it's already a bit too late then.
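One pattern worth considering despite the caveat above (a sketch, not official guidance; there may still be a brief switch before the preferred input is re-applied): re-apply the preferred input whenever a route change reports a new device.

```
import AVFoundation

// Sketch: whenever a new input device appears (which resets preferredInput),
// re-apply the built-in mic as the preferred input.
func observeRouteChanges() {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        guard let info = notification.userInfo,
              let reasonValue = info[AVAudioSessionRouteChangeReasonKey] as? UInt,
              let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue),
              reason == .newDeviceAvailable else { return }
        let session = AVAudioSession.sharedInstance()
        if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
            try? session.setPreferredInput(builtInMic)
        }
    }
}
```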
Replies: 1 · Boosts: 0 · Views: 874 · Activity: Oct ’25
App Randomly Crashes During Continuous Sound Playback Using AVAudioPlayer
Environment:
- Device: iPad (10th generation)
- OS: iOS 18.3.2

We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently, roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:

```
var audioPlayer: AVAudioPlayer?

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else {
        return
    }
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        try audioSession.setCategory(.playback)
    } catch {
        return
    }
    self.audioPlayer!.play()
}
```

The issue is that under high-frequency tapping (especially around 0.1–0.15 s intervals), the app occasionally crashes. The crash does not occur every time; it happens randomly, sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping. Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely. Delays shorter than 0.2 seconds (e.g., 0.15 s, 0.18 s) still result in occasional crashes.

My questions are:
- Is this expected behavior from AVAudioPlayer or AVAudioSession?
- Could this be a known issue or a limitation in AVFoundation?
- Is there any documentation or guidance on handling frequent sound playback safely?

Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
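For what it's worth, a sketch of one commonly suggested pattern (an assumption, not a confirmed fix for this crash): prepare the player and set the session category once, then reuse the player across taps instead of allocating a new AVAudioPlayer per tap.

```
import AVFoundation

// Sketch: one long-lived, prepared player that rapid taps rewind and replay,
// avoiding a fresh AVAudioPlayer allocation (and category change) per tap.
final class TapSoundPlayer {
    private let player: AVAudioPlayer

    init?(resource: String, type: String) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: type),
              let player = try? AVAudioPlayer(contentsOf: url) else { return nil }
        try? AVAudioSession.sharedInstance().setCategory(.playback) // once, not per tap
        player.prepareToPlay()
        self.player = player
    }

    func play() {
        player.currentTime = 0 // rewind so rapid taps restart the sound
        player.play()
    }
}
```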
Replies: 0 · Boosts: 0 · Views: 216 · Activity: May ’25
AVAudioEngine installTap stops working after phone call interruption on iPhone 16e
Environment:
- Device: iPhone 16e
- iOS version: 18.4.1–18.7.1
- Framework: AVFoundation (AVAudioEngine)

Problem summary: on iPhone 16e (iOS 18.4.1–18.7.1), the installTap callback stops being invoked after resuming from a phone call interruption. This issue is specific to phone call interruptions and does not occur on iPhone 14, iPhone SE 3, or earlier devices.

Expected behavior: after a phone call interruption ends and audioEngine.start() is called, the previously installed tap should continue receiving audio buffers.

Actual behavior after resuming from a phone call interruption:
- The tap callback is no longer invoked
- No audio data is captured
- No errors are thrown
- The engine appears to be running normally

Note: normal pause/resume (without a phone call interruption) works correctly.

Steps to reproduce:
1. Start audio recording on iPhone 16e
2. Receive or make a phone call (triggers an AVAudioSession interruption)
3. End the phone call
4. Resume recording with audioEngine.start()
5. Result: the tap callback is not invoked

Tested devices:
- iPhone 16e (iOS 18.4.1–18.7.1): issue reproduces ✗
- iPhone 14 (iOS 18.x): works correctly ✓
- iPhone SE 3 (iOS 18.x): works correctly ✓

Initial setup (works):

```
let inputNode = audioEngine.inputNode
inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
    self.processAudioBuffer(buffer, at: time)
}
audioEngine.prepare()
try audioEngine.start()
```

Interruption handling:

```
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: nil
) { notification in
    guard let userInfo = notification.userInfo,
          let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
        return
    }
    if type == .began {
        self.audioEngine.pause()
    } else if type == .ended {
        try? self.audioSession.setActive(true)
        try? self.audioEngine.start() // Tap callback doesn't work after this on iPhone 16e
    }
}
```

Workaround: a full engine restart is required on iPhone 16e:

```
func resumeAfterInterruption() throws {
    audioEngine.stop()
    inputNode.removeTap(onBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
        self.processAudioBuffer(buffer, at: time)
    }
    audioEngine.prepare()
    try audioSession.setActive(true)
    try audioEngine.start()
}
```

This works but adds latency and complexity compared to a simple resume.

Questions:
- Is this expected behavior on iPhone 16e?
- What is the recommended way to handle phone call interruptions?
- Why does this only affect iPhone 16e and not iPhone 14 or SE 3?

Any guidance would be appreciated!
Replies: 0 · Boosts: 0 · Views: 198 · Activity: Oct ’25
iOS: AVAudioRecorder fails to record
Hi, I'm trying to record audio on the iPhone with AVAudioRecorder and Xcode 26.0.1. Maybe the problem is that I can't record audio with the simulator, but there is a menu for audio. In the plist I added 'Privacy - Microphone Usage Description', and I ask for permission before recording:

```
if await AVAudioApplication.requestRecordPermission() {
    print("permission granted")
    recordPermission = true
} else {
    print("permission denied")
}
```

Permission is granted.

```
let settings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 12000,
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
recorder = try AVAudioRecorder(url: filename, settings: settings)
let prepared = recorder.prepareToRecord()
print("prepared started: \(prepared)")
let started = recorder.record()
print("recording started: \(started)")
```

started is always false, and I've tried many settings. Error messages:

```
AddInstanceForFactory: No factory registered for id <CFUUID 0x600000211480> F8BB1C28-BAE8-11D6-9C31-00039315CD46
AudioConverter.cpp:1052 Failed to create a new in process converter -> from 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame, with status -50
AudioQueueObject.cpp:1892 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 12000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 12000 Hz, aac (0x00000000) 0 bits/channel, 0 bytes/packet, 1024 frames/packet, 0 bytes/frame
prepared started: true
AudioQueueObject.cpp:7581 ConvertInput: aq@0x10381be00: AudioConverterFillComplexBuffer returned -50, packetCount 5
recording started: false
```

All the examples I find look the same, but apparently there must be something different.
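One thing worth trying (an assumption based on the "from 0 ch" in the converter log, which suggests no active input route; not a confirmed fix): configure and activate a record-capable AVAudioSession before creating the recorder, and test on a real device rather than the simulator.

```
import AVFoundation

// Sketch: give the recorder an active, record-capable session before
// creating it, so an input route exists when the converter is built.
func prepareRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)
}
```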
Replies: 1 · Boosts: 0 · Views: 338 · Activity: Oct ’25
Where is the License Agreement for the Android version of ShazamKit?
I have integrated the ShazamKit SDK into my iOS app and would like to implement the same functionality in my Android app. My question is: Can I use the Android version of the ShazamKit SDK for commercial purposes? After extensive research, I could not find any official information regarding the license of the Android version of the ShazamKit SDK. Could you please provide a formal license statement?
Replies: 1 · Boosts: 0 · Views: 140 · Activity: Apr ’25
No mic capture on iOS 18.5
Hello! We stumbled upon a problem with our karaoke app where a user on iPhone 16e/iOS 18.5 has a problem with mic capture: other users cannot hear him. Mic capture works fine on 17.5 and 16.8. Maybe there is something else we need when configuring AVAudioSession for iOS 18.5? Currently it's set up like this:

```
override func viewDidLoad() {
    super.viewDidLoad()
    UIApplication.shared.isIdleTimerDisabled = true
    mRoomId = appDelegate.getRoomId()
    let audioSession = AVAudioSession.sharedInstance()
    try! audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
    try! audioSession.setPreferredSampleRate(48000)
    try! audioSession.setActive(true, options: [])
}
```
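One thing to rule out (an assumption; the snippet above never checks it): microphone permission, since a denied permission can produce silent capture rather than an error. A minimal sketch using the iOS 17+ AVAudioApplication API:

```
import AVFAudio

// Sketch: verify (and if needed request) record permission before capture.
func ensureMicPermission(_ completion: @escaping (Bool) -> Void) {
    switch AVAudioApplication.shared.recordPermission {
    case .granted:
        completion(true)
    case .denied:
        completion(false)
    case .undetermined:
        AVAudioApplication.requestRecordPermission { granted in
            completion(granted)
        }
    @unknown default:
        completion(false)
    }
}
```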
Replies: 1 · Boosts: 0 · Views: 283 · Activity: Nov ’25
"Baking together" two audio tracks into one for drag-and-drop
Hi all, with my app ScreenFloat, you can record your screen, along with system and microphone audio. Those two audio feeds are recorded into separate audio tracks in order to individually remove or edit them later on. Now, these recordings you create with ScreenFloat can be drag-and-dropped to other apps instantly.

So far, so good, but some apps, like Slack, or VLC, or even websites like YouTube, do not play back multiple audio tracks, just one. So what I'm trying to do is, on dragging the video recording file out of ScreenFloat, instantly baking together the two individual audio tracks into one, and offering that new file as the drag-and-drop file, so that all audio is played in the target app. But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow.

My approach is this:
1. "Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession.
2. Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession.

For a quick benchmark, with a 3'40'' movie, step 1 takes ~1.7 seconds, and step 2 adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback. I could also do it in one step, but then I couldn't use the AV*Passthrough preset, and that makes it take around 32 seconds, because I assume it touches the video data (which is unnecessary in this case, so I think the two-step approach here is the fastest).

So, my question is: is there a faster way?

The best idea I can come up with right now is, when initially recording the screen with system and microphone audio as separate tracks, to also record both of them into a third, muted, "hidden" track I could use later on, basically eliminating the need for step one and just ripping the two single audio tracks out of the movie, keeping only the video and the "hidden" track (then unmuted). But I'd still have a ~1.5-second delay there, and there's the processing and data overhead (basically doubling the movie's audio data).

All this would be great for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal. I've discarded the idea of doing a promise file drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.

I'd appreciate any ideas or pointers. Thank you kindly, Matthias
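For reference, a minimal sketch of step 1 (assuming no per-track volume ramps are needed; add an AVMutableAudioMix if they are). With the AppleM4A preset, the export should mix all composition audio tracks down into the single output track:

```
import AVFoundation

// Sketch: mix a movie's audio tracks down to a single-track m4a.
func mixDownAudioTracks(of asset: AVAsset, to outputURL: URL) async throws {
    let composition = AVMutableComposition()
    let audioTracks = try await asset.loadTracks(withMediaType: .audio)
    let duration = try await asset.load(.duration)
    let range = CMTimeRange(start: .zero, duration: duration)

    // Put every source audio track into the composition; the export mixes them.
    for track in audioTracks {
        let compositionTrack = composition.addMutableTrack(
            withMediaType: .audio,
            preferredTrackID: kCMPersistentTrackID_Invalid)
        try compositionTrack?.insertTimeRange(range, of: track, at: .zero)
    }

    guard let export = AVAssetExportSession(
        asset: composition, presetName: AVAssetExportPresetAppleM4A) else { return }
    export.outputURL = outputURL
    export.outputFileType = .m4a
    await withCheckedContinuation { (continuation: CheckedContinuation<Void, Never>) in
        export.exportAsynchronously { continuation.resume() }
    }
}
```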
Replies: 2 · Boosts: 0 · Views: 721 · Activity: Mar ’25
Spatial Audio on Mac - When and how to render using Audio Units?
I'm working on adding Spatial Audio support to a game on the Mac. I'm looking at the SpatialAudioRenderer sample but having some issues:
- It's unclear to me when a device is compatible with Spatial Audio and when I should attempt to render Spatial Audio. There is no property that I can find on the Mac that advertises Spatial Audio compatibility of a device.
- The sample crashes when the output device is a USB device. This includes the Apple Studio Display.
- The Apple Studio Display is supposed to be capable of rendering Spatial Audio, but the device doesn't work with the sample. Do I still need to render down the 7.1.4 source on this device?
- The sample always renders down to stereo, but the Apple Studio Display is not a stereo device.

I'm a bit confused by the sample and when/how I should configure the mixing unit.
Replies: 0 · Boosts: 0 · Views: 112 · Activity: 4w
Keeping PiP alive during third-party video recording (camera capture)
I'm building a teleprompter-style app that relies on Picture in Picture. PiP starts correctly on device, and everything works until another app (e.g., TikTok or Instagram) starts active video recording. When camera capture begins in the foreground app, iOS terminates my PiP session. Some teleprompter apps appear to keep PiP active while recording in other apps, so I'm trying to understand the recommended architectural pattern for this scenario. Is there a documented approach or best practice to keep PiP stable during third-party camera capture? I'm looking specifically for guidance on the correct AVKit / AVAudioSession configuration for this use case.
Replies: 0 · Boosts: 0 · Views: 211 · Activity: 3w
Notification interruptions
My app Balletrax is a music player for people to use while they teach ballet. It used to be that you could silence notifications while the app was in use, but now the customer seems to have to know how to use Focus mode, remember to turn it on and off, and check which notifications they do and don't want to allow. Is there no way to silence all notifications when the app is in use?
Replies: 0 · Boosts: 0 · Views: 111 · Activity: Apr ’25
Track changes in the browser tab's audibility property.
Hi! I am writing a browser extension that allows you to control the playback of media content on a music service website. Unfortunately, Safari does not support tracking changes to the audible property via the tabs.onUpdated event. Is there an alternative to this event? I'm looking for a way to detect when the automatic inference engine interrupts playback on a music service website. Thank you.
Replies: 0 · Boosts: 0 · Views: 92 · Activity: Apr ’25