Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

All subtopics

Posts under Spatial Computing topic
Is it possible to live render CMTaggedBuffer / MV-HEVC frames in visionOS?
Hey all, I'm working on a visionOS app that captures live frames from the left and right cameras of Apple Vision Pro using cameraFrame.sample(for: .left/.right). Apple provides documentation on encoding side-by-side frames into MV-HEVC spatial video using CMTaggedBuffer ("Converting Side-by-Side 3D Video to MV-HEVC").

My question: is there any way to render tagged frames (e.g. CMTaggedBuffer with .stereoView(.leftEye/.rightEye)) live, directly to a surface in RealityKit or Metal, without saving them to a file? I'd like to create a true stereoscopic (spatial) live video preview, not just render two images side by side. Any advice or insights would be greatly appreciated!
2 replies · 0 boosts · 251 views · Aug ’25
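In principle no file step is required: each eye's pixel buffer can be pushed into a TextureResource.DrawableQueue and sampled per eye by a Reality Composer Pro shader graph that uses a Camera Index Switch node. A hedged sketch only; the class name, the 1920x1080 size, and the .bgra8Unorm format are assumptions to match to the real camera frames:

```swift
import RealityKit
import Metal
import CoreVideo

/// Sketch: one drawable queue per eye, fed from camera pixel buffers.
final class StereoPreview {
    let leftQueue: TextureResource.DrawableQueue
    let rightQueue: TextureResource.DrawableQueue
    private let device: MTLDevice
    private let commandQueue: MTLCommandQueue
    private var textureCache: CVMetalTextureCache?

    init() throws {
        device = MTLCreateSystemDefaultDevice()!
        commandQueue = device.makeCommandQueue()!
        let desc = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm, width: 1920, height: 1080,
            usage: [.shaderRead, .renderTarget], mipmapsMode: .none)
        leftQueue = try TextureResource.DrawableQueue(desc)
        rightQueue = try TextureResource.DrawableQueue(desc)
        CVMetalTextureCacheCreate(nil, nil, device, nil, &textureCache)
    }

    /// Blit one eye's pixel buffer into that eye's drawable queue.
    func update(eyeQueue: TextureResource.DrawableQueue, with pixelBuffer: CVPixelBuffer) {
        guard let cache = textureCache,
              let drawable = try? eyeQueue.nextDrawable() else { return }
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(
            nil, cache, pixelBuffer, nil, .bgra8Unorm,
            CVPixelBufferGetWidth(pixelBuffer),
            CVPixelBufferGetHeight(pixelBuffer), 0, &cvTexture)
        guard let cvTexture, let source = CVMetalTextureGetTexture(cvTexture),
              let commandBuffer = commandQueue.makeCommandBuffer(),
              let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: source, to: drawable.texture) // assumes matching size/format
        blit.endEncoding()
        commandBuffer.commit()
        drawable.present()
    }
}
```

To hook the queues up, you would create two placeholder TextureResources, call replace(withDrawables:) with each queue, and bind them to the material's left/right texture parameters via ShaderGraphMaterial.setParameter(name:value:), letting the Camera Index Switch node pick the texture per eye.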
visionOS 2 - Screen capture with passthrough
We're trying to switch from main camera access via ARKit to screen capture with passthrough, but we're facing some issues that are proving complicated to debug. We have set up a broadcast extension and added logs to the SampleHandler, but we get nothing in the console, and the recording never starts. We set up the picker as well, and we can see our extension in Control Center as one of the choices, but tapping Start results in it stopping in less than a second. The only (rather contradictory) pair of messages we see in Console.app is:

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license

immediately followed by:

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
2 replies · 1 boost · 563 views · Dec ’25
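For what it's worth, the extension's handler can be reduced to the bare minimum below to rule out wiring problems (the subsystem string is made up; os.Logger output is easier to find from Console.app than print() when debugging an extension):

```swift
import ReplayKit
import CoreMedia
import os

// Bare-minimum broadcast handler, just to verify the extension launches.
class SampleHandler: RPBroadcastSampleHandler {
    private let logger = Logger(subsystem: "com.example.broadcast", category: "handler")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        logger.info("broadcastStarted")
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        if sampleBufferType == .video {
            logger.info("received video sample")
        }
    }

    override func broadcastFinished() {
        logger.info("broadcastFinished")
    }
}
```

If broadcastStarted never logs, the failure is upstream of your code; the flip-flopping license messages suggest checking that the passthrough screen-capture entitlement and license file are embedded in the extension target as well as the app, though that diagnosis is an assumption.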
Person Occlusion on the Vision Pro
I am currently creating an app where two people share an instance of an immersive space, so that they can point to certain things in the space together. Right now other people are hidden behind the immersive space, and even with people awareness enabled for everything, they are still too difficult to see. I've found documentation (https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people) that describes what I want to do, but it is only listed as working on iOS and iPadOS. Is there anything similar to this that will work on visionOS?
2 replies · 0 boosts · 143 views · Mar ’25
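For reference, this is the iOS/iPadOS setup from the linked article; as far as I know there is no public visionOS counterpart, since passthrough people rendering there is system-managed:

```swift
import ARKit

// iOS/iPadOS person occlusion (not available on visionOS today).
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
session.run(configuration)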
RoomPlan exceeded scene size limit error (RoomCaptureSession.CaptureError.exceedSceneSizeLimit)
Error: RoomCaptureSession.CaptureError.exceedSceneSizeLimit

Apple documentation explanation: "An error that indicates when the scene size grows past the framework's limitations."

Issue: This error pops up on my iPhone 14 Pro (128 GB) after a few RoomPlan scans are done. It shows up even if the room size is small, and it occurs immediately after I start the RoomCaptureSession, following relocalization of the previous AR session (in a world-tracking configuration). I am having trouble understanding exactly why this error appears and how to debug/solve it. Does anyone have any idea how to approach this issue?
2 replies · 1 boost · 1.3k views · Dec ’25
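A sketch of one way to isolate and react to this specific error; the start-a-fresh-session strategy is an untested assumption, based on the report that the error follows relocalization of a previous session:

```swift
import RoomPlan

// Sketch: detect the size-limit error and restart with a brand-new
// session instead of resuming capture on accumulated scene extent.
final class ScanCoordinator: RoomCaptureSessionDelegate {
    var captureSession = RoomCaptureSession()

    func startScan() {
        captureSession.delegate = self
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func captureSession(_ session: RoomCaptureSession,
                        didEndWith data: CapturedRoomData, error: Error?) {
        guard let error = error as? RoomCaptureSession.CaptureError else { return }
        if case .exceedSceneSizeLimit = error {
            // Discard the overgrown session and begin a clean one.
            captureSession = RoomCaptureSession()
            startScan()
        }
    }
}
```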
How to configure spatial audio on a VideoMaterial? Compile error
I've tried following Apple's documentation to apply a video material to a ModelEntity, but I hit a compile error while attempting to specify the spatial audio type. It's a 360° video on a sphere, which plays just fine, but the audio is too quiet compared to the volume I get when I preview the video in Xcode. So I tried to configure the audio playback mode on the material, but it gives me a compile error:

'audioInputMode' is unavailable in visionOS
'audioInputMode' has been explicitly marked unavailable here (RealityFoundation.VideoPlaybackController.audioInputMode)

https://developer.apple.com/documentation/realitykit/videomaterial/

Code:

let player = AVPlayer(url: url)
// Instantiate and configure the video material.
let material = VideoMaterial(avPlayer: player)
// Configure audio playback mode.
material.controller.audioInputMode = .spatial // this line won't compile

visionOS 2.4, Xcode 16.4 (also tried Xcode 26 beta 2). The videos are HEVC in MPEG-4 containers. Is there any other way to do this, or is there a workaround available? Thank you.
2 replies · 0 boosts · 253 views · Jul ’25
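Since audioInputMode is marked unavailable on visionOS, one avenue worth trying is configuring audio through components on the entity that carries the material, rather than through the playback controller. A hedged sketch; the URL and sphere are stand-ins for the poster's assets, and whether the gain affects a given asset's sound should be verified:

```swift
import RealityKit
import AVFoundation

// 'url' and 'videoSphere' are placeholders for the existing asset/entity.
let url = URL(fileURLWithPath: "/path/to/video.mp4")
let player = AVPlayer(url: url)
let material = VideoMaterial(avPlayer: player)

let videoSphere = ModelEntity(mesh: .generateSphere(radius: 50))
videoSphere.model?.materials = [material]

// Configure audio on the entity instead of the unavailable
// controller.audioInputMode. Gain is in relative decibels.
videoSphere.components.set(SpatialAudioComponent(gain: 10))

// For a surrounding 360° video, ambient rendering may fit better:
// videoSphere.components.set(AmbientAudioComponent())

player.play()
```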
Apply mesh to real-world people
As far as I know, Apple hasn’t opened access to the Vision Pro camera for developers yet, so I’m trying to find possible workarounds within the current capabilities. I’m wondering if there’s any way to apply a mesh to a person in the scene in Vision Pro, or if there’s an alternative approach to roughly detect a human shape in front of the user?
2 replies · 0 boosts · 112 views · Apr ’25
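Absent camera access, one public signal that at least covers "something is physically in front of the user" is ARKit's scene-reconstruction mesh. It is coarse, updates slowly, and is not person-aware, so treat this only as a rough sketch of the idea:

```swift
import ARKit
import RealityKit

// Rough sketch: watch scene-reconstruction mesh anchors.
let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

func watchMeshes() async throws {
    try await session.run([sceneReconstruction])
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        // Build visual or collision geometry from the anchor, e.g.:
        // let shape = try await ShapeResource.generateStaticMesh(from: meshAnchor)
        _ = meshAnchor
    }
}
```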
Why VideoMaterial can't show transparency on Apple Vision Pro
https://developer.apple.com/documentation/realitykit/videomaterial

The documentation says: "Video materials support transparency if the source video's file format also supports transparency." I have a transparent video (Hand.mov, HEVC with alpha). I can show the video with a transparent background correctly in the Vision Pro simulator, but on the physical device the video has a black background. I'm sure the video format is OK, because I can get the texture from the video and display it on an UnlitMaterial. How can I show the transparent video correctly with RealityKit/VideoMaterial?
2 replies · 0 boosts · 269 views · Dec ’25
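Before digging into RealityKit, it may help to confirm that the on-device media stack even reports an alpha channel for the asset. A small diagnostic sketch (the function name is made up; the AVFoundation calls are standard):

```swift
import AVFoundation

// Returns true if the first video track advertises an alpha channel.
func hasAlpha(url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else {
        return false
    }
    return try await track.load(.mediaCharacteristics)
        .contains(.containsAlphaChannel)
}
```

If this returns true on device while VideoMaterial still renders black, that points at the material path rather than the encode.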
In visionOS 26.2 RC, pushWindow + dismissWindow is broken
I recently added pushWindow to my app, and I discovered that in visionOS 26.2 RC (23N301), pushWindow followed by dismissWindow no longer works as expected. Specifically, if the user moves the pushed window, then when the pushed window is later dismissed, the parent window's position isn't aligned with the pushed window's new position. Its original position is restored instead. Curiously, the bug only happens when an app is launched from the visionOS home view, and not when an app is launched from Xcode. It also doesn't happen in the visionOS 26.2 simulator. Another interesting detail is that while the parent window is hidden, if the user long-presses the Digital Crown and then dismisses the pushed window, the parent window's position seems to be immune from the Digital Crown scene reorientation. It's restored to its original real world position. Demo video: https://youtu.be/zR3t2ON3Wz0 I've submitted feedback as FB21287011 with a sample app and detailed repro steps. Has anyone else encountered this issue already and figured out a workaround? It would be nice if I could get pushWindow to work correctly in my app. Thanks everybody! 😀
2 replies · 2 boosts · 559 views · Dec ’25
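For anyone wanting to reproduce this in isolation, a minimal shape of the setup described above might look like the following (the window IDs are made up):

```swift
import SwiftUI

// Parent window pushes a child window, which later dismisses itself.
@main
struct PushDemoApp: App {
    var body: some Scene {
        WindowGroup(id: "parent") { ParentView() }
        WindowGroup(id: "child") { ChildView() }
    }
}

struct ParentView: View {
    @Environment(\.pushWindow) private var pushWindow
    var body: some View {
        Button("Push child window") { pushWindow(id: "child") }
    }
}

struct ChildView: View {
    @Environment(\.dismissWindow) private var dismissWindow
    var body: some View {
        // Move this window first to trigger the reported misalignment.
        Button("Dismiss") { dismissWindow(id: "child") }
    }
}
```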
RealityKit fullscreen layer
Hi! I'm currently trying to render another XR scene in front of a RealityKit one. At the moment I'm anchoring a plane to the head, with a shader that displays side-by-side images for the left/right eyes. The camera has a near plane by default, so I can't draw directly at z = 0. Is there a way to change the camera's near plane? Or maybe there is a better solution for overlaying an image/texture per eye? Ideally, I would layer some kind of CompositorLayer on top of RealityKit, but as far as I know that's sadly not possible. Thanks in advance and have a good day!
2 replies · 0 boosts · 325 views · Jul ’25
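For what it's worth, RealityKit on visionOS doesn't expose a near-plane setting to change; the usual workaround is to keep the head-anchored quad inside the frustum rather than at z = 0. A minimal sketch, where the sizes and distance are placeholders to tune:

```swift
import SwiftUI
import RealityKit

struct OverlayView: View {
    var body: some View {
        RealityView { content in
            let headAnchor = AnchorEntity(.head)
            let quad = ModelEntity(mesh: .generatePlane(width: 1.2, height: 0.675))
            quad.position = [0, 0, -1]   // 1 m in front of the head
            headAnchor.addChild(quad)
            content.add(headAnchor)
        }
    }
}
```

If you genuinely need per-eye control of the final composite, Compositor Services is the supported route, but it replaces RealityKit's rendering entirely.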
CameraFrameProvider distortion correction
Hi, I'm trying to correct the lens distortion in frames provided by the Enterprise API camera frame provider. The frames seem to come with only intrinsics/extrinsics info, but not the distortion lookup table. Is there some magic setting or function for this (I can't seem to find anything like it)? Or is there a way to use AVCameraCalibrationData together with the provider?
2 replies · 0 boosts · 447 views · Mar ’25
Attaching a hand model to your hands
Hi, we have been working on an application that attaches a hand model to the user's hands. Apple's "Animating hand models in visionOS" sample project is a useful starting point: https://developer.apple.com/documentation/visionOS/animating-hand-models-in-visionOS We have been trying to attach our own hand model, but have had some issues with how it attaches to the hand. Our hand model should include the forearm all the way up to the user's elbow. I have attached a sample project of what our code currently looks like so you can run it; just select "Show Immersive Space" to attach the models. The left hand model is the space glove that we were trying to mirror. The right hand model is the one we have been using. I have mapped each of the joints to the corresponding joint name on our model. The first issue seems to be the placement of the forearm: it attaches itself at the wrist. The second issue seems to be rotation. Our team is looking for guidance on what needs to change in order to map this model correctly. Thanks in advance!
2 replies · 0 boosts · 137 views · 4d
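One detail that may help: the visionOS hand skeleton exposes dedicated forearm joints, so the elbow end of a model can track .forearmArm rather than being parented at the wrist. A hedged sketch; the entity name is made up, and a rig-specific corrective rotation will likely still be needed on top of this:

```swift
import ARKit
import RealityKit

// Place a forearm entity using the skeleton's elbow-end forearm joint.
func updateForearm(handAnchor: HandAnchor, forearmEntity: Entity) {
    guard let skeleton = handAnchor.handSkeleton else { return }
    let elbowJoint = skeleton.joint(.forearmArm)   // elbow end of the forearm
    guard elbowJoint.isTracked else { return }
    // Joint transforms are relative to the hand anchor; compose with the
    // anchor's own transform to get a world-space matrix.
    let world = handAnchor.originFromAnchorTransform
              * elbowJoint.anchorFromJointTransform
    forearmEntity.setTransformMatrix(world, relativeTo: nil)
}
```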
Reality Composer to Reality Composer Pro?
I sketched an idea for a project in Reality Composer on my iPad, thinking that when I had a chance to sit down I would work it up in Xcode. However, when I got back to my computer, I discovered that I cannot open a file created in Reality Composer (or the exported Reality file) in Reality Composer Pro. Am I missing something obvious here? This seems like a huge oversight. If anyone can let me know how to open a file created in Reality Composer in Reality Composer Pro, I would greatly appreciate it, partly because there seem to be objects available in Reality Composer that are not in Reality Composer Pro. Thanks, Stan
2 replies · 1 boost · 339 views · Jun ’25
CompositorServices or RealityKit?
I have been concentrating on visionOS application development. I am quite familiar with RealityKit by now, but Compositor Services has also caught my attention, and I have not learned it yet. Could you please clarify whether it is essential for me to learn Compositor Services? I would also appreciate insights into the respective advantages of RealityKit and Compositor Services.
2 replies · 0 boosts · 797 views · Mar ’25
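For context, the two sit at different levels: RealityKit renders entities for you, while Compositor Services hands you a LayerRenderer and leaves every frame of Metal drawing to your code. A minimal sketch of the Compositor Services shape, with the layer configuration left at its default and names made up:

```swift
import SwiftUI
import CompositorServices

@main
struct MetalImmersiveApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "metal") {
            CompositorLayer { layerRenderer in
                // Start a render loop here: query frame timing, drawables,
                // and per-eye views from layerRenderer each frame.
            }
        }
    }
}
```

Practically: if RealityKit meets your needs, Compositor Services is not essential; it exists for fully custom Metal renderers.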
Vision Pro camera frame rate
Hi, I'm working with CameraFrameProvider from the Enterprise API. Is it always capped at 30 fps, or is there something I can switch to get more? I assume it is capped at 30, so let me cram in an additional question here :). If I get a developer strap and attach an external camera capable of more than 30 fps, will I get the full stream, or will some other limitation kick in?
2 replies · 0 boosts · 131 views · Apr ’25
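One way to answer the 30 fps question empirically on-device is to time the delivered update stream for a supported format. A sketch assuming the Enterprise main-camera entitlement is in place:

```swift
import ARKit
import Foundation

// Count frames from the camera update stream and estimate the rate.
let session = ARKitSession()
let provider = CameraFrameProvider()

func measureFPS() async throws {
    try await session.run([provider])
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main,
                                                          cameraPositions: [.left])
    guard let format = formats.first,
          let updates = provider.cameraFrameUpdates(for: format) else { return }
    var count = 0
    let start = Date()
    for await _ in updates {
        count += 1
        if count == 120 {
            let fps = Double(count) / Date().timeIntervalSince(start)
            print(String(format: "~%.1f fps", fps))
            break
        }
    }
}
```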
ModelEntity position values are stale after world recenter (Crown long press)
Hi everyone, I'm working with RealityKit on visionOS and I'm seeing unexpected behavior when the user long-presses the Digital Crown, which recenters the world.

Observed behavior:
- When the world is recentered via a long press of the Crown, the models remain visually in the correct place (as expected).
- However, if I query the model's position or transform immediately after recentering (e.g. entity.position or similar), I still get the old values from before the recenter.
- As soon as I interact with the model using a gesture (drag/rotate/scale), the position updates, and querying it then returns the correct, updated values.

So effectively:
1. Recenter happens
2. Visual position is correct
3. Programmatic position remains stale
4. First gesture causes the position to "snap" to the correct updated value

Questions:
- Is there any event, notification, or callback that fires when the world is recentered due to a long press of the Crown button?
- Is there a recommended way to get the updated world-space transform immediately after recentering, without waiting for a gesture?
- Is this expected behavior due to deferred/lazy transform updates in RealityKit?

Right now it feels like recentering updates the coordinate system but doesn't immediately commit new transform values to entities until some interaction occurs. Any guidance or best-practice patterns for handling this would be appreciated. Thanks!
2 replies · 0 boosts · 648 views · 3w
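While I don't know of a public recenter notification, a per-frame diagnostic can at least pinpoint when RealityKit commits the new transform. A sketch, where the model property stands in for the entity whose position looks stale:

```swift
import SwiftUI
import RealityKit

// Log the world-space position every frame to see when the
// post-recenter value actually lands.
struct DiagnosticView: View {
    @State private var subscription: EventSubscription?
    let model: Entity

    var body: some View {
        RealityView { content in
            content.add(model)
            subscription = content.subscribe(to: SceneEvents.Update.self) { _ in
                print("world position:", model.position(relativeTo: nil))
            }
        }
    }
}
```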
How to integrate Apple Immersive Video into the app you are developing.
Hello! I have a few questions about Apple Immersive Video (https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/).

I am considering a feature that plays Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into the Apple Immersive Video format.

First, is it possible to integrate Apple Immersive Video into an app at all? Could you provide information about the required software and the integration process? It would be great if you could also share any helpful website resources.

I am also considering creating Apple Immersive Video content myself, and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.

As mentioned, I'm planning to play Apple Immersive Video as a background, and in doing so I would also like to place some 3D models as RealityKit entities, along with spatial audio elements. I'm planning to build the visionOS app as a Full Space Mixed experience. Is an immersive viewing experience with Apple Immersive Video possible in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed?

That's all for now. Thank you in advance!
2 replies · 1 boost · 910 views · Nov ’25