Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic


iOS ARKit Blendshapes on visionOS
I work on motion-capture systems for VTubing. I can't find any information on accessing, from visionOS, the face-tracking features that exist on iOS. I would love to bring VStreamer Live to visionOS.
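For reference, here is a minimal iOS-side sketch of the blendshape access in question (this is the iOS ARKit API; as far as I know, current visionOS releases do not expose an equivalent ARFaceAnchor/blendshape API to third-party apps):

    import ARKit

    // Requires an ARSession running ARFaceTrackingConfiguration on a
    // TrueDepth-equipped iOS device.
    final class FaceDelegate: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let faceAnchor as ARFaceAnchor in anchors {
                // Blendshape coefficients are normalized to 0...1.
                let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
                print("jawOpen:", jawOpen)
            }
        }
    }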
0 replies · 0 boosts · 67 views · Jun ’25

High CPU with ARWorldTrackingConfiguration vs. ARBodyTrackingConfiguration using AREnvironmentTexturing
In a simple test, I'm observing roughly 30% higher CPU usage with ARWorldTrackingConfiguration than with ARBodyTrackingConfiguration when both configurations have AREnvironmentTexturing enabled. In Instruments, I see Recon3D consuming about 5.5 seconds of CPU time with ARWorldTrackingConfiguration vs. less than 0.3 seconds with ARBodyTrackingConfiguration, in two separate 30-second samples. This is on an iPhone 12 Pro equipped with LiDAR. Why would two separate configurations, both with the same features enabled, have such different CPU overhead?
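To make the comparison concrete, here is a minimal sketch of the two setups being compared, reduced to the configuration step only; both enable the same environment-texturing feature:

    import ARKit

    // Same feature enabled in both cases; only the configuration class differs.
    func makeWorldConfiguration() -> ARWorldTrackingConfiguration {
        let configuration = ARWorldTrackingConfiguration()
        configuration.environmentTexturing = .automatic
        return configuration
    }

    func makeBodyConfiguration() -> ARBodyTrackingConfiguration {
        let configuration = ARBodyTrackingConfiguration()
        configuration.environmentTexturing = .automatic
        return configuration
    }

Running each under Instruments for a 30-second sample and comparing the Recon3D CPU time reproduces the numbers above.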
0 replies · 0 boosts · 181 views · Aug ’25

Material showing only half?
Hi, I'm currently implementing 180°/360° playback for immersive video in my app. Implementing 360° was easy: I just applied a VideoMaterial to a flipped sphere. However, I'm a bit stuck on 180°. I want to implement it by applying the VideoMaterial to a hemisphere mesh, but RealityKit doesn't provide a built-in function such as MeshResource.generateHemisphere yet. As a workaround, I'd like to make the front half of the sphere show the VideoMaterial and leave the back half transparent, so the sphere looks like a hemisphere, but I can't figure out how to implement this. I would appreciate any advice, ideas, or information that might help.
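In case it helps, here is a minimal, untested sketch of building a hemisphere directly with MeshDescriptor by sweeping only 180° of longitude. VideoMaterial may have specific mesh requirements, so treat this as a starting point rather than a confirmed solution:

    import Foundation
    import RealityKit

    // Untested sketch: a hemisphere mesh covering 180° of longitude, with
    // UVs mapping the video across the visible half.
    func generateHemisphere(radius: Float, slices: Int = 48, stacks: Int = 24) -> MeshResource? {
        var positions: [SIMD3<Float>] = []
        var uvs: [SIMD2<Float>] = []
        var indices: [UInt32] = []

        for stack in 0...stacks {
            let phi = Float.pi * Float(stack) / Float(stacks)                  // pole to pole
            for slice in 0...slices {
                let theta = Float.pi * Float(slice) / Float(slices) - .pi / 2  // -90° to +90°
                positions.append(SIMD3(radius * sin(phi) * sin(theta),
                                       radius * cos(phi),
                                       -radius * sin(phi) * cos(theta)))
                uvs.append(SIMD2(Float(slice) / Float(slices),
                                 Float(stack) / Float(stacks)))
            }
        }
        for stack in 0..<stacks {
            for slice in 0..<slices {
                let a = UInt32(stack * (slices + 1) + slice)
                let b = a + UInt32(slices + 1)
                // If the material ends up on the outside, reverse this winding
                // (or flip the entity's x scale, as with a full sphere).
                indices.append(contentsOf: [a, b, a + 1, a + 1, b, b + 1])
            }
        }

        var descriptor = MeshDescriptor(name: "hemisphere")
        descriptor.positions = .init(positions)
        descriptor.textureCoordinates = .init(uvs)
        descriptor.primitives = .triangles(indices)
        return try? MeshResource.generate(from: [descriptor])
    }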
0 replies · 0 boosts · 99 views · Oct ’25

Unity PolySpatial – Live handheld camera feed of graspable objects not rendering on Vision Pro
I am developing a Unity application for Apple Vision Pro using PolySpatial and RealityKit integration. The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.

In the Unity Editor, this setup works correctly: the RenderTexture updates in real time, and the quad displays the camera's view as expected. However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera; no scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.

Steps I have taken:
- Created a new layer (CameraFeed) and assigned the relevant objects to it.
- Set the secondary camera's culling mask to render only the CameraFeed layer.
- Assigned the RenderTexture as the camera's target texture.
- Applied the RenderTexture to an Unlit/Texture material on a quad.
- Confirmed the camera is active and correctly positioned relative to the object.

From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects; secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.

My questions:
- Is there any official way to enable multiple-camera rendering of RealityKit-managed objects through PolySpatial?
- Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
- Has anyone found alternative design patterns or methods for this kind of interaction?

Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR 2.2.4

Any insight or suggestions would be greatly appreciated. Thank you.
0 replies · 0 boosts · 125 views · Apr ’25

AR session fails with "Required sensor failed"
The AR-based app I am working on is experiencing an issue. Sometimes the AR session fails, invoking my ARSessionObserver's session(_ session: ARSession, didFailWithError error: Error) with the following error:

Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." NSLocalizedFailureReason="A sensor failed to deliver the required input.," NSLocalizedRecoverySuggestion="Make sure that the application has the required privacy settings."

The underlying error seems to point to the CoreMotion framework:

Domain=CMErrorDomain Code=102 "(null)

Some people seem to have experienced this issue and solved it by making sure that the Compass Calibration switch is ON in Settings > Privacy > Location Services > System Services. For context, ARWorldTrackingConfiguration.worldAlignment is set to .gravity. The thing is, that switch is already ON when I experience this issue. I also noticed that this issue happens far more often on the iPhone 16e than on any other device.

Has anyone had similar experiences? I am looking for a way to prevent this error from happening (ideally), or to handle it in a way that does not affect the user. Any help is appreciated.
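Until the root cause is found, here is a minimal sketch of one way to handle the failure without stranding the user; the single automatic retry is an assumption on my part, not Apple guidance:

    import ARKit

    final class SessionObserver: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didFailWithError error: Error) {
            guard let arError = error as? ARError, arError.code == .sensorFailed else { return }
            // A transient sensor/CoreMotion failure sometimes clears on a rerun.
            // Consider counting retries and surfacing UI if it keeps failing.
            DispatchQueue.main.async {
                let configuration = ARWorldTrackingConfiguration()
                configuration.worldAlignment = .gravity
                session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
            }
        }
    }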
0 replies · 1 boost · 249 views · Aug ’25

RoomPlan - The delegate of ARSession is retaining x ARFrames
Hi, I'm encountering an issue in our app that uses RoomPlan and ARSession for scanning. After prolonged use, especially under heavy load from both the scanning process and other unrelated app operations, the iPhone becomes very hot, and the following warning begins to appear more frequently:

"ARSession <0x107559680>: The delegate of ARSession is retaining 11 ARFrames. The camera will stop delivering camera images if the delegate keeps holding on to too many ARFrames. This could be a threading or memory management issue in the delegate and should be fixed."

I was able to reproduce this behavior using Apple's RoomPlanExampleApp with only one change: I introduced a CPU-intensive workload at the end of the startSession() function:

    DispatchQueue.global().asyncAfter(deadline: .now() + 5) {
        for _ in 0..<4 {
            var value = 10_000
            DispatchQueue.global().async {
                while true {
                    value *= 10_000
                    value /= 10_000
                    value ^= 10_000
                    value = 10_000
                }
            }
        }
    }

I suspect this is a RoomPlan API problem, which is why I filed feedback 17441091.
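Separately from the possible RoomPlan bug, the warning text itself usually means ARFrames are outliving the delegate callback. A minimal sketch of the standard mitigation, where the process method is a hypothetical stand-in for the app's own per-frame work:

    import ARKit

    final class FrameHandler: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // Copy out only the lightweight values you need; do NOT capture
            // `frame` in the closure below, or ARKit's small frame pool drains.
            let cameraTransform = frame.camera.transform
            let timestamp = frame.timestamp

            DispatchQueue.global(qos: .userInitiated).async { [weak self] in
                self?.process(transform: cameraTransform, at: timestamp)
            }
        }

        // Hypothetical stand-in for the app's heavy per-frame work.
        private func process(transform: simd_float4x4, at timestamp: TimeInterval) {
            _ = transform; _ = timestamp
        }
    }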
0 replies · 0 boosts · 271 views · May ’25

Cannot reassign worldTracking / planeDetection providers in my PlacementManager when switching environments
Environment: Xcode 16.2, visionOS SDK 2.4, Swift 6.1. Target: Apple Vision Pro (immersive space). Frameworks: ARKit, RealityKit, SwiftUI.

What I'm trying to do: I have a view-model class PlacementManager that holds two AR providers:

    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider

I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).

What's happening: If I declare them as

    private let worldTracking = WorldTrackingProvider()
    private let planeDetection = PlaneDetectionProvider()

I get compile errors when I later do:

    self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant

If I change them to uninitialized vars:

    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider

then in my init() I get:

    self used in property access 'worldTracking' before all stored properties are initialized

Code snippet:

    @Observable
    final class PlacementManager: ObservableObject {
        private var worldTracking: WorldTrackingProvider
        private var planeDetection: PlaneDetectionProvider
        // … other props …

        @MainActor
        init() {
            // error: self.worldTracking used before init…
            planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
            persistenceManager = PersistenceManager(
                worldTracking: worldTracking,
                rootEntity: root
            )
            // …
        }

        @MainActor
        func setEnvironment(env: Environnement) async {
            let newWorldTracking = WorldTrackingProvider()
            let newPlaneDetection = PlaneDetectionProvider()
            try await appState!.arkitSession.run(
                [newWorldTracking, newPlaneDetection]
            )
            self.worldTracking = newWorldTracking
            self.planeDetection = newPlaneDetection
            // …
        }
    }

What I've tried:
- Giving them default values at declaration (= WorldTrackingProvider())
- Initializing them at the top of init() before any use
- Passing the new providers into arkitSession.run(...)

My question: What is the recommended Swift pattern to declare and reassign these ARKit provider properties so that they're fully initialized before use in init(), and I can swap them out later in setEnvironment(...) without compiler errors? Any pointers (or links to forum threads/docs) would be greatly appreciated!
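For what it's worth, a minimal sketch of one pattern that satisfies both constraints (the class name is hypothetical and the type is trimmed to the essentials; note that an @Observable class normally should not also conform to ObservableObject):

    import ARKit
    import Observation

    @Observable
    @MainActor
    final class ProviderStore {
        // Inline defaults make the providers safe to use inside init()…
        private var worldTracking = WorldTrackingProvider()
        private var planeDetection = PlaneDetectionProvider()

        // …while `var` still allows wholesale replacement later.
        func setEnvironment(session: ARKitSession) async throws {
            let newWorldTracking = WorldTrackingProvider()
            let newPlaneDetection = PlaneDetectionProvider()
            try await session.run([newWorldTracking, newPlaneDetection])
            worldTracking = newWorldTracking
            planeDetection = newPlaneDetection
        }
    }

Any collaborators that received a provider in init() (like the PersistenceManager above) would also need to be refreshed, or given a way to look up the current provider, after each swap.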
0 replies · 0 boosts · 139 views · May ’25

MagnifyGesture in RealityView does not work on macOS
I have tested the MagnifyGesture code below on multiple devices:

Vision Pro - working
iPhone - working
iPad - working
macOS - not working

In Reality Composer Pro, I have also added the following components to the test model entity: Input Target, Collision.

For macOS, I tried the trackpad pinch gesture and the mouse scroll wheel, but neither approach works. How can I resolve this issue? Thank you.

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct ContentView: View {
        var body: some View {
            RealityView { content in
                // Add the initial RealityKit content
                if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                }
            }
            .gesture(MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged(onMagnifyChanged)
                .onEnded(onMagnifyEnded))
        }

        func onMagnifyChanged(_ value: EntityTargetValue<MagnifyGesture.Value>) {
            print("onMagnifyChanged")
        }

        func onMagnifyEnded(_ value: EntityTargetValue<MagnifyGesture.Value>) {
            print("onMagnifyEnded")
        }
    }
0 replies · 0 boosts · 73 views · Mar ’25

Possible to detect multiple images at the same time on Vision Pro?
I'm working on a project that uses ImageTrackingProvider through ARKit on Vision Pro, and I want to detect multiple images (about 5) and show information for each of them at the same time. However, it seems the device can only detect one image at a time, and the maximumNumberOfTrackedImages API that controls this appears to be available only on iOS, not visionOS. Does anyone know a possible way to detect multiple images at the same time on Vision Pro?
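For reference, a minimal sketch of how ImageTrackingProvider accepts multiple reference images on visionOS (the resource group name "Markers" is an assumption); whether all of them are tracked simultaneously is exactly the open question:

    import ARKit

    struct ImageTracker {
        let session = ARKitSession()
        let provider = ImageTrackingProvider(
            referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "Markers")
        )

        func start() async throws {
            try await session.run([provider])
            for await update in provider.anchorUpdates {
                // Each detected image surfaces as its own ImageAnchor.
                print(update.anchor.referenceImage.name ?? "unnamed", update.event)
            }
        }
    }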
0 replies · 0 boosts · 476 views · Oct ’25

RealityView attachment draw order
My visionOS 26.3 app displays a diorama-like scene in a RealityView in a mixed immersive space, about 1 meter square, with view attachments floating above the scene. Each view attachment fades out after user interaction, by animating the view's opacity. What I'm observing is that depending on the position of a view attachment relative to the scene and the camera, an unwanted cutout effect is observed (presumably because of draw order issues), as shown in the right column in the screenshots below. YouTube video link of these sequences: https://youtu.be/oTuo0okKCkc (19 seconds) My question: How does visionOS determine the view attachment draw order relative to the RealityView scene? If I better understood how the draw order is determined, I could modify my scene to ensure that the view attachments were always drawn after the scene, fixing the unwanted cutout effect. I've successfully used ModelSortGroupComponent to control the draw order of entities within the RealityView scene, but my understanding is that this approach cannot be used with view attachments. I've submitted FB22014370 about this issue. Thank you.
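For readers landing here, a minimal sketch of the ModelSortGroupComponent approach mentioned above (entity names are hypothetical); it controls draw order between entities inside the scene, but apparently not relative to view attachments:

    import RealityKit

    func applySortOrder(diorama: Entity, overlay: Entity) {
        let sortGroup = ModelSortGroup(depthPass: nil)
        // Lower order values draw first: the diorama renders before the overlay.
        diorama.components.set(ModelSortGroupComponent(group: sortGroup, order: 0))
        overlay.components.set(ModelSortGroupComponent(group: sortGroup, order: 1))
    }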
0 replies · 0 boosts · 160 views · 2w

360° video playback Issue
When rendering an equirectangular video on a sphere using VideoMaterial and MeshResource.generateSphere(), there is a visible black seam line running vertically on the sphere. This appears to be at the UV seam where the texture coordinates wrap from 1.0 back to 0.0. The same video file plays without any visible seam in other 360° video players on Vision Pro, so the issue is not with the video content itself.

Here is the relevant code:

    private func createVideoSphere(content: RealityViewContent, player: AVPlayer) {
        let sphere = MeshResource.generateSphere(radius: 1000)
        let material = VideoMaterial(avPlayer: player)
        let entity = ModelEntity(mesh: sphere, materials: [material])
        entity.scale *= .init(x: -1, y: 1, z: 1) // Flip to render on inside
        content.add(entity)
        player.play()
    }

The setup is straightforward:
- MeshResource.generateSphere(radius: 1000) generates the sphere mesh
- VideoMaterial(avPlayer:) provides the video texture
- X scale is flipped to -1 so the texture renders on the inside of the sphere
- The video is a standard equirectangular 360° MP4 file

What I've tried: I attempted to create a custom sphere mesh using MeshDescriptor with duplicate vertices at the UV seam (longitude 0°/360°) to ensure proper UV continuity. However, VideoMaterial did not render any video on the custom mesh (only audio played), and the app eventually crashed. It seems VideoMaterial may have specific mesh requirements.

Questions:
- Is the black seam a known limitation of MeshResource.generateSphere() when used with VideoMaterial for 360° video?
- Is there a recommended way to eliminate this UV seam, for example a texture addressing mode or a specific mesh configuration that works with VideoMaterial?
- Is there an official sample project or code example for playing 360° equirectangular video in a fully immersive space on visionOS? That would be extremely helpful as a reference.

Any guidance would be greatly appreciated. Thank you!
0 replies · 0 boosts · 193 views · 2w

Vision Pro stuck on "Retrieving configuration"
Hi, after upgrading to 2.4.1 (from 1.0), my Vision Pro is stuck on the "Retrieving configuration" screen. The Apple Store couldn't support my case because the unit was sold in the USA and the product isn't yet available on the Italian market. I don't have a Developer Strap. How can I resolve this issue? Thank you.
0 replies · 0 boosts · 116 views · May ’25

Playing USDZ animation at last known location of reference object
Hi there, I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation, and I am trying to embrace it as a feature.)

Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer tracked? Is it possible to set this up in Reality Composer Pro? I'm trying to get the USDZ to play before the virtual content disappears (due to the reference object no longer being located), so that the vanishing of the content is smoothed out.

Nearly everything is set up in Reality Composer Pro, with my Immersive scene just adding virtual content to the reference object, which anchors it in the RCP scene. So my immersive view just does this:

    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)
    }

and this:

    .onAppear {
        appModel.immersiveSpaceState = .open
    }
    .onDisappear {
        appModel.immersiveSpaceState = .closed
    }

I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene, and whether this is the right way to go about it.

I have also implemented this at the beginning of object tracking: all I had to do was add an OnAppear behavior to the object to play a USDZ animation, and that works. Doing it for disappearing (due to loss of the reference object) seems to be a lot harder.
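One possible direction in code rather than in RCP, sketched minimally and assuming an ObjectTrackingProvider is already running (the function and parameter names are hypothetical): watch the anchor updates and trigger the USDZ's animation when tracking is lost, before hiding the content:

    import ARKit
    import RealityKit

    func monitorTrackingLoss(provider: ObjectTrackingProvider, content: Entity) async {
        for await update in provider.anchorUpdates {
            if update.event == .removed || !update.anchor.isTracked {
                // The entity keeps its last known transform; play the exit
                // animation before removing or fading the content.
                if let animation = content.availableAnimations.first {
                    content.playAnimation(animation, transitionDuration: 0.25)
                }
            }
        }
    }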
0 replies · 0 boosts · 117 views · Apr ’25

How to implement the semi-transparent overlay effect in Immersive View?
I noticed that when I drag the menu window in an immersive view, the entities behind it become semi-transparent, and the boundary between virtual and real-world objects is very pronounced. How does visionOS implement this effect? Is there an API or technique I can use in my own code to enable the same semi-transparent overlay, even when I am not dragging the menu window?
0 replies · 0 boosts · 143 views · May ’25

Multi-device coordination is cumbersome
During a livestream, I have to operate the Vision Pro (capture), a Mac (streaming), and a control console (scene switching) at the same time, with no unified control point. Operations such as adjusting 3D models and calibrating image quality require switching between devices, which is error-prone and inefficient.

Request: for livestreaming scenarios, provide dedicated desktop control software that offers one-stop management of the Vision Pro's capture parameters, 3D model switching, and mixed-reality compositing effects, making multi-device coordination visual and convenient.
0 replies · 0 boosts · 347 views · Dec ’25

Is it Possible to Place a 3D Model at the Exact Position of a QR Code in AR with ARKit?
I'm working on an iOS app using ARKit and RealityKit where I scan QR codes and want to place 3D models at the exact position of the QR code in the real world. Is it possible to accurately place a 3D model at the exact position of a QR code in AR using ARKit and RealityKit? Specifically, I want the model to appear at the precise location where the QR code is detected, rather than just somewhere in the AR space. If this is possible, could you point me in the right direction or recommend the best approach? Thank you for your help!
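For what it's worth, a minimal, untested sketch of the usual approach: detect the QR code with Vision, raycast through its center, and anchor the model at the hit point. The screen-point conversion here is simplified; production code should map through frame.displayTransform(for:viewportSize:):

    import ARKit
    import RealityKit
    import Vision

    func placeModel(_ model: ModelEntity, for frame: ARFrame, in arView: ARView) {
        let request = VNDetectBarcodesRequest()
        request.symbologies = [.qr]
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage)
        try? handler.perform([request])
        guard let qr = request.results?.first else { return }

        // Vision's boundingBox is normalized with a bottom-left origin.
        let viewPoint = CGPoint(x: qr.boundingBox.midX * arView.bounds.width,
                                y: (1 - qr.boundingBox.midY) * arView.bounds.height)
        if let hit = arView.raycast(from: viewPoint,
                                    allowing: .estimatedPlane,
                                    alignment: .any).first {
            let anchor = AnchorEntity(world: hit.worldTransform)
            anchor.addChild(model)
            arView.scene.addAnchor(anchor)
        }
    }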
0 replies · 0 boosts · 131 views · May ’25