I first started using the SwiftUI pushWindow API in visionOS 26.2, and I've reported several bugs I discovered, listed below.
Under certain circumstances, pushed window relationships may break, and this behavior affects all other apps, not just the app that caused the problem, until the next device reboot. In other cases, the system may crash and restart.
(FB21287011) When a window presented with pushWindow is dismissed, its parent window reappears in the wrong location
(FB21294645) Pinning a pushed window to a wall breaks pushWindow for all other apps on the system
(FB21594646) pushWindow interacts poorly with the window bar close app option
(FB21652261) If a window locked to a wall calls pushWindow, the original window becomes unlocked
(FB21652271) If a window locked in place calls pushWindow and the pushed window is closed, the system freezes
(FB21828413) pushWindow, UIApplication.open, and a dismissed immersive space result in multiple failures that require a device reboot
(FB21840747) visionOS randomly foregrounds a backgrounded immersive space app with a pushed window's parent window visible instead of the pushed window
(FB21864652) When a running app is selected in the visionOS home view, windows presented with pushWindow spontaneously close
(FB21873482) Pushed windows use the fixed scaling behavior instead of the dynamic scaling behavior
I'm posting the issues here in case this information is helpful to other developers. I'd also like to hear about other pushWindow issues developers have encountered, so I can watch out for them.
Questions:
I've discovered that some of the issues above can be partially worked around by applying the defaultLaunchBehavior and restorationBehavior scene modifiers to suppress window restoration and locking, which pushWindow appears to interact poorly with (a sketch of how I apply them is below). Are there other recommended workarounds?
I've observed that the Photos and Settings apps, which predate the pushWindow API, are not affected by the issues I reported. Are there other more reliable ways I could achieve the same behavior as pushWindow without relying on that API?
I'd appreciate any guidance Apple engineers could provide. Thank you.
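For reference, here is roughly how I apply the workaround from my first question today. This is a simplified sketch; the window identifiers and view names are placeholders:

```swift
import SwiftUI

@main
struct PushWindowDemoApp: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            MainView()
        }

        WindowGroup(id: "detail") {
            DetailView()
        }
        // Workaround: keep the pushed window out of launch and restoration handling,
        // which pushWindow currently seems to interact poorly with.
        .defaultLaunchBehavior(.suppressed)
        .restorationBehavior(.disabled)
    }
}

struct MainView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push Detail") {
            // Presents the "detail" window in place of this one.
            pushWindow(id: "detail")
        }
    }
}

struct DetailView: View {
    var body: some View {
        Text("Pushed window content")
    }
}
```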
I am experiencing a problem with three iPhone 13 Pros.
They are reporting the lowest quality for all points in the depth map from the LiDAR sensor.
The readings I get are unusable.
If it were just one phone, I would consider it a faulty sensor, but in this case three phones give the same result.
I have other iPhone 13 Pros that work as expected.
Has anyone else experienced similar behavior?
I am using iOS 18.4.1
https://developer.apple.com/documentation/avfoundation/avdepthdata/depthdataquality
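For reference, this is roughly how I'm reading the reported quality. It's a simplified sketch that assumes a capture session with an AVCaptureDepthDataOutput is already configured and delivering to this delegate:

```swift
import AVFoundation

// Minimal delegate sketch: log the quality and accuracy reported for each depth frame.
final class DepthQualityLogger: NSObject, AVCaptureDepthDataOutputDelegate {
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        print("quality:", depthData.depthDataQuality == .high ? "high" : "low",
              "accuracy:", depthData.depthDataAccuracy == .absolute ? "absolute" : "relative")
    }
}
```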
Topic:
Spatial Computing
SubTopic:
ARKit
Hello, I am trying to build an AVP app for real-time "zero-latency" spatial video streaming, and I am trying to figure out, at a high level, the best way to do this.
Currently this is my method:
Server sends stereo images via a WebRTC service (i.e., LiveKit)
The WebRTC stream is converted to CVPixelBuffers, which are written to file, played via AVPlayer, and displayed by applying a VideoMaterial to a plane entity.
However, this is a bit hacky, and it seems like it won't be compatible with Apple's spatial experiences. To my understanding, Apple supports HLS streaming for spatial experiences and APMP content. However, HLS (and even Low-Latency HLS) introduces a second or more of latency, likely due to the segmented nature of HLS. Thus, HLS will not work for us.
Another alternative I've considered is streaming the live video via WebRTC from the server to a local computer on the AVP's network, and then using LL-HLS to stream from the local computer to the Vision Pro. Still, it seems like this would introduce latency on the order of seconds.
Is my current approach the best way to implement this? Or could anyone suggest a better way, perhaps something compatible with AVP's spatial experiences?
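For reference, here is a simplified sketch of the display step described above (step 2), once the WebRTC frames have been written to a playable file; the file URL and plane size are placeholders:

```swift
import AVFoundation
import RealityKit
import SwiftUI

// Sketch: play a file-backed asset on a plane entity with VideoMaterial.
struct StreamPlaneView: View {
    var body: some View {
        RealityView { content in
            let url = URL(fileURLWithPath: "/tmp/stream-segment.mov") // placeholder
            let player = AVPlayer(url: url)
            let material = VideoMaterial(avPlayer: player)
            let plane = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                                    materials: [material])
            content.add(plane)
            player.play()
        }
    }
}
```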
Topic:
Spatial Computing
SubTopic:
General
I am using ARKit to detect images in visionOS on Apple Vision Pro. However, I've run into some issues when adding the reference images.
Some of my images sometimes cannot be added correctly. (As you can see in the picture above, the 'orange' cannot be added correctly, but the 'cup' can.) At other times, though, they are added without any problem. I don't know why this happens, and I want them all to be added reliably.
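For context, my tracking setup looks roughly like this. It's a simplified sketch; it assumes the reference images have already been loaded from the asset catalog resource group, and in the real app the session and provider are long-lived properties:

```swift
import ARKit

// Sketch: run image tracking with already-loaded reference images and log anchor events.
func runImageTracking(with referenceImages: [ReferenceImage]) async throws {
    let session = ARKitSession()
    let provider = ImageTrackingProvider(referenceImages: referenceImages)
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // Each update reports whether an image anchor was added, updated, or removed.
        print("Image anchor \(update.anchor.id): \(update.event)")
    }
}
```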
It looks like one week after another AVP is accepted as a nearby device, the pairing expires.
Since we provide our clients with an app for walking through architecture that is meant to keep working indefinitely, it's a shame that non-technical staff have to reconnect five devices every week to keep them working together.
Is there any workaround for this issue, or does it go straight to the wishlist?
Thanks for the support!
Problem Description:
I am developing an application that runs in the Shared Space on Apple Vision Pro using Unity. When using the UI ScrollView (Scroll View) component, I found that the Mask / RectMask2D does not function in the Shared Space.
Scrolling content is not masked or cropped; it extends beyond the view boundary and is displayed directly.
The same UI works correctly across platforms such as Unity Editor, iOS, and macOS, but the issue only occurs in the shared space of Vision Pro.
Reproduction steps:
Create a ScrollView in Unity.
Add a Mask or RectMask2D to the viewport.
Deploy the application to Apple Vision Pro and run it in Shared Space mode.
Observe that scrolling content is not clipped by the mask; the masking is entirely ineffective.
Expected behavior:
The content of ScrollView should be properly clipped by Mask / RectMask2D and should not render outside the mask boundary.
Actual results:
In the shared space of Vision Pro, the mask is ineffective, causing scrolling content to extend beyond the designated area and resulting in severe UI distortion.
Environment information:
Device: Apple Vision Pro
Mode: Shared Space
Unity Version: 6000.0.40f1
visionOS version: visionOS 26.0
Unity PolySpatial Version: 2.0.4
Impact
This issue causes Unity UI to fail to display correctly on Vision Pro, preventing ScrollView from properly clipping content, which impacts the UI experience and interaction effects in practical applications.
Expected Result: When running a Unity app in the shared space of visionOS, the Mask / RectMask2D of ScrollView functions correctly
How can I create 180-degree Apple immersive videos using a game engine?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Hi. I am mixing content destined for Vision Pro, locked to video. I have the AAX installer, but the ASAF video player demonstrated in the QuickTimes is not included in the install package for Pro Tools. Would it be possible to post a link?
I want to let users place 2D/3D “artworks” on detected walls and have them reappear in exactly the same real‑world spot after quitting and relaunching the app (like widgets do, but for my own entities).
Environment:
Xcode 26, visionOS 2.0, RealityKit + ARKitSession/WorldTrackingProvider
Entities are parented to a holder that’s aligned to a wall via plane/mesh raycasts
What I’ve tried:
Create a WorldAnchor at placement, save UUID + full 4×4 transform
On next launch, re-create the WorldAnchor (or set the saved transform) and attach the entity
Gate restore on relocalization/mesh updates and disable all raycast/search after restore
Issue:
After relaunch, placement still resolves relative to current device pose, not the same wall position.
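For reference, the placement/restore flow I'm attempting looks roughly like this. It's a simplified sketch; the persistence key and entity wiring are illustrative:

```swift
import ARKit
import RealityKit
import simd

final class ArtworkAnchorStore {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    func start() async throws {
        try await session.run([worldTracking])
    }

    // Placement: create a world anchor at the wall-aligned transform and remember its ID.
    func place(_ entity: Entity, at wallTransform: simd_float4x4) async throws {
        let anchor = WorldAnchor(originFromAnchorTransform: wallTransform)
        try await worldTracking.addAnchor(anchor)
        UserDefaults.standard.set(anchor.id.uuidString, forKey: "artworkAnchorID")
    }

    // Restore: wait for the saved anchor to be delivered/relocalized, then re-attach the entity.
    func restore(_ entity: Entity) async {
        guard let savedID = UserDefaults.standard.string(forKey: "artworkAnchorID") else { return }
        for await update in worldTracking.anchorUpdates {
            guard update.anchor.id.uuidString == savedID, update.anchor.isTracked else { continue }
            entity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
        }
    }
}
```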
Questions:
Is there a public API in visionOS 2.0 to persist app‑managed world anchors across sessions (room‑fixed), e.g., AnchorStore or equivalent?
If not, what’s the recommended pattern to reliably restore wall‑anchored content?
Are persistence features mentioned for widgets/windows available to third‑party RealityKit entities?
Topic:
Spatial Computing
SubTopic:
General
I'm currently implementing 180° / 360° immersive video for my app.
I easily implemented 360° by applying a VideoMaterial to a flipped (inward-facing) sphere.
But I'm stuck at 180°. I'm trying to implement it by applying the VideoMaterial to a hemisphere (half sphere): I want the material to be visible on the front half of the sphere, with the back half transparent/clear.
Is there any advice, information, or idea for implementing this? Your help would be greatly appreciated.
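In case it helps frame the question, one direction I'm considering is generating a hemisphere mesh myself with MeshDescriptor and applying the VideoMaterial to it. This is only a rough sketch; the segment counts, UV mapping, and triangle winding may need adjusting:

```swift
import RealityKit

// Sketch: build an inward-facing hemisphere (front 180° only) as a MeshResource.
func makeHemisphere(radius: Float = 10, rings: Int = 32, segments: Int = 64) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for ring in 0...rings {
        let v = Float(ring) / Float(rings)
        let phi = v * .pi                      // polar angle: 0 (top) ... pi (bottom)
        for segment in 0...segments {
            let u = Float(segment) / Float(segments)
            let theta = (u - 0.5) * .pi        // longitude: -90° ... +90° (front half only)
            let x = radius * sin(phi) * sin(theta)
            let y = radius * cos(phi)
            let z = -radius * sin(phi) * cos(theta)
            positions.append([x, y, z])
            normals.append(SIMD3<Float>(-x, -y, -z) / radius) // face inward, toward the viewer
            uvs.append([u, 1 - v])
        }
    }
    for ring in 0..<rings {
        for segment in 0..<segments {
            let a = UInt32(ring * (segments + 1) + segment)
            let b = a + UInt32(segments + 1)
            // If the surface renders invisible (back-face culled), reverse this winding.
            indices.append(contentsOf: [a, b, a + 1, a + 1, b, b + 1])
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffer(positions)
    descriptor.normals = MeshBuffer(normals)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}
```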
I want to implement the functionality shown in this video. How should I set up the window?
Hello, I've pre-ordered the Logitech Muse with hopes of developing with it, but have yet to find any documentation relating to the capabilities it will have/any APIs that will be available to take advantage of the Muse. Is anyone aware of what might become available?
Thank you in advance.
Greetings. I am having this issue with a Unity PolySpatial visionOS app.
We have our main Bounded Volume for our app.
We have other Native UI windows that appear when we interact with objects in our Bounded Volume.
If a user closes our main Bounded Volume...sometimes it quits the app. Sometimes it doesn't.
If we go back to the home screen and reopen the app, our main Bounded Volume doesn't always appear, and just the Native UI windows we left open are visible. But, we can sometimes still hear sounds that are playing in our Bounded Volume.
What solutions are there to make sure our Bounded Volume always appears when the app is open?
That title would have made a great WWDC session. Unfortunately, it seems like nothing is new in Reality Composer Pro this year. I've noticed that all versions of the Xcode beta this summer have shipped with Reality Composer Pro version 2.0. There have been slight bumps in the build number, but I haven't found any new features or seen any documentation to indicate that anything has changed.
So the question is: what is the state of Reality Composer Pro? Should we continue to use this tool or start doing everything in code? A huge number of sample projects use Reality Composer Pro, so it seems like Apple is still using it even if they didn't update it this year.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
The initial startup of visionOS 26 after install is glacially slow.
I see no way to scale an entity with a hover effect.
The closest I can find is by using HoverEffectComponent with a shader hover effect. Maybe I can change the scale with a ShaderGraph, but I cannot figure out how.
On Xcode 26 and visionOS 26, Apple provides observable properties for Entity, so we can easily interact with an Entity between the RealityView scene and SwiftUI, but there is an issue:
It's fine to observe an Entity's position and scale properties from a Slider, but the orientation property can't be observed from a Slider.
MacBook Air M2 / Xcode 26 beta6
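For context, this sketch shows the kind of Slider binding I mean, driving the orientation from a plain angle value rather than observing it directly (names are illustrative):

```swift
import RealityKit
import SwiftUI

// Sketch: rotate an entity around Y from a Slider-backed angle value.
struct RotationControl: View {
    let entity: Entity
    @State private var angleDegrees: Double = 0

    var body: some View {
        Slider(value: $angleDegrees, in: 0...360) {
            Text("Y rotation")
        }
        .onChange(of: angleDegrees) { _, newValue in
            entity.orientation = simd_quatf(angle: Float(newValue) * .pi / 180,
                                            axis: [0, 1, 0])
        }
    }
}
```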
We are building an AR experience for deployment on iPhones. We are using Unity, but it looks as if Reality Composer Pro has better features for spatial audio. I am not sure whether Reality Composer Pro can only be used for Vision Pro, or whether it can also be used for deployment on iPhone or iPad.
I have been trying to implement this look where a component looks "pushed in" but I could not find any resources regarding this effect. The closest I got was a combination of a RoundedRectangle and .glassBackgroundEffect(), but this makes the view look pushed out, instead of pushed in.
I was wondering if this is achievable at the SwiftUI level, or even at the UIKit level.
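For reference, this is roughly what I've tried so far; it reads as raised glass rather than recessed (the size and corner radius are placeholders):

```swift
import SwiftUI

// Sketch of the attempt described above: a rounded rectangle with a glass background.
struct RecessedAttempt: View {
    var body: some View {
        RoundedRectangle(cornerRadius: 24)
            .fill(.clear)
            .frame(width: 240, height: 80)
            .glassBackgroundEffect(in: RoundedRectangle(cornerRadius: 24))
    }
}
```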
I want to render a 3D/stereoscopic video in an Apple Vision Pro window using RealityKit/RealityView. The video is left-right stereo. The straightforward approach would be to spawn a quad and give it a custom Shader Graph material that has a CameraIndexSwitch. The CameraIndexSwitch chooses between the right texture vs. the left texture.
https://i.sstatic.net/XawqjNcg.png
The issue I have here is that I have to extract the video frames from my AVSampleBufferVideoRenderer. This should work ok, but not if I'm playing FairPlay content.
So, my question is, how to render stereo FairPlay videos in a SwiftUI RealityView?
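For context, the quad setup I have in mind looks roughly like this. The material path, scene file name, and parameter name are placeholders from my Reality Composer Pro project, and it assumes the standard RealityKitContent package from the visionOS app template:

```swift
import RealityKit
import RealityKitContent

// Sketch: load the Shader Graph material (with the CameraIndexSwitch inside) and put it on a quad.
func makeStereoQuad() async throws -> ModelEntity {
    var material = try await ShaderGraphMaterial(named: "/Root/StereoMaterial",   // placeholder path
                                                 from: "Immersive.usda",          // placeholder scene
                                                 in: realityKitContentBundle)
    // Left/right textures extracted from the video renderer would be set here, e.g.:
    // try material.setParameter(name: "LeftTexture", value: .textureResource(leftTexture))
    let quad = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                           materials: [material])
    return quad
}
```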
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Metal
MetalKit
RealityKit
AVFoundation