We were having an issue where the system rotate and scale gestures (the two-handed gestures, RotateGesture3D and MagnifyGesture) were extremely difficult to trigger in the visionOS simulator.
The solution we found was to:
Launch your app in the simulator
Move the pointer over the 3D object whose rotation and scaling gestures you are testing.
Press and hold the Option key to display the touch points (i.e., the two-handed gesture points).
While keeping the Option key pressed, release the pointer and re-engage it. I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-engage it" simply means lifting the three fingers and placing them on the trackpad again.
If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.
Context if you are interested:
Our issue also occurs in Apple's own gesture sample project, "Transforming RealityKit entities using gestures" (linked below).
Apple's article "Interacting with your app in the visionOS simulator" (linked below) states for two-handed gestures: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points."
This simply no longer works for rotation and scaling gestures.
These gestures used to be much more responsive in Sonoma. Either the article should be updated to match the steps described above, or there is an issue. Our colleague, who is using macOS Sonoma 14.6.1 with the latest release of Xcode, is not having these issues.
Here is the list of configurations (the troubleshooting we tried!) where it is difficult to trigger rotation and scaling gestures in the visionOS simulator:
macOS Sequoia 15.1 Beta, Xcode 16.1 RC with visionOS 2.1
macOS Sequoia 15.1 Beta, Xcode 16.1 RC with visionOS 2.0
macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.1
macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.0
macOS Sequoia 15.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
macOS Sequoia 15.1 Beta, Xcode 16.0 with visionOS 2.0
Completely wiped and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (16.1)
Throughout this troubleshooting I often:
restarted both Xcode and the simulator
erased all derived data
erased all content and settings from the simulators
performed fresh git clones
None of the above worked; only the workaround described above works at the moment. As you can probably deduce, finding the workaround was very time-consuming, and we also wasted some development effort thinking our gesture code was at fault.
Hopefully this will help other devs.
Article Link:
https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link:
https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
Hi there,
I'm trying to merge the mesh anchors into a single mesh, but couldn't find any resources on this. Here is the code where I build a mesh from each mesh anchor and assign it to a model component with a shader graph material.
func run(_ sceneRec: SceneReconstructionProvider) async {
    for await update in sceneRec.anchorUpdates {
        switch update.event {
        case .added, .updated:
            // Get or create the entity for this anchor
            let anchorEntity = anchors[update.anchor.id] ?? {
                let entity = ModelEntity()
                root?.addChild(entity)
                anchors[update.anchor.id] = entity
                return entity
            }()

            // Remove any existing children
            for child in anchorEntity.children {
                child.removeFromParent()
            }

            // Generate the mesh and collision shape from the anchor
            // (use `continue` rather than `return` so one bad anchor doesn't end the loop)
            guard let mesh = try? await MeshResource(from: update.anchor) else { continue }
            guard let shape = try? await ShapeResource.generateStaticMesh(from: update.anchor) else { continue }
            print("Mesh added, vertices: \(update.anchor.geometry.vertices.count), bounds: \(mesh.bounds)")

            // Get the material to use
            var material: RealityKit.Material
            if isMaterialLoaded, let loadedMaterial = self.shaderMaterial {
                material = loadedMaterial
            } else {
                // Use a temporary material until the shader loads
                var tempMaterial = UnlitMaterial()
                tempMaterial.color = .init(tint: .purple.withAlphaComponent(0.5))
                material = tempMaterial
            }

            await MainActor.run {
                anchorEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))
                anchorEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
                // Add a collision component with the static flag - required for spatial interactions
                anchorEntity.components.set(CollisionComponent(
                    shapes: [shape],
                    isStatic: true,
                    filter: .default
                ))
                // Make the entity interactive - enables spatial taps, drags, etc.
                anchorEntity.components.set(InputTargetComponent())
                let shadowComponent = GroundingShadowComponent(
                    castsShadow: true,
                    receivesShadow: true
                )
                anchorEntity.components.set(shadowComponent)
            }
        case .removed:
            // Clean up the entity for anchors that ARKit removes
            anchors[update.anchor.id]?.removeFromParent()
            anchors.removeValue(forKey: update.anchor.id)
        }
    }
}
I then use a spatial tap gesture to set the position parameter in the shader graph material that creates a nice gradient from the tap position on the mesh to the rest of the mesh.
SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        let tappedEntity = value.entity
        // Check if the tapped entity is a child of tracking.meshAnchors
        if isChildOfMeshAnchors(entity: tappedEntity) {
            // Get the local position (in the entity's coordinate space)
            let localPosition = value.location3D
            // Convert to a world position (scene coordinate space)
            let worldPosition = value.convert(localPosition, from: .local, to: .scene)
            print("Tapped mesh anchor at local position: \(localPosition)")
            print("Tapped mesh anchor at world position: \(worldPosition)")
            // Update the material parameter with the tap position
            updateMaterialTapPosition(entity: tappedEntity, position: worldPosition)
        } else {
            print("Tapped entity is not a mesh anchor")
        }
    }
My issue is that because there are several mesh anchors, the gradient often gets cut off at the edge of the mesh generated from a single mesh anchor, as opposed to a nice continuous gradient across the entire scene-reconstructed mesh. I couldn't find any documentation on how to merge meshes from mesh anchors; any tips would be helpful! Thank you!
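In case it helps to see what I mean by merging, below is a minimal, untested sketch of the approach I'm imagining: concatenate every anchor's vertex and index buffers into one MeshDescriptor after transforming the vertices into world space. The mergeAnchors helper name is mine, it assumes 32-bit triangle indices, and it ignores normals and classification data.

import ARKit
import RealityKit

// Untested sketch (my own helper, not an Apple API): fold every MeshAnchor into
// one MeshResource by moving each anchor's vertices into world space and
// concatenating the index buffers.
@MainActor
func mergeAnchors(_ anchorList: [MeshAnchor]) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var triangles: [UInt32] = []

    for anchor in anchorList {
        let geometry = anchor.geometry
        let base = UInt32(positions.count)

        // Read the raw vertex positions and transform them into world space.
        let vertices = geometry.vertices
        let vertexData = vertices.buffer.contents()
        for i in 0..<vertices.count {
            let offset = vertices.offset + i * vertices.stride
            let v = vertexData.loadUnaligned(fromByteOffset: offset, as: (Float, Float, Float).self)
            let world = anchor.originFromAnchorTransform * SIMD4<Float>(v.0, v.1, v.2, 1)
            positions.append(SIMD3(world.x, world.y, world.z))
        }

        // Re-base the triangle indices onto the merged vertex array.
        // Assumes .triangle primitives and bytesPerIndex == 4 (UInt32).
        let faces = geometry.faces
        let indexData = faces.buffer.contents()
        for i in 0..<(faces.count * 3) {
            let index = indexData.loadUnaligned(fromByteOffset: i * faces.bytesPerIndex, as: UInt32.self)
            triangles.append(base + index)
        }
    }

    var descriptor = MeshDescriptor(name: "mergedSceneMesh")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(triangles)
    return try MeshResource.generate(from: [descriptor])
}

A single merged ModelEntity would let the shader-graph gradient run across anchor boundaries, at the cost of rebuilding the whole mesh on every anchor update.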
Hi, I'm developing a virtual camera system using ReplayKit to capture scene video by directly accessing raw video buffers. The capture mechanism works flawlessly when repeatedly starting and stopping video capture within a continuous immersive environment. However, a critical issue arises when interrupting the immersive space:
Step 1: Enter immersive environment and start and stop capture videos(Multiple times with no issues)
Step 2: Press the crown button to exit the immersive environment
Step 3: Return to the immersive space subsequently
Step 4: Attempt to start the video capture
At this point, the startCapture method throws an unexpected error, disrupting the video capture workflow.
This is the Xcode error that I see " [ERROR] -[RPScreenRecorder startCaptureWithHandler:completionHandler:]_block_invoke_2:500 failed to start due to error: Error Domain=com.apple.ReplayKit.RPRecordingErrorDomain Code=-5803 "Recording failed to start" UserInfo={NSLocalizedDescription=Recording failed to start}"
I have tried all the ways I can find to call stopCapture, including onDisappear and other lifecycle hooks, and nothing seems to solve this.
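For reference, this is roughly the shape of the teardown I've been trying (simplified sketch; the modifier name is my own):

import SwiftUI
import ReplayKit

// Simplified sketch of the teardown I've tried: stop any in-flight capture as
// soon as the scene leaves the active state (e.g. when the crown is pressed).
struct CaptureLifecycleModifier: ViewModifier {
    @Environment(\.scenePhase) private var scenePhase

    func body(content: Content) -> some View {
        content.onChange(of: scenePhase) { _, newPhase in
            guard newPhase != .active else { return }
            let recorder = RPScreenRecorder.shared()
            if recorder.isRecording {
                recorder.stopCapture { error in
                    if let error {
                        print("stopCapture failed: \(error)")
                    }
                }
            }
        }
    }
}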
Hello,
I am experimenting with Unity to develop a mixed reality (MR) application for visionOS. I would like to understand the best approach for structuring my project:
Should I build the entire experience in Unity (both Windows and Volumes)?
Or is it better to create only certain elements (e.g., Volumes) in Unity while managing Windows separately in Xcode?
Also, how well do interactions (e.g., pinch, grab…) created in Unity integrate with Xcode?
If I use the PolySpatial plugin, does that allow me to manage all interactions entirely within Unity, or would I still need to handle/integrate part of it in Xcode?
What's worked best for you? Please let me know if you have any recommendations. Thanks!
Hi guys,
I noticed that Apple created a really engaging visual effect for browsing spatial videos in the Photos app. The video appears embedded in a glass panel with glowing edges, and it even shows a parallax effect as you move around. When I tried to display a stereo video using RealityView, however, the video entity always floats above the panel.
May I ask how visionOS implements this effect? Is there an approach to achieve it, or example code I could use in my own project?
Thanks!
Since only the user can take a screenshot using the Apple Vision Pro's top buttons, the only workaround available to an immersive app that needs a screenshot to document the user's creative interior design choices is to:
ask the user to take a screenshot
wait until the user taps a button indicating the screenshot has been taken
ask the user to select the screenshot when the app opens the PhotoPicker
receive the screenshot once the user presses Done.
One wonders why there is no Apple API for doing this in a simple, privacy-protective way, such as:
When called, the Apple API captures the screenshot into Apple-secured memory
The API displays the screenshot to the user with appropriate privacy warnings and asks whether the user wants to:
a. share this screenshot with the app,
b. cancel, or
c. retake the screenshot
If the user approves, the app receives the screenshot.
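Sketched as a hypothetical Swift signature (nothing like this exists today; the name and shape are invented purely for illustration):

import UIKit

// Hypothetical, illustrative only: no such system API exists.
// The system would capture into Apple-secured memory, present the privacy
// prompt with share / cancel / retake options, and return the image only
// on explicit user approval (nil on cancel).
func requestUserApprovedScreenshot() async throws -> UIImage? {
    fatalError("Illustrative only")
}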
We have been using RoomPlan in our app for 2+ years. Through a combination of in-app and manual coaching on scanning best practices, most users are able to achieve high-quality scans on a consistent basis.
In recent weeks, however, we have observed an increase in reports of degraded scanning performance, even from veteran users who had not previously encountered issues. The RoomCaptureView overlay is jittery and crooked, and the resulting scan file has significant issues, even for simple, well-lit rectangular rooms.
It is difficult to troubleshoot these issues given the number of variables at play, and the overall volume of reports is still relatively low, but we'd appreciate any guidance on known issues or workarounds that could help unblock the affected users. I noticed that this post includes an acknowledgement of FB14454922 and FB15035788. Our issues seem slightly different, as the scans are simply inaccurate and jittery without failing outright. I haven't found any other threads on similar issues.
Hi! I have been struggling with this for a little while and most of what I've found has not helped much. I hope to find more success here.
Essentially, I have a model I've made in Blender. I rigged it using Auto Rig Pro, and I've also used ARP to add a Mixamo animation to it. That all works fine in Blender. However, when I try to import this model into RCP, I don't get the animation: the "default subtree animation" is completely empty. I attribute this to my lack of experience in this field, but here's what I've attempted thus far:
Pushing the Mixamo keyframes into an NLA strip. I'm pretty sure this is the correct line of action, but I'm definitely not doing something right.
Baking the animation (?)
I've made sure that I have animation checked when I export the model!
Any ideas or reference projects would be lovely. I haven't really found much that has pushed me in the right direction. This project is, unfortunately, somewhat time-sensitive, so I would appreciate help ASAP. Thank you, and let me know if I can add any more context!
Hi!
I attempted to run a sample project for detecting human pose in photos, which can be found here:
https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision
The project works perfectly when run on my MacBook Pro M1, but it fails on Apple Vision Pro. After selecting a photo, an endless loading screen is presented and the following output is produced in the console:
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout
It seems that VNDetectHumanBodyPose3DRequest is failing on Vision Pro for some reason. Are there any additional requirements for running Vision framework on VisionOS, that I might be missing?
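For reference, the failing call boils down to something like this (simplified; imageURL stands for the photo the user picked):

import Vision

// Simplified version of what the sample runs. On Apple Vision Pro this is
// where the "Failed to initialize 2D Detection Algorithm" output appears.
let request = VNDetectHumanBodyPose3DRequest()
let handler = VNImageRequestHandler(url: imageURL)
try handler.perform([request])
let observations = request.results ?? []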
In several visionOS apps, we readjust our scenes to the user's eye level (their head). But we have encountered issues whereby the WorldTrackingProvider returns bad/incorrect positions for the first several frames.
See the code below, which you can copy-paste into any immersive space. Relaunch the space and observe that the numberOfBadWorldInfos value is inconsistent.
a. what is the most reliable way to get the device's position?
b. is this indeed a bug?
c. are we using worldInfo improperly?
d. as a workaround, in our apps we set to 10 the number of frames to let pass before using worldInfo, should we set our threshold differently?
import ARKit
import Combine
import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

let SUBSYSTEM = Bundle.main.bundleIdentifier!

struct ImmersiveView: View {
    let logger = Logger(subsystem: SUBSYSTEM, category: "ImmersiveView")
    let session = ARKitSession()
    let worldInfo = WorldTrackingProvider()

    @State var sceneUpdateSubscription: EventSubscription? = nil
    @State var deviceTransform: simd_float4x4? = nil
    @State var numberOfBadWorldInfos = 0
    @State var isBadWorldInfoLogged = false

    var body: some View {
        RealityView { content in
            try? await session.run([worldInfo])
            sceneUpdateSubscription = content.subscribe(to: SceneEvents.Update.self) { event in
                guard let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
                    return
                }
                // `worldInfo` does not return correct values for the first few frames (exact number of frames is unknown)
                // - known SO: https://stackoverflow.com/questions/78396187/how-to-determine-the-first-reliable-position-of-the-apple-vision-pro-device
                deviceTransform = pose.originFromAnchorTransform
                if deviceTransform!.columns.3.y < 1.6 {
                    numberOfBadWorldInfos += 1
                    logger.warning("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                } else {
                    if !isBadWorldInfoLogged {
                        logger.info("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                    }
                    isBadWorldInfoLogged = true // stop logging.
                }
            }
        }
    }
}
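As a follow-up to question (d): one variation we're considering, instead of an arbitrary frame counter, is gating on DeviceAnchor.isTracked. This is untested, and we don't know whether it's a guaranteed readiness signal:

// Untested variation on the guard above: only trust the pose once the
// anchor reports itself as tracked, rather than waiting N frames.
guard let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()),
      pose.isTracked else {
    return
}
deviceTransform = pose.originFromAnchorTransform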
I'm using ARKitSession and PlaneDetectionProvider to detect planes. I have a basic process to create an entity for each detected plane, and each one gets a random color for its material.
Each plane is sized based on the bounds of the anchor provided by ARKit.
let mesh = MeshResource.generatePlane(
    width: anchor.geometry.extent.width,
    depth: anchor.geometry.extent.height
)
Then I'm using this to position each entity.
entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
This seems to be the right method, but many (though not all) planes are not where they should be. The sizes look OK, but the X and Y positions are off.
Take this large green plane on the wall. It should span the entire wall, but it is offset along the X axis so that it is pushed to the left of where the center of the anchor is.
When I visualize surfaces using the Xcode debugging tools, the planes are reported where I'd expect them to be.
Can you see what I'm getting wrong here? Full code below
struct Example068: View {
    @State var session = ARKitSession()
    @State private var planeAnchors: [UUID: Entity] = [:]
    @State private var planeColors: [UUID: Color] = [:]

    var body: some View {
        RealityView { content in
        } update: { content in
            for (_, entity) in planeAnchors {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .task {
            try! await setupAndRunPlaneDetection()
        }
    }

    func setupAndRunPlaneDetection() async throws {
        let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
        if PlaneDetectionProvider.isSupported {
            do {
                try await session.run([planeData])
                for await update in planeData.anchorUpdates {
                    switch update.event {
                    case .added, .updated:
                        let anchor = update.anchor
                        if planeColors[anchor.id] == nil {
                            planeColors[anchor.id] = generatePastelColor()
                        }
                        let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
                        planeAnchors[anchor.id] = planeEntity
                    case .removed:
                        let anchor = update.anchor
                        planeAnchors.removeValue(forKey: anchor.id)
                        planeColors.removeValue(forKey: anchor.id)
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }

    private func generatePastelColor() -> Color {
        let hue = Double.random(in: 0...1)
        let saturation = Double.random(in: 0.2...0.4)
        let brightness = Double.random(in: 0.8...1.0)
        return Color(hue: hue, saturation: saturation, brightness: brightness)
    }

    private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
        let mesh = MeshResource.generatePlane(
            width: anchor.geometry.extent.width,
            depth: anchor.geometry.extent.height
        )
        var material = PhysicallyBasedMaterial()
        material.baseColor.tint = UIColor(color)
        let entity = ModelEntity(mesh: mesh, materials: [material])
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        return entity
    }
}
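One thing I noticed while writing this up, but haven't verified: PlaneAnchor's extent carries its own transform (anchorFromExtentTransform), so perhaps the plane needs to be positioned at the extent's center rather than at the anchor's origin. The change in createPlaneEntity would be:

// Unverified idea: compose the extent's own transform so the plane sits at
// the extent's center instead of the anchor's origin.
entity.transform = Transform(
    matrix: anchor.originFromAnchorTransform * anchor.geometry.extent.anchorFromExtentTransform
)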
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the Home Screen. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project.
Replication steps
Open app
Open a window via the push action
Press the Digital Crown
On the Home Screen, select the app's icon again
The pushed window will now be dismissed.
There is a sample project linked here that shows off the issue, including a video of the bug in progress.
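For context, the pushed window is opened with the standard pattern. A simplified sketch of the sample (the "pushed" window id is just from our project):

import SwiftUI

// Simplified shape of the repro. Assumes the app declares a second
// WindowGroup(id: "pushed") in its Scene body.
struct ContentView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push window") {
            // This window is the one that gets dismissed after backgrounding
            // the app and relaunching it from the Home Screen.
            pushWindow(id: "pushed")
        }
    }
}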
Hello,
Thank you for your time. I have a question regarding visionOS app development.
When placing a SwiftUI TextField inside RealityView.attachments, we found that focusing on the field does not bring up the virtual keyboard in front of the user. Instead, the keyboard appears around the user’s lower abdomen area.
However, when placing the same TextField in a regular SwiftUI layer outside of RealityView, the keyboard appears in the correct position as expected. This suggests that the issue is specific to RealityView.attachments.
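For clarity, here is a simplified version of the placement that misbehaves (the attachment id, position, and text binding are just from our test code):

// Simplified repro: a TextField hosted inside RealityView.attachments.
// Focusing this field brings the keyboard up near the user's lower abdomen.
RealityView { content, attachments in
    if let inputPanel = attachments.entity(for: "input") {
        inputPanel.position = [0, 1.4, -1]  // roughly eye level, 1 m in front
        content.add(inputPanel)
    }
} attachments: {
    Attachment(id: "input") {
        TextField("Enter text", text: $text)
            .textFieldStyle(.roundedBorder)
            .frame(width: 300)
    }
}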
We are currently exploring ways to have the virtual keyboard appear directly in front of the user when using TextField inside RealityViewAttachments. If there is any method to explicitly control the keyboard position or any known workarounds—including alternative UI approaches—we would greatly appreciate your guidance.
Best regards,
Sadao Tokuyama
Hello, I'm struggling to interact with a RealityKit entity nested (or at least visually nested) inside another one.
I think I have the same issue as the one described in this StackOverflow post, but I can't manage to reproduce the solution: https://stackoverflow.com/questions/79244424/how-to-prioritize-a-specific-entity-when-collision-boxes-overlap-in-realitykit
What I'd like to achieve is to translate the red box using a DragGesture while still being able to interact with the sphere using a TapGesture. Currently, I can only do one at a time.
Does anyone know the solution?
extension CollisionGroup {
    static let parent: CollisionGroup = CollisionGroup(rawValue: 1 << 0)
    static let child: CollisionGroup = CollisionGroup(rawValue: 1 << 1)
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let boxMesh = MeshResource.generateBox(size: 0.35)
            let boxMaterial = SimpleMaterial(color: .red.withAlphaComponent(0.25), isMetallic: false)
            let boxEntity = ModelEntity(mesh: boxMesh, materials: [boxMaterial])

            let sphereMesh = MeshResource.generateSphere(radius: 0.05)
            let sphereMaterial = SimpleMaterial(color: .blue, isMetallic: false)
            let sphereEntity = ModelEntity(mesh: sphereMesh, materials: [sphereMaterial])

            content.add(sphereEntity)
            content.add(boxEntity)

            boxEntity.components.set(InputTargetComponent())
            boxEntity.components.set(
                CollisionComponent(
                    shapes: [ShapeResource.generateBox(size: SIMD3<Float>(repeating: 0.35))],
                    isStatic: true,
                    filter: CollisionFilter(
                        group: .parent,
                        mask: .parent.subtracting(.child)
                    )
                )
            )

            sphereEntity.components.set(InputTargetComponent())
            sphereEntity.components.set(HoverEffectComponent())
            sphereEntity.components.set(
                CollisionComponent(
                    shapes: [ShapeResource.generateSphere(radius: 0.05)],
                    isStatic: true,
                    filter: CollisionFilter(
                        group: .child,
                        mask: .child.subtracting(.parent)
                    )
                )
            )
        }
    }
}
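The gesture wiring isn't shown above; one direction I've been wondering about (untested) is dedicating each gesture to its own entity, sketched below. boxEntity and sphereEntity would need to be stored somewhere the view can reach them:

// Untested sketch: applied to the RealityView, with each gesture dedicated
// to one entity so the overlapping collision shapes don't compete for input.
.gesture(
    DragGesture()
        .targetedToEntity(boxEntity)
        .onChanged { value in
            // translate the red box here
        }
)
.gesture(
    SpatialTapGesture()
        .targetedToEntity(sphereEntity)
        .onEnded { _ in
            // handle the sphere tap here
        }
)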
Hello,
If you add a ManipulationComponent to a RealityKit entity and then continue executing further instructions, sooner or later you will encounter a crash with the following error message:
Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.
CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro
The problem occurs precisely with this code:
ManipulationComponent.configureEntity(object)
I adapted Apple's ObjectPlacementExample and made the changes available via GitHub.
The desired behavior is that I can add entities to the ManipulationComponent and RealityKit then runs stably, without random crashes.
GitHub Repo
Thanks
Andre
Hi everyone,
I'm creating an educational app that allows doing computational design in an immersive environment on the Vision Pro. The app is free and can be found here:
https://apps.apple.com/us/app/arcade-topology/id6742103633
The problem I have is that the voxel mesh I currently create uses ModelEntity instances, and I recently read that this is terrible for scalability. I already see issues when I use thousands of voxels. I also read that I should take advantage of the GPU and use Metal to that end. I was wondering if someone could point me to a tutorial or article that discusses this. In essence, I need to create a 3D voxel mesh whose voxels update their opacity within an iterative loop.
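For reference, the direction I've been considering (an untested sketch, with my own helper name) is baking the whole grid into a single MeshResource so it renders as one ModelEntity; per-voxel opacity would then have to come from something like a shader-graph material or vertex data rather than per-entity materials:

import RealityKit
import simd

// Untested sketch: bake an entire voxel grid into one MeshResource so RealityKit
// draws a single ModelEntity instead of thousands of them.
func voxelGridMesh(centers: [SIMD3<Float>], size: Float) throws -> MeshResource {
    let h = size / 2
    let corners: [SIMD3<Float>] = [
        [-h, -h, -h], [h, -h, -h], [h, h, -h], [-h, h, -h],
        [-h, -h,  h], [h, -h,  h], [h, h,  h], [-h, h,  h],
    ]
    // Two counter-clockwise triangles per face, front faces pointing outward.
    let cubeIndices: [UInt32] = [
        4, 5, 6, 4, 6, 7,   // front  (+z)
        1, 0, 3, 1, 3, 2,   // back   (-z)
        1, 2, 6, 1, 6, 5,   // right  (+x)
        0, 4, 7, 0, 7, 3,   // left   (-x)
        3, 7, 6, 3, 6, 2,   // top    (+y)
        0, 1, 5, 0, 5, 4,   // bottom (-y)
    ]
    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    for center in centers {
        let base = UInt32(positions.count)
        positions.append(contentsOf: corners.map { center + $0 })
        indices.append(contentsOf: cubeIndices.map { base + $0 })
    }
    var descriptor = MeshDescriptor(name: "voxels")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}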
Thanks!
—Alejandro
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I am also trying to use an AnchorEntity on the controller via RealityKit.
However, none of the three options for Accessory.LocationName, which should be used to define the AnchorEntity target, seems to match the position on the controller that is being sent from ARKit.
The attached picture shows two transforms:
RealityKit - using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit - using originFromAnchorTransform for AccessoryTrackingProvider.
They are not aligned at the same point.
As for the other Accessory.LocationName options: .aim is located at the tip of the controller, and .grip is in the same position as .gripSurface but with a different orientation.
I am wondering why there is no option for Accessory.LocationName that actually matches the transform captured by ARKit.
Hello
RemoteDeviceIdentifier returns nil and therefore crashes the HoverEffect sample project.
I have visionOS 26 beta 2 on both devices.
What is the correct method of running this code sample?
Hi,
we've developed an app for Vision Pro that utilises the GroupActivities SDK to provide shared experiences for our users.
Remote Participation works great, but we can't get nearby sharing to work.
The behaviour we're observing:
User 1 opens the share sheet from a Volume; the second Vision Pro is visible.
User 1 starts nearby sharing
Session initialisation runs for approx. 30 seconds, then fails
Sometimes, the nearby participant doesn't show up at all after the initialisation has failed once.
As stated in the Configure your visionOS app for sharing with people nearby article, we didn't make any changes to our implementation to support nearby sharing.
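For reference, our activation path is the standard GroupActivities flow, roughly as below (MyActivity stands in for our GroupActivity type):

import GroupActivities

// Roughly our activation path; MyActivity stands in for our GroupActivity type.
func startSharing(_ activity: MyActivity) async throws {
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        // With nearby sharing, session initialisation hangs here for ~30 s and fails.
        _ = try await activity.activate()
    case .activationDisabled, .cancelled:
        break
    @unknown default:
        break
    }
}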
Any help would be greatly appreciated.
Kind regards,
David
My ARViewContainer code is not working. I don't know how to debug the issue, and I don't see where my results are going. I need help resolving this; please help me debug. See the code below: