Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under Graphics & Games topic

Each post below is followed by its reply count, boost count, view count, and last-activity date.

PDFKit Page Rerender
I'm experiencing an issue with PDFKit where page.removeAnnotation(annotation) successfully removes the annotation from the page's data structure, but the PDFView no longer updates automatically to reflect the change visually.

Issue details:
- The annotation is removed (verified by checking page.annotations.count).
- The PDFView display doesn't refresh to show the removal.
- This code was working correctly before and suddenly stopped working; no code changes were made on my end.
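A hedged workaround sketch, not from the post: after removing the annotation, explicitly notify the view via PDFView's annotationsChanged(on:) and force a redraw. The function and parameter names here are assumed stand-ins for the poster's objects.

import PDFKit

// Sketch: nudge the view after mutating the page's annotations.
func remove(_ annotation: PDFAnnotation, from page: PDFPage, in pdfView: PDFView) {
    page.removeAnnotation(annotation)
    pdfView.annotationsChanged(on: page)      // tell PDFKit this page is stale
    pdfView.setNeedsDisplay(pdfView.bounds)   // belt-and-braces full redraw
}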
3
1
867
Dec ’25
Memory leak when no draw calls issued to encoder
I noticed that when the render command encoder issues no draw calls, the app's memory usage seems to grow unboundedly. I'm using a super simple MTKView-based setup with the delegate below. If I add the simplest of draw calls, e.g., a single vertex, the app's memory usage is normal, around 100 MB. I am attaching a couple of screenshots, one from Xcode and one from Instruments. What's going on here? Is this an illegal program? If so, why does it not crash, as it would if the encoder or command buffer weren't ended? Or is there some race condition at play due to the lack of draws?

class Renderer: NSObject, MTKViewDelegate {
    var device: MTLDevice
    var commandQueue: MTL4CommandQueue
    var commandBuffer: MTL4CommandBuffer
    var allocator: MTL4CommandAllocator

    override init() {
        guard let d = MTLCreateSystemDefaultDevice(),
              let queue = d.makeMTL4CommandQueue(),
              let cmdBuffer = d.makeCommandBuffer(),
              let alloc = d.makeCommandAllocator()
        else {
            fatalError("unable to create metal 4 objects")
        }
        self.device = d
        self.commandQueue = queue
        self.commandBuffer = cmdBuffer
        self.allocator = alloc
        super.init()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let drawable = view.currentDrawable else { return }
        commandBuffer.beginCommandBuffer(allocator: allocator)
        guard let descriptor = view.currentMTL4RenderPassDescriptor,
              let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor)
        else {
            fatalError("unable to create encoder")
        }
        encoder.endEncoding()
        commandBuffer.endCommandBuffer()
        commandQueue.waitForDrawable(drawable)
        commandQueue.commit([commandBuffer])
        commandQueue.signalDrawable(drawable)
        drawable.present()
    }
}
3
0
407
Jan ’26
Bug Report - Incorrect trackingAreaIdentifier in visionOS 26 Hover Effect Sample Code
Description: In the official visionOS 26 Hover Effect sample code project, I encountered an issue where the event.trackingAreaIdentifier returned by onSpatialEvent does not reset as expected.

Steps to reproduce:
1. Select an object with trackingAreaID = 6 in the sample app.
2. Look at a blank space (outside any tracking area) and perform a pinch gesture.

Expected behavior: event.trackingAreaIdentifier should return 0 when interacting with a non-tracking area.

Actual behavior: event.trackingAreaIdentifier still returns 6, even after restarting the app or killing the process. This persists regardless of where the pinch gesture is performed.
3
0
288
Jul ’25
What good is NSBitmapFormatAlphaNonpremultiplied?
If I create a bitmap image and then try to get ready to draw into it, like so:

NSBitmapImageRep* newRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes: nullptr
    pixelsWide: 128
    pixelsHigh: 128
    bitsPerSample: 8
    samplesPerPixel: 4
    hasAlpha: YES
    isPlanar: NO
    colorSpaceName: NSDeviceRGBColorSpace
    bitmapFormat: NSBitmapFormatAlphaNonpremultiplied | NSBitmapFormatThirtyTwoBitBigEndian
    bytesPerRow: 4 * 128
    bitsPerPixel: 32];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep: newRep]];

then the log shows this error:

CGBitmapContextCreate: unsupported parameter combination: RGB 8 bits/component, integer 512 bytes/row kCGImageAlphaLast kCGImageByteOrderDefault kCGImagePixelFormatPacked
Valid parameters for RGB color space model are:
16 bits per pixel, 5 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipLast
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedLast
32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.

If I don't use NSBitmapFormatAlphaNonpremultiplied as part of the format, I don't get the error message. My question is: why does the constant NSBitmapFormatAlphaNonpremultiplied exist if you can't use it like this?

If you're wondering why I wanted to do this: I want to extract the RGBA pixel data from an image, which might have non-premultiplied alpha. Advice I saw elsewhere online was that if you want to look at the pixels of an image, you should draw it into a bitmap whose format you know and look at those pixels, and I don't want the process of drawing to premultiply my alpha.
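As the error list shows, CGBitmapContext accepts kCGImageAlphaLast only as a decoding format, not as a drawing destination, so a common workaround is to draw premultiplied and then un-premultiply each pixel afterwards. A minimal Swift sketch of that approach (function name is mine; rounding loss at low alpha values is the accepted tradeoff):

import CoreGraphics

// Draw into a premultiplied-last context, then divide the alpha back out.
func nonPremultipliedRGBA(from image: CGImage) -> [UInt8]? {
    let w = image.width, h = image.height
    var pixels = [UInt8](repeating: 0, count: w * h * 4)
    let drew: Bool = pixels.withUnsafeMutableBytes { buf in
        guard let ctx = CGContext(data: buf.baseAddress,
                                  width: w, height: h,
                                  bitsPerComponent: 8,
                                  bytesPerRow: w * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: w, height: h))
        return true
    }
    guard drew else { return nil }
    // Un-premultiply: for each pixel with 0 < alpha < 255, divide RGB back out.
    for i in stride(from: 0, to: pixels.count, by: 4) {
        let a = Int(pixels[i + 3])
        guard a > 0, a < 255 else { continue }  // transparent/opaque: nothing to undo
        for c in 0..<3 {
            pixels[i + c] = UInt8(min(255, Int(pixels[i + c]) * 255 / a))
        }
    }
    return pixels
}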
3
0
167
Jun ’25
CAMetalLayer nextDrawable crash
Hi, my application hits the crash backtrace below at a very low repro rate among public users; I don't see it correlate with a specific iOS version or iPhone model. The last line from my own code is a call to the CAMetalLayer nextDrawable API. From some basic investigation, I suppose it may relate to a wrong CAMetalLayer configuration, such as:

- frame width or height <= 0.0
- bounds width or height <= 0.0
- drawableSize width or height <= 0.0, or width or height greater than the maximum (such as 16384)

Is my thinking above right or not? Could the UIView that my CAMetalLayer is attached to cause such a nextDrawable crash? Thanks a lot.

Main Thread - Crashed
libsystem_kernel.dylib __pthread_kill
libsystem_c.dylib abort
libsystem_c.dylib __assert_rtn
Metal MTLReportFailure.cold.1
Metal MTLReportFailure
Metal _MTLMessageContextEnd
Metal -[MTLTextureDescriptorInternal validateWithDevice:]
AGXMetalA13 0x245b1a000 + 4522096
QuartzCore allocate_drawable_texture(id<MTLDevice>, __IOSurface*, unsigned int, unsigned int, MTLPixelFormat, unsigned long long, CAMetalLayerRotation, bool, NSString*, unsigned long)
QuartzCore get_unused_drawable(_CAMetalLayerPrivate*, CAMetalLayerRotation, bool, bool)
QuartzCore CAMetalLayerPrivateNextDrawableLocked(CAMetalLayer*, CAMetalDrawable**, unsigned long*)
QuartzCore -[CAMetalLayer nextDrawable]
SpaceApp -[MetalRender renderFrame:] MetalRenderer.mm:167
SpaceApp -[FrameBuffer acceptFrame:] VideoRender.mm:173
QuartzCore CA::Display::DisplayLinkItem::dispatch_(CA::SignPost::Interval<(CA::SignPost::CAEventCode)835322056>&)
QuartzCore CA::Display::DisplayLink::dispatch_items(unsigned long long, unsigned long long, unsigned long long)
QuartzCore CA::Display::DisplayLink::dispatch_deferred_display_links(unsigned int)
UIKitCore _UIUpdateSequenceRun
UIKitCore schedulerStepScheduledMainSection
UIKitCore runloopSourceCallback
CoreFoundation __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__
CoreFoundation __CFRunLoopDoSource0
CoreFoundation __CFRunLoopDoSources0
CoreFoundation __CFRunLoopRun
CoreFoundation CFRunLoopRunSpecific
GraphicsServices GSEventRunModal
UIKitCore -[UIApplication _run]
UIKitCore UIApplicationMain
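A defensive sketch along the lines the poster suspects (an assumption, not a confirmed fix): the backtrace dies in MTLTextureDescriptor validation, which asserts on zero-sized or oversized textures, so validating the layer's geometry before requesting a drawable should avoid the assert. The 16384 limit is a common per-device maximum and would ideally be queried from the device's feature set.

import QuartzCore
import Metal

// Skip the frame instead of asserting inside QuartzCore.
func safeNextDrawable(from layer: CAMetalLayer,
                      maxTextureSize: CGFloat = 16_384) -> CAMetalDrawable? {
    let size = layer.drawableSize
    guard size.width >= 1, size.height >= 1,
          size.width <= maxTextureSize, size.height <= maxTextureSize else {
        return nil  // invalid geometry this frame; try again next frame
    }
    return layer.nextDrawable()
}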
3
0
371
Jul ’25
RealityKit animation with bindTarget: .opacity doesn't work
I want to fade objects in and out, and while setting an entity's OpacityComponent works, animating it doesn't seem to do anything. In the following code the second sphere should fade out, but it keeps its initial opacity. On the other hand, the animation that changes its transform works. What am I doing wrong?

class ViewController: NSViewController {
    override func loadView() {
        let arView = ARView(frame: NSScreen.main!.frame)
        let anchor = AnchorEntity(.world(transform: matrix_identity_float4x4))
        arView.scene.addAnchor(anchor)

        let sphere = ModelEntity(mesh: .generateSphere(radius: 0.5))
        anchor.addChild(sphere)
        sphere.components.set(OpacityComponent(opacity: 0.1))

        let sphere2 = ModelEntity(mesh: .generateSphere(radius: 0.5))
        sphere2.position = .init(x: 0.2, y: 0, z: 0)
        anchor.addChild(sphere2)
        sphere2.components.set(OpacityComponent(opacity: 0.1))

        sphere.playAnimation(try! AnimationResource.makeActionAnimation(
            for: FromToByAction(to: 0, timing: .linear),
            duration: 1,
            bindTarget: .opacity))
        sphere.playAnimation(try! AnimationResource.makeActionAnimation(
            for: FromToByAction(to: Transform(translation: SIMD3(x: 0.1, y: 0, z: 0)), timing: .linear),
            duration: 1,
            bindTarget: .transform))

        view = arView
    }
}
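A hedged alternative sketch (an assumption, not a confirmed fix): drive the fade with a FromToByAnimation<Float> bound to .opacity instead of a FromToByAction, since FromToByAnimation is the animation type for animating property bind targets directly.

import RealityKit

// Animate the entity's OpacityComponent via a property-bound animation.
func fadeOut(_ entity: Entity) throws {
    let fade = FromToByAnimation<Float>(to: 0.0,
                                        duration: 1.0,
                                        timing: .linear,
                                        bindTarget: .opacity)
    let resource = try AnimationResource.generate(with: fade)
    entity.playAnimation(resource)
}

Calling try fadeOut(sphere2) in place of the action-based playAnimation call would then target the opacity property directly.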
3
0
203
23h
Xcode Playground - The LLDB RPC server has crashed.
I am trying to learn Metal development on my MacBook Pro M1 Pro (Sequoia 15.3.1) in an Xcode playground, but when I write these two lines of code:

import Metal
let device = MTLCreateSystemDefaultDevice()!

I get the error "The LLDB RPC server has crashed." Any ideas as to what I can do to solve this? I have rebooted the machine and reinstalled Xcode.
3
0
530
Mar ’25
Request: Allow Game Mode to be enabled locally for non-game App Store categories
Hi Apple team,

Game Mode was introduced in iOS 18. To activate Game Mode, an app must include specific key-value pairs in its Info.plist and be categorized as a "Game" on the App Store. My app (https://apps.apple.com/us/app/voidlink/id6747717070) works primarily as a self-hosted game streaming (PC -> iPhone/iPad) client. Game Mode provides clear benefits in latency and frame rate stability, but it can currently only be activated when running via Xcode or TestFlight. I am an individual iOS developer based in China, where an additional government license is required for apps to be listed under the "Game" category on the App Store. Obtaining such a license is very difficult for independent developers, so my app has been categorized under "Utilities" instead. (If I moved the app to the Game category, it would disappear from the Chinese App Store immediately.)

Expectation / Suggestion: Please consider making Game Mode available as a local, user-controllable option on iOS 18/26+, such as through a system "App Pool" where users can choose which apps to enable Game Mode for, regardless of App Store category. This would greatly benefit use cases like streaming clients, benchmarking tools, and remote-play utilities, without requiring developers to reclassify their apps as "Games" on the App Store.
3
1
765
Nov ’25
Unable to find intelgpu_kbl_gt2r0 slice or a compatible one in binary archive
Unable to find intelgpu_kbl_gt2r0 slice or a compatible one in binary archive 'file:///System/Library/PrivateFrameworks/IconRendering.framework/Resources/binary.metallib'

Available slices: applegpu_g13g, applegpu_g13s, applegpu_g13d, applegpu_g14g, applegpu_g14s, applegpu_g14d, applegpu_g15g, applegpu_g15s, applegpu_g15d, applegpu_g16g, applegpu_g16s, applegpu_g17g, applegpu_g15g, applegpu_g15s, applegpu_g15d, applegpu_g16s

Is this related to the performance of applications on macOS 26.2 on Intel Macs?
3
0
261
3w
iOS Metal system delayed one Vsync period to really display the frame on the screen
View layout: Add the following views in a view controller:
- Label
- View A, with a subview of the same size: MTKView A
- View B, with a subview of the same size: MTKView B

Refresh rates of each view: The label view refreshes at 60fps (driven by CADisplayLink). MTKView A and B refresh at 15fps.

MTKView implementation details:
- The corresponding CAMetalLayer's maximumDrawableCount is set to 2, i.e., double buffering.
- The scheduling mechanism is modified: drawing is not driven by the internal loop but is done manually, and the draw call is triggered immediately upon receiving a frame.
  self.metalView.enableSetNeedsDisplay = NO;
  self.metalView.paused = YES;
- A new high-priority queue is created for drawing, instead of handling it on the main queue.

MTKView latency tracking: The GPU completion time T1 is observed through the addCompletedHandler callback of the command buffer. The presentation time T2 of the frame is observed through the addPresentedHandler callback of the currentDrawable in MTKView. Testing shows that T2 - T1 > 16.6ms (the Vsync period at 60Hz). This means that after the GPU rendering in the MTKView is finished, the frame is not displayed at the next Vsync but only at the Vsync after that. I believe there is an extra 16.6ms of latency here, which I want to eliminate by adjusting the rendering mechanism.

Observation from Instruments: The surface presentation aligns with the above test results. After the Metal encoder finishes, the surface in Display switches only after the next-next Vsync. See the image in the link for details.

Questions: To a beginner's understanding, after the MTKView's GPU rendering is finished, the next Vsync should make the frame visible, but this is not what is observed. Does the subview MTKView need to wait for another Vsync cycle to be drawn to the actual display buffer? The label updates its text at 60fps, so the entire interface should be displayed at 60fps. Is the content of the MTKView not synchronized when the display happens?

Explanation of some MTKView code details: Changing from the default triple buffering to double buffering helps reduce the latency introduced by rendering. I trigger the draw method manually instead of using MTKView's own scheduling because MTKView's scheduling is driven by CADisplayLink: if a frame arrives within a Vsync window, it must wait for the next Vsync window to trigger the draw operation, which introduces waiting latency.
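A minimal sketch of the T1/T2 measurement described above (assumed names, not the poster's code): MTLCommandBuffer.gpuEndTime gives the GPU completion time and MTLDrawable.presentedTime gives the on-glass time, on the same timebase. Both handlers must be registered before the command buffer is committed and the drawable is presented.

import Metal
import QuartzCore

func trackLatency(commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
    var gpuEnd: CFTimeInterval = 0
    commandBuffer.addCompletedHandler { buffer in
        gpuEnd = buffer.gpuEndTime                 // T1: GPU finished rendering
    }
    drawable.addPresentedHandler { presented in
        let presentedAt = presented.presentedTime  // T2: frame actually on screen
        // Assumes the completed handler fired first, which is the usual order.
        print(String(format: "present - gpuEnd = %.2f ms",
                     (presentedAt - gpuEnd) * 1000))
    }
}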
3
0
587
Dec ’25
GestureComponent does not support DragGesture
The following code using the new GestureComponent demonstrates an inconsistency: the tap gesture prints output, but the drag gesture does not. I already checked this post, which points to this seemingly outdated sample code. I assume that example is deprecated in favour of the now built-in version of GestureComponent. Nonetheless, there are no compiler warnings or errors; it just fails silently. TapGesture, LongPressGesture, MagnifyGesture, and RotateGesture all work, so this feels like an oversight.

RealityView { content in
    let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
    testEntity.position = SIMD3<Float>(0, 0, -1)
    testEntity.components.set(InputTargetComponent())
    testEntity.components.set(CollisionComponent(
        shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
    ))

    let testGesture = TapGesture()
        .onEnded { value in
            print("Tapped")
        }
    testEntity.components.set(GestureComponent(testGesture))

    let dragGesture = DragGesture()
        .onEnded { value in
            print("Dragged")
        }
    testEntity.components.set(GestureComponent(dragGesture))

    content.add(testEntity)
}
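A hedged workaround sketch (assumed naming, not a confirmed fix): instead of GestureComponent(DragGesture()), attach the drag at the RealityView level and scope it with targetedToEntity(_:), which dispatches to entities carrying InputTargetComponent and CollisionComponent.

import SwiftUI
import RealityKit

struct DragWorkaroundView: View {
    private let box: ModelEntity = {
        let entity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
        entity.position = SIMD3<Float>(0, 0, -1)
        entity.components.set(InputTargetComponent())
        entity.components.set(CollisionComponent(
            shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
        ))
        return entity
    }()

    var body: some View {
        RealityView { content in
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToEntity(box)     // only fires for drags on this entity
                .onChanged { _ in print("Dragging") }
                .onEnded { _ in print("Dragged") }
        )
    }
}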
3
1
431
Jul ’25
JPEG2000 (JP2) Decoding Works on iOS 16 but Fails on iOS 18
I am extracting a JPEG2000 (JP2) facial image from an NFC passport chip (ISO/IEC 19794-5) and attempting to create a UIImage from it. On iOS 16, the following code works fine:

import ImageIO
import UIKit

func getUIImage(from imageData: [UInt8]) -> UIImage? {
    let data = Data(imageData)
    guard let imageSource = CGImageSourceCreateWithData(data as CFData, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
        print("Failed to decode JP2 image!")
        return nil
    }
    return UIImage(cgImage: cgImage)
}

However, on iOS 18, this fails with errors like:

initialize:1415: *** invalid JPEG2000 file ***
makeImagePlus:3752: *** ERROR: 'JP2 ' - failed to create image [-50]
CGImageSourceCreateImageAtIndex: *** ERROR: failed to create image [-59]

Questions:
- Did Apple remove or modify JPEG2000 support in iOS 18?
- Is there an official workaround for decoding JPEG2000 on iOS 18?
- Should I use Vision/Metal/Core Image instead?
- Is there a recommended way to convert JPEG2000 to JPEG/PNG before creating a UIImage?
- Are there any Apple-provided APIs that maintain backward compatibility for JPEG2000 decoding?

Additional info:
- The UInt8 array has a valid JPEG2000 header (0x00 0x00 0x00 0x0C 6A 50 ...).
- The image works on iOS 16 but fails on iOS 18.
- Tested on an iPhone running the iOS 18.0 beta.

Any insights on how to handle JPEG2000 decoding in iOS 18 would be greatly appreciated! 🚀
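A quick diagnostic sketch that may help narrow this down: ImageIO reports the decoder UTIs it supports on the running OS, so checking for the JPEG2000 identifier ("public.jpeg-2000") on both iOS versions would show whether the decoder was removed or merely got stricter.

import ImageIO

let decoders = CGImageSourceCopyTypeIdentifiers() as NSArray as? [String] ?? []
print(decoders.contains("public.jpeg-2000")
      ? "JP2 decoder is advertised on this OS"
      : "JP2 decoder is NOT advertised on this OS")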
3
0
428
Mar ’25
CIBumpDistortion filter not working on my view
I'm trying to apply a CIBumpDistortion Core Image filter to a view that contains a UILabel (my storyLabel). The goal is to create a visual bump/magnifying-glass effect over the text. However, despite my attempts, the filter doesn't seem to render at all: the view and the label appear as normal, with no distortion effect. I've tried adjusting the filter parameters and reviewing the view hierarchy, but without success. I also haven't been able to find clear documentation or examples for applying this filter to a UIView's layer.

//
//  TVView.swift
//  Mistery
//
//  Created by Joje on 31/07/25.
//

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import AVFoundation

final class TVView: UIView {
    // Text-animation properties
    private var textAnimationTimer: Timer?
    private var fullTextToAnimate: String = ""
    private var currentCharIndex: Int = 0

    // Static-video properties
    private var player: AVQueuePlayer?
    private var playerLayer: AVPlayerLayer?
    private var playerLooper: AVPlayerLooper?

    var onNextButtonTap: () -> Void = {}

    // MARK: - Subviews

    // The TV image
    private(set) lazy var tvImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.translatesAutoresizingMaskIntoConstraints = false
        imageView.image = UIImage(named: "tvFinal")
        imageView.contentMode = .scaleAspectFit
        return imageView
    }()

    // The text that scrolls inside the TV
    private(set) lazy var storyLabel: UILabel = {
        let label = UILabel()
        label.translatesAutoresizingMaskIntoConstraints = false
        //label.backgroundColor = .gray
        label.textColor = .red
        label.font = UIFont(name: "MeltedMonster", size: 30)
        label.textAlignment = .left
        label.numberOfLines = 0
        label.text = ""
        return label
    }()

    private(set) lazy var nextButton: UIButton = {
        let button = UIButton(type: .system)
        button.translatesAutoresizingMaskIntoConstraints = false
        //button.backgroundColor = .darkGray
        button.addTarget(self, action: #selector(didPressNextButton), for: .touchUpInside)
        return button
    }()

    // MARK: - Lifecycle

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .black
        setupVideoPlayer()
        addSubviews()
        setupConstraints()
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer?.frame = tvImageView.frame.insetBy(dx: tvImageView.frame.width * 0.05, dy: tvImageView.frame.height * 0.18)
        setupFisheyeEffect()
    }

    private func setupFisheyeEffect() {
        // Create the filter
        guard let filter = CIFilter(name: "CIBumpDistortion") else { return print("error") }
        storyLabel.layer.shouldRasterize = true
        storyLabel.layer.rasterizationScale = UIScreen.main.scale
        // Set the parameters
        filter.setDefaults()
        // Center of the effect
        let center = CIVector(x: storyLabel.bounds.midX, y: storyLabel.bounds.midY)
        filter.setValue(center, forKey: kCIInputCenterKey)
        // Distortion radius
        filter.setValue(storyLabel.bounds.width, forKey: kCIInputRadiusKey)
        // Distortion intensity
        filter.setValue(7, forKey: kCIInputScaleKey)
        storyLabel.layer.filters = [filter]
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // MARK: - Button actions

    @objc private func didPressNextButton() {
        onNextButtonTap()
    }

    @objc private func animateNextCharacter() {
        guard currentCharIndex < fullTextToAnimate.count else {
            textAnimationTimer?.invalidate()
            return
        }
        let currentTextIndex = fullTextToAnimate.index(fullTextToAnimate.startIndex, offsetBy: currentCharIndex)
        let partialText = String(fullTextToAnimate[...currentTextIndex])
        storyLabel.text = partialText
        currentCharIndex += 1
    }

    public func updateStoryText(with text: String) {
        textAnimationTimer?.invalidate()
        storyLabel.text = ""
        fullTextToAnimate = text
        currentCharIndex = 0
        textAnimationTimer = Timer.scheduledTimer(timeInterval: 0.12, target: self, selector: #selector(animateNextCharacter), userInfo: nil, repeats: true)
    }

    // MARK: - Setup methods

    private func setupVideoPlayer() {
        guard let videoURL = Bundle.main.url(forResource: "static-video", withExtension: "mov") else {
            print("Error: could not find the video file static-video.mov")
            return
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player = AVQueuePlayer(playerItem: playerItem)
        // LINE WITH A POSSIBLE ERROR
        playerLooper = AVPlayerLooper(player: player!, templateItem: playerItem)
        playerLayer = AVPlayerLayer(player: player)
        playerLayer?.videoGravity = .resizeAspectFill
        if let layer = playerLayer {
            self.layer.addSublayer(layer)
        }
        player?.play()
    }

    private func addSubviews() {
        self.addSubview(storyLabel)
        self.addSubview(tvImageView)
        self.addSubview(nextButton)
    }

    private func setupConstraints() {
        NSLayoutConstraint.activate([
            // TV image
            tvImageView.centerXAnchor.constraint(equalTo: centerXAnchor),
            tvImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
            tvImageView.widthAnchor.constraint(equalTo: widthAnchor),
            // TV text
            storyLabel.centerXAnchor.constraint(equalTo: tvImageView.centerXAnchor, constant: -50),
            storyLabel.centerYAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            storyLabel.widthAnchor.constraint(equalTo: tvImageView.widthAnchor, multiplier: 0.35),
            storyLabel.heightAnchor.constraint(equalTo: tvImageView.heightAnchor, multiplier: 0.42),
            // TV button
            nextButton.topAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            nextButton.centerXAnchor.constraint(equalTo: self.centerXAnchor, constant: 190),
            nextButton.widthAnchor.constraint(equalToConstant: 100),
            nextButton.heightAnchor.constraint(equalToConstant: 160)
        ])
    }
}

#Preview {
    ViewController()
}
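One likely culprit, noted as context: on iOS, CALayer.filters is documented as unsupported (it only takes effect on macOS), so assigning to storyLabel.layer.filters silently does nothing. A hedged alternative sketch, with helper names that are mine: rasterize the label, distort the bitmap with Core Image, and display the result in an image view instead.

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit

// Snapshot the label, apply CIBumpDistortion, and return the distorted image.
func bumpDistorted(label: UILabel, context: CIContext = CIContext()) -> UIImage? {
    // 1. Rasterize the label into a CIImage.
    let renderer = UIGraphicsImageRenderer(bounds: label.bounds)
    let snapshot = renderer.image { ctx in label.layer.render(in: ctx.cgContext) }
    guard let input = CIImage(image: snapshot) else { return nil }

    // 2. Apply the bump distortion (Core Image's origin is bottom-left).
    let filter = CIFilter.bumpDistortion()
    filter.inputImage = input
    filter.center = CGPoint(x: input.extent.midX, y: input.extent.midY)
    filter.radius = Float(input.extent.width) / 2
    filter.scale = 0.7

    // 3. Render back to a UIImage for display in a UIImageView overlay.
    guard let output = filter.outputImage,
          let cgImage = context.createCGImage(output, from: input.extent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}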
3
0
185
Sep ’25
CGSetDisplayTransferByTable no longer working on macOS Tahoe
For an app of mine I use CGSetDisplayTransferByTable to adjust the gamma table of the device. Since macOS Tahoe, these modifications are silently ignored: the display's actual gamma curve remains unchanged despite the API reporting successful completion. I filed an FB for it a few weeks ago and would love to figure out what could be causing this. FB18559786
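A repro sketch of the call in question (values are assumed for illustration): build a visibly brightened gamma table and apply it to the main display. Per the report, on Tahoe this returns success while the screen stays unchanged.

import CoreGraphics
import Foundation

let display = CGMainDisplayID()
let tableSize = 256
let table: [CGGammaValue] = (0..<tableSize).map { i in
    let x = Float(i) / Float(tableSize - 1)
    return CGGammaValue(pow(x, 0.5))  // gamma 0.5 brightens midtones noticeably
}
let result = CGSetDisplayTransferByTable(display, UInt32(tableSize),
                                         table, table, table)
print("CGSetDisplayTransferByTable returned \(result == .success ? "success" : String(describing: result))")
// CGDisplayRestoreColorSyncSettings() undoes the change afterwards.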
3
1
451
Aug ’25
Core Text incremental redraw glitch: overlapping glyphs during editing
During editing in Pages (or Word) I am getting these glitches (see attachment). It started after the last update, to macOS 26.3 (beta). I also removed two recent installs (BlackHole audio driver and kDrive/Infomaniak), but the trouble is still there.

27" iMac 2020 (Intel), i7 3.8 GHz
AMD Radeon Pro 5500 XT 8 GB
24 GB RAM
macOS Tahoe 26.3 (beta)

I tried restarting in safe mode and checked fonts. I talked to the assistant to get a solution, but no luck. Thanks for any advice, Pieter (not a developer, so please keep it simple 🙏🏻)
3
0
429
4w
Scenekit view and scenekit editor color difference
Hello there, I'm having trouble matching what I see in the SceneKit editor with the output of the resulting scene in an SCNView. For a glitter effect I have set a high value on the diffuse intensity, which looks fine in the editor, but when running the game the colors are much darker. To see if the intensity value is merely capped, I set the same multiplier on the hat below, but it is blown out, which looks to me like there is some grading going on. I have tried switching on HDR rendering, but that didn't make a difference. I tried disabling linear rendering, and that simply made everything darker still, which I expect. Does someone have an idea what else this could be? What rendering is the SceneKit editor using, and how can I match it? Interestingly, when I take a screenshot of the editor window for this post, the image is also blown out... what is going on? :) Thanks so much for any pointers, Seb
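A diagnostic sketch for this kind of mismatch (assumes an SCNView named scnView; not a confirmed explanation): both of these SCNCamera settings change overall brightness and grading, so comparing them between the running view and the editor scene is a reasonable first step.

import SceneKit

func applyDebugCameraSettings(to scnView: SCNView) {
    guard let camera = scnView.pointOfView?.camera else { return }
    camera.wantsHDR = true                   // render through the HDR pipeline
    camera.wantsExposureAdaptation = false   // rule out auto-exposure shifts
}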
3
0
227
Apr ’25