I'm using Python 3.9.6, tensorflow 2.20.0, and tensorflow-metal 1.2.0, and when I try to run
import tensorflow as tf
it gives:
Traceback (most recent call last):
File "/Users/haoduoyu/Code/demo.py", line 1, in <module>
import tensorflow as tf
File "/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow/__init__.py", line 438, in <module>
_ll.load_library(_plugin_dir)
File "/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Reason: tried: '/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file)
If I uninstall tensorflow-metal, the import works without errors. How can I fix this problem?
Hi,
I am developing an iOS application that utilizes Apple’s Foundation Models to perform certain summarization tasks. I would like to understand whether user data is transferred to Private Cloud Compute (PCC) in cases where the computation cannot be performed entirely on-device.
This information is critical for our internal security and compliance reviews. I would appreciate your clarification on this matter.
Thank you.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi everyone, I'm working on an iOS app that uses a Core ML model to run live image recognition. I've run into a persistent issue where the .mlpackage is not being turned into a Swift class: Xcode shows an error in the code, and in carDetection.mlpackage it says the model class has not been generated yet.
What I’ve tried:
Verified Target Membership is checked for carDetectionModel.mlpackage
Confirmed the file is listed under Copy Bundle Resources (and removed from Compile Sources)
Cleaned the build folder (Shift + Cmd + K) and rebuilt
Renamed and re-added the .mlpackage file
Restarted Xcode and re-added the file
Logged bundle contents at runtime, but the .mlpackage still doesn’t appear
The .mlpackage is in Copy Bundle Resources and not in Compile Sources; I just don't know why a Swift class is not being generated for it.
Could someone please give me some guidance on what to do to resolve this issue?
Sorry if my error is a bit naive; I'm pretty new to iOS app development.
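For reference, Xcode only generates the Swift model class when the .mlpackage is processed at build time, i.e. listed in Compile Sources, where the Core ML compiler turns it into a .mlmodelc and emits the class; with the package only in Copy Bundle Resources, no class is generated. A minimal diagnostic sketch, assuming the model is named carDetectionModel as in the post (the resource name is the only assumption here):

import CoreML

// Check at runtime whether a compiled model actually made it into the bundle.
// "carDetectionModel" is the name from the post; adjust if yours differs.
if let url = Bundle.main.url(forResource: "carDetectionModel", withExtension: "mlmodelc") {
    do {
        let model = try MLModel(contentsOf: url)
        print("Loaded model:", model.modelDescription)
    } catch {
        print("Compiled model found but failed to load:", error)
    }
} else {
    print("No compiled .mlmodelc in the bundle; the package was probably not compiled at build time.")
}

If the .mlmodelc is missing, moving the package back into Compile Sources (and out of Copy Bundle Resources) is the first thing to try.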
Topic:
Machine Learning & AI
SubTopic:
Core ML
Hi team,
I’m exploring the Model Context Protocol (MCP), which is used to connect LLMs/AI agents to external tools in a structured way. It's becoming a common standard for automation and agent workflows.
Before I go deeper, I want to confirm:
Does Apple currently provide any official MCP server, API surface, or SDK on iOS/macOS?
From what I see, only third-party MCP servers exist for iOS simulators/devices, and Apple’s own frameworks (Foundation Models, Apple Intelligence) don’t expose MCP endpoints.
Is there any chance Apple might introduce MCP support—or publish recommended patterns for safely integrating MCP inside apps or developer tools?
I would like to see whether I can share my app's data with an MCP server so that other third-party apps and services can integrate with it easily.
Topic:
Machine Learning & AI
SubTopic:
General
Apple's Image Playground primarily performs image generation on-device, but it can use secure Private Cloud Compute for more complex requests that require larger models.
Private Cloud Compute (PCC)
For more complex tasks that require greater computational power than the device can provide, Image Playground leverages Apple's Private Cloud Compute. This system extends the privacy and security of the device to the cloud:
Secure Environment: PCC runs on Apple silicon servers and uses a secure enclave to protect data, ensuring requests are processed in a verified, secure environment.
No Data Storage: Data is never stored or made accessible to Apple when using PCC; it is used only to fulfill the specific request.
Independent Verification: Independent experts are able to inspect the code running on these servers to verify Apple's privacy promises.
Hello,
I am interested in using jax-metal to train ML models using Apple Silicon. I understand this is experimental.
After installing jax-metal according to https://developer.apple.com/metal/jax/, my Python code fails with the following error:
JaxRuntimeError: UNKNOWN: -:0:0: error: unknown attribute code: 22
-:0:0: note: in bytecode version 6 produced by: StableHLO_v1.12.1
My issue is identical to the one reported at https://github.com/jax-ml/jax/issues/26968#issuecomment-2733120325, and is fixed by pinning to jax-metal 0.1.1, jax 0.5.0, and jaxlib 0.5.0.
Thank you!
I have an app that stores lots of data that is of interest to the user. Analogies would be the Photos apps or the Health app.
I'm trying to use the Foundation Models framework to allow users to surface information they find interesting using natural language, for example, "Tell me about the widgets from yesterday" or "Tell me about the widgets for the last 3 days". Specifically, I'm trying to get a date range passed down to the Tool so that I can pull the relevant widgets from the database in the call function.
What is the right way to set up the Arguments to get at a date range?
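As far as I can tell, @Generable does not produce Foundation's Date type directly, so one workable pattern is to model the range as two guided strings and parse them inside call. A minimal sketch; the tool name, field names, and guide wording are illustrative assumptions, and the call signature follows the documented Tool protocol shape with a String output:

import Foundation
import FoundationModels

// Hypothetical tool; names and guide text are illustrative.
struct WidgetLookupTool: Tool {
    let name = "lookupWidgets"
    let description = "Fetches the user's widgets for a date range."

    @Generable
    struct Arguments {
        @Guide(description: "Start of the date range as an ISO 8601 date, for example 2025-06-01")
        var startDate: String

        @Guide(description: "End of the date range as an ISO 8601 date, for example 2025-06-03")
        var endDate: String
    }

    func call(arguments: Arguments) async throws -> String {
        // Parse the model-provided strings into real Dates before hitting the database.
        let formatter = ISO8601DateFormatter()
        formatter.formatOptions = [.withFullDate]
        guard let start = formatter.date(from: arguments.startDate),
              let end = formatter.date(from: arguments.endDate) else {
            return "Could not parse the requested date range."
        }
        // ... query widgets in start...end and summarize them for the model ...
        return "Found widgets between \(start) and \(end)."
    }
}

With guides like these, a prompt such as "Tell me about the widgets from yesterday" should lead the model to fill startDate/endDate itself; resolving relative phrases can be helped by putting today's date in the session instructions.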
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I am running some experiments with WebGPU using the wgpu crate in Rust, and I have some buffers already allocated on the GPU.
Is it possible to use those existing buffers directly as inputs to a predict call in Core ML? I want to avoid GPU-to-CPU download time as much as possible.
Or is there any other way to do something like this? Is it only possible with the new Tensor object that came out with Metal 4?
Hello, I’m seeking clarification on whether Apple provides any framework or API that enables deep integration between Siri and advanced AI assistants (such as ChatGPT), including system-level functions like voice interaction, navigation, cross-platform syncing, and operational access similar to Siri’s own capabilities. If no such option exists today, I would appreciate guidance on the recommended path or approved third-party solutions for building a unified, voice-first experience across Apple’s ecosystem. Thank you for your time and insight.
Hello everyone,
I’m looking for guidance regarding my app review timeline, as things seem unusually delayed compared to previous submissions.
My iOS app was rejected on November 19th due to AI-related policy questions.
I immediately responded to the reviewer with detailed explanations covering:
Model used (Gemini Flash 2.0 / 2.5 Lite)
How the AI only generates neutral, non-directive reflective questions
How the system prevents any diagnosis, therapy-like behavior or recommendations
Crisis-handling limitations
Safety safeguards at generation and UI level
Internal red-team testing and results
Data retention, privacy, and non-use of data for model training
After sending the requested information, I resubmitted the build on November 19th at 14:40.
Since then:
November 20th (7:30) → Status changed to In Review.
November 21st, 22nd, 23rd, 24th, 25th → No movement, still In Review.
My open case on App Store Connect is still pending without updates.
Because of the previous rejection I expected a short delay, but it has now been 5 days total (3 business days) with no progress, which is longer than my past submissions usually took.
I’m not sure whether:
My app is in a secondary review queue due to the AI-related rejection,
The reviewer is waiting for internal clarification,
Or if something is stuck and needs to be escalated.
I don’t want to resubmit a new build unless necessary, since that would restart the queue.
Could someone from the community (or Apple, if possible) confirm whether this waiting time is normal after an AI-policy rejection?
And is there anything I should do besides waiting — for example, contacting Developer Support again or requesting a follow-up?
Thank you very much for your help. I appreciate any insight from others who have experienced similar delays.
I am following this tutorial:
https://apple.github.io/coremltools/docs-guides/source/convert-a-torchvision-model-from-pytorch.html
I have obtained similar results using the Python code.
However, when I view the model in Xcode, the preview's prediction confidence percentage is way off. I suspect this is because the model's output is already a percentage, and Xcode multiplies it by 100 again, leading to this result. Please give me any feedback on how to fix this, thank you.
Hi, guys. I'm writing about Apple Intelligence, and I've reached the point where I have to explain App Intent Domains
https://developer.apple.com/documentation/AppIntents/app-intent-domains
but I noticed that there is a note explaining that these services are not available with Siri. I tried the example provided by Apple at
https://developer.apple.com/documentation/AppIntents/making-your-app-s-functionality-available-to-siri
and I can only make the intents work from the Shortcuts App, but not from Siri.
Is this correct? Are App Intent Domains still not available with Siri?
Thanks
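For what it's worth, the path that does reach Siri today is App Shortcuts: an intent registered through an AppShortcutsProvider can be invoked by voice with its phrases, independent of the assistant-schema domains the note is about. A minimal sketch (intent name and phrase are my own examples):

import AppIntents

struct CheckWidgetIntent: AppIntent {
    static var title: LocalizedStringResource = "Check Widget"

    func perform() async throws -> some IntentResult & ProvidesDialog {
        .result(dialog: "Widget checked.")
    }
}

// Registering an App Shortcut gives Siri a concrete phrase to match,
// which is why intents often work in Shortcuts but stay silent in Siri without this.
struct DemoShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: CheckWidgetIntent(),
            phrases: ["Check widget in \(.applicationName)"],
            shortTitle: "Check Widget",
            systemImageName: "gearshape")
    }
}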
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Is the Foundation Models framework mature enough to take input from the Apple Vision framework and generate responses? Something similar to what Google's Gemini does, although at a much smaller scale and for a very specific niche.
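One way to read "input from Vision": since prompts to the on-device model are text, the usual pattern is to run a Vision request first and hand its labels to the session. A rough sketch under that assumption; the confidence threshold and prompt wording are arbitrary:

import FoundationModels
import Vision

// Classify an image with Vision, then ask the on-device model to describe it.
func describe(imageURL: URL) async throws -> String {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(url: imageURL, options: [:])
    try handler.perform([request])

    // Keep only confident labels and cap the list so the prompt stays small.
    let labels = (request.results ?? [])
        .filter { $0.confidence > 0.6 }
        .prefix(5)
        .map(\.identifier)
        .joined(separator: ", ")

    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "In one sentence, describe a photo that contains: \(labels).")
    return response.content
}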
Hi, I'm currently using Metal Performance Shaders Graph (MPSGraphExecutable) to run neural network inference operations as part of a metal rendering pipeline.
I also tried to profile Neural Engine usage while running inference through MPSGraphExecutable, but the trace shows no sign of Neural Engine activity. However, when I used the Core ML model inspection tool in Xcode and ran a performance report, it was able to use the ANE.
Does MPSGraphExecutable automatically utilize the Apple Neural Engine (ANE) when running inference operations, or does it only execute on GPU?
My model (a Core ML package) was converted from a PyTorch model using coremltools as an ML program, targeting iOS 17.0+.
Any insights or documentation references would be greatly appreciated!
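Not an authoritative answer on MPSGraph's dispatch, but one way to make the ANE question testable is to run the same model through Core ML with the compute units stated explicitly and compare a performance trace. A small sketch; the resource name is hypothetical:

import CoreML

// Allow all compute units (CPU, GPU, ANE); compare against .cpuAndGPU
// to see whether the ANE is what the Xcode performance report was using.
let config = MLModelConfiguration()
config.computeUnits = .all

let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc")! // hypothetical name
let model = try MLModel(contentsOf: url, configuration: config)
print(model.modelDescription)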
I'm on Tahoe 26.1 / M3 MacBook Air. I'm using VNDetectFaceRectanglesRequest as properly as possible, as in the minimal command-line program attached below. For some reason, I always get
MLE5Engine is disabled through the configuration
printed. I couldn't find anything in the developer docs saying that VNDetectFaceRectanglesRequest cannot use the Apple Neural Engine. I assume there is something wrong with my code, but I wasn't able to find any hint in the documentation of where it might be, and I couldn't find the above message online either. I would appreciate your help a lot, and thank you in advance.
The code below accesses the video from AVCaptureDevice.DeviceType.builtInWideAngleCamera. Currently it directly chooses the 0th format, which has the largest resolution (Full HD on my M3 MBA) and the "420v" pixel format (4:2:0 chroma subsampling, video range).
After accessing video, it performs a VNDetectFaceRectanglesRequest. It prints "VNDetectFaceRectanglesRequest completion Handler called" many times, then prints the error message above, then continues printing "VNDetectFaceRectanglesRequest completion Handler called" until the user quits it.
To run it in Xcode: File > New Project > Mac Command Line Tool. Paste in the code below, then click the project root > Targets > Signing & Capabilities > Hardened Runtime > Resource Access > Camera.
A possible explanation could be that Apple's internal Core ML code for this request runs on GPU/CPU only, or that it doesn't accept the 420v format supplied by the MacBook Air camera.
import AVFoundation // AVKit also works here; AVFoundation is all this command-line tool needs
import Vision

var videoDataOutput = AVCaptureVideoDataOutput()
var detectionRequests: [VNDetectFaceRectanglesRequest]?
let videoDataOutputQueue = DispatchQueue(label: "queue")

class XYZ: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate { // would be an NSViewController in a real app

    func viewDidLoad() {
        let session = AVCaptureSession()
        let inputDevice = try! self.configureFrontCamera(for: session)
        self.configureVideoDataOutput(for: inputDevice.device, resolution: inputDevice.resolution, captureSession: session)
        self.prepareVisionRequest()
        session.startRunning()
    }

    // Note: despite the name, this currently just takes formats[0]
    // (the largest-resolution 420v format on this machine) without filtering.
    fileprivate func highestResolution420Format(for device: AVCaptureDevice) -> (format: AVCaptureDevice.Format, resolution: CGSize)? {
        let deviceFormat = device.formats[0]
        print(deviceFormat)
        let dims = CMVideoFormatDescriptionGetDimensions(deviceFormat.formatDescription)
        let resolution = CGSize(width: CGFloat(dims.width), height: CGFloat(dims.height))
        return (deviceFormat, resolution)
    }

    fileprivate func configureFrontCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .unspecified)
        let device = deviceDiscoverySession.devices.first!
        let deviceInput = try! AVCaptureDeviceInput(device: device)
        captureSession.addInput(deviceInput)
        let highestResolution = self.highestResolution420Format(for: device)!
        try! device.lockForConfiguration()
        device.activeFormat = highestResolution.format
        device.unlockForConfiguration()
        return (device, highestResolution.resolution)
    }

    fileprivate func configureVideoDataOutput(for inputDevice: AVCaptureDevice, resolution: CGSize, captureSession: AVCaptureSession) {
        // Sample buffers arrive on videoDataOutputQueue, not the main thread.
        videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
        captureSession.addOutput(videoDataOutput)
    }

    fileprivate func prepareVisionRequest() {
        let faceDetectionRequest = VNDetectFaceRectanglesRequest(completionHandler: { (request, error) in
            print("VNDetectFaceRectanglesRequest completion Handler called")
        })
        // Start with detection
        detectionRequests = [faceDetectionRequest]
    }

    // MARK: AVCaptureVideoDataOutputSampleBufferDelegate
    // Handle delegate method callback on receiving a sample buffer.
    public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        var requestHandlerOptions: [VNImageOption: AnyObject] = [:]
        // Pass camera intrinsics through to Vision when the buffer carries them.
        let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
        if cameraIntrinsicData != nil {
            requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
        }
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        // No tracking object detected, so perform initial detection
        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                        orientation: .up,
                                                        options: requestHandlerOptions)
        try! imageRequestHandler.perform(detectionRequests!)
    }
}

let X = XYZ()
X.viewDidLoad()
// Keep the process alive; capture callbacks keep arriving on videoDataOutputQueue.
sleep(9999999)
Is there an API that allows iOS app developers to use Apple's Foundation Models together with the user's Apple Intelligence ChatGPT extension and the ChatGPT account they are logged into?
I'm trying to provide a real-time question feature that goes through the user's logged-in ChatGPT extension while still leveraging Apple Intelligence's LLM. Is there an API that can interact with that extension's login account?
Hi all! Nice to meet you.
I am planning to build an iOS application that can:
Capture an image using the camera or select one from the gallery.
Remove the background and keep only the detected main object.
Add a border (outline) around the detected object’s shape.
Apply an animation along that border (e.g., moving light or glowing effect).
Include a transition animation when removing the background — for example, breaking the background into pieces as it disappears.
The app Capword has a similar feature for object isolation, and I’d like to build something like that.
Could you please provide any guidance, frameworks, or sample code related to:
Object segmentation and background removal in Swift (Vision or Core ML); see the sketch after this list.
Applying custom borders and shape animations around detected objects.
Recognizing the object name (e.g., “person”, “cat”, “car”) after segmentation.
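For the segmentation and background-removal item, Vision's foreground instance mask request (iOS 17+) can lift the main subject without a custom model. A minimal sketch; the function name and the CIImage-based flow are my own framing, not code from Capword:

import CoreImage
import Vision

// Lift the main subject from an image, returning it with the background removed.
// Returns nil if Vision finds no foreground instances.
func subjectImage(from ciImage: CIImage) throws -> CIImage? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first else { return nil }

    // Produce an image of every detected instance with the background masked out.
    let masked = try observation.generateMaskedImage(
        ofInstances: observation.allInstances,
        from: handler,
        croppedToInstancesExtent: false)
    return CIImage(cvPixelBuffer: masked)
}

For the animated outline, one route is to trace the mask's edge into a path and animate a stroked shape over it; for naming the object, running a classification request on the masked image is a reasonable first try.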
Thank you very much for your support.
Best regards,
SINN SOKLYHOR
Hi,
I was using Foundation Models in my app, and suddenly it just stopped working from one moment to the next.
To double-check, I created a small test in Playgrounds, but I’m getting the exact same error there too.
#Playground {
    let session = LanguageModelSession()
    let prompt = "please answer a word"
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch {
        print("error is \(error)")
    }
}
error is Error Domain=FoundationModels.LanguageModelSession.GenerationError Code=-1 "(null)" UserInfo={NSMultipleUnderlyingErrorsKey=(
"Error Domain=ModelManagerServices.ModelManagerError Code=1026 \"(null)\" UserInfo={NSMultipleUnderlyingErrorsKey=(\n)}"
)}
I’m no longer able to get any response from the framework anywhere, even in a fresh project. It's been 5 days.
Has anyone else experienced this issue or knows what could be causing it?
Thanks in advance!
Tahoe 26.2 beta 1, Xcode 26.1.1, iPhone Air simulator 26.1
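One diagnostic worth running before anything else is the documented availability check, which reports why the model cannot be used. A short sketch; the exact unavailability reasons surfaced may vary by OS version:

import FoundationModels

// Ask the system model whether it is usable, and if not, why.
switch SystemLanguageModel.default.availability {
case .available:
    print("Model is available")
case .unavailable(let reason):
    // Reasons include an ineligible device, Apple Intelligence being
    // turned off, or model assets not being ready (e.g. still downloading).
    print("Model unavailable: \(reason)")
}

Given that the error above comes from ModelManagerServices, it looks more like a model/asset availability problem than anything in your code, which would fit the fresh-project symptom.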
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
My app uses App Intents. When the user said "Prüfung der Bluetooth Funktion" ("check of the Bluetooth function"), the screen showed the whole phrase, but my app only received "Bluetooth Funktion". This behaviour only happens in the German version; in the English version everything works well.
Can anyone support me? Why does the German version of Siri cut off my words?
I have been able to train an adapter on Google's Colaboratory.
I am able to start a LanguageModelSession and load it with my adapter.
The problem is that after one simple prompt, the context window is 90% full.
If I start the session without the adapter, the same simple prompt consumes only 1% of the context window.
Has anyone encountered this? I asked Claude, and it seems to think my training script needs adjusting. Grok, on the other hand, is convinced (wrongly; I tried) that I just need to tweak some parameters of LanguageModelSession or SystemLanguageModel.
Thanks for any tips.
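For comparison, the adapter-loading path being described looks roughly like the following. Treat the Adapter initializer and file extension as assumptions from my reading of the adapter training toolkit docs, not verified API:

import Foundation
import FoundationModels

// Hypothetical adapter location; in practice this comes from your bundle or a download.
let adapterURL = URL(fileURLWithPath: "/path/to/myAdapter.fmadapter")

// Assumed API shape: wrap the trained adapter, then build a session on top of it.
let adapter = try SystemLanguageModel.Adapter(fileURL: adapterURL)
let model = SystemLanguageModel(adapter: adapter)
let session = LanguageModelSession(model: model)

// The report above: the same short prompt fills ~90% of the context window
// with the adapter attached versus ~1% without, which points at the adapter
// artifact (e.g. the training configuration) rather than session parameters.
let response = try await session.respond(to: "Hello")
print(response.content)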
Topic:
Machine Learning & AI
SubTopic:
Foundation Models