Hello, developers!
Xcode 26.3 unlocks the power of agentic coding!
Get ready to explore the new features that were released last Tuesday. If you haven't already, you can download Xcode 26.3 from the Apple Developer Downloads page.
Check out the official documentation for getting started and how to configure agents in Xcode:
Writing code with intelligence in Xcode
Generate code, fix bugs fast, and learn as you go with intelligence built directly into Xcode
https://developer.apple.com/documentation/xcode/writing-code-with-intelligence-in-xcode/
Setting up coding intelligence
Enable third-party coding tools that you want to use in the coding assistant
https://developer.apple.com/documentation/xcode/setting-up-coding-intelligence
Giving external agentic coding tools access to Xcode
Let agentic coding tools access your project and Xcode capabilities using the Model Context Protocol
https://developer.apple.com/documentation/xcode/giving-agentic-coding-tools-access-to-xcode
Code-along: Experiment with coding intelligence in Xcode 26
Presented live at a Meet with Apple event: Learn how coding intelligence features can help you write code and fix errors with example prompts and resources.
We are also introducing a new forums tag: Coding intelligence ✨. Use this tag when discussing these new features to increase their visibility on the forums and help others chime in.
Coding intelligence
Enhance your development workflow with coding intelligence features that help you write code, generate tests and documentation, fix errors, refactor existing implementations, and navigate codebases.
Posts under Coding intelligence tag
14 Posts
I am using the official MCP SDK. According to the official specification, servers MUST provide structured results that conform to the declared output schema: https://modelcontextprotocol.io/specification/draft/server/tools#output-schema
I can see the output schema defined, but the result has no structured content.
Current output schema:
{
  "name": "XcodeListWindows",
  "title": "List Windows",
  "description": "Lists the current Xcode windows and their workspace information",
  "inputSchema": {
    "type": "object",
    "properties": {},
    "required": []
  },
  "outputSchema": {
    "type": "object",
    "properties": {
      "message": {
        "type": "string",
        "description": "Description of all open Xcode windows"
      }
    },
    "required": ["message"]
  }
}
Current response:
{
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\"message\":\"* tabIdentifier: windowtab1, workspacePath: \\xxx\\n* tabIdentifier: windowtab2, workspacePath: \\xxx\\n\"}"
      }
    ]
  }
}
Expected:
{
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\"message\":\"* tabIdentifier: windowtab1, workspacePath: \\xxx\\n* tabIdentifier: windowtab2, workspacePath: \\xxx\\n\"}"
      }
    ],
    "structuredContent": {
      "message": "* tabIdentifier: windowtab1, workspacePath: \\xxx\\n* tabIdentifier: windowtab2, workspacePath: \\xxx\\n"
    }
  }
}
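In other words, the fix is for the server's tool handler to return a structuredContent field alongside the content array whenever an outputSchema is declared. A minimal sketch in plain TypeScript, independent of the SDK's actual handler signature (the buildToolResult helper is hypothetical, purely to illustrate the required shape):

```typescript
// Shape of the text fallback block and the full tool result.
interface TextBlock { type: "text"; text: string; }
interface ToolResult {
  content: TextBlock[];
  structuredContent: Record<string, unknown>;
}

// Hypothetical helper: given the tool's output object, build a result that
// carries both the serialized text block and the structuredContent field
// required when an outputSchema is declared.
function buildToolResult(output: Record<string, unknown>): ToolResult {
  return {
    // Text fallback for clients that don't read structured results.
    content: [{ type: "text", text: JSON.stringify(output) }],
    // Structured result that must validate against the declared outputSchema.
    structuredContent: output,
  };
}

// Example shaped like the XcodeListWindows tool's output.
const result = buildToolResult({
  message: "* tabIdentifier: windowtab1, workspacePath: ...",
});
console.log(result.structuredContent);
```

The key point is that structuredContent is the raw output object itself, while content carries a serialized copy for clients that only consume text.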
Topic:
Developer Tools & Services
SubTopic:
Xcode
Tags:
Xcode
Apple Intelligence
Coding intelligence
I am looking to optimize my AI-assisted workflow within Xcode.
Previously, my process was inefficient:
Manually selecting and copying code snippets from Xcode into Gemini.
Asking a specific question (e.g., "Modify this to show an alertError message box").
Copying the result back into Xcode.
I attempted to switch to the new native Intelligence feature in Xcode to streamline this, but I found significant shortcomings:
Latency: The response time is noticeably slow, much slower than asking Gemini 3 Pro directly.
Lack of Context: The AI often fails to grasp the full project context. For example, it frequently claims it cannot see the code for ScannerView even though it is part of the project. I often have to prompt it multiple times before it finally "finds" the file.
Is Xcode's Intelligence feature actually production-ready yet?
If not, what tools do you recommend that integrate well with iOS development?
To be clear, I am not looking for "vibe coding." I have a clear grasp of the problem and the high-level solution. My goal is to delegate the low-level implementation to the AI. I need a tool that has full project context from the start, eliminating the need to manually copy-paste snippets into a chat window.
Steps to reproduce:
Open Xcode 26.3 → Settings → Intelligence → Claude sign-in
Click the sign-in button — spinner begins, never completes
An email then arrives with a magic link.
The magic link opens a browser page displaying a 6-digit verification code, with instructions reading "enter this verification code where you first tried to sign in" (i.e., back in Xcode). However, Xcode shows only an endless spinner, with no code-entry field anywhere in the UI. This is the core bug.
I did eventually manage to complete sign-in through a second browser verification field that appeared after about 10 minutes, but the Claude Intelligence agent still returns "Your request could not be completed" even after a successful sign-in and a full Xcode restart.
Prior to this bug starting at 10 am on February 19, I had been using the intelligence agent successfully for about a week. Anthropic had some sort of incident on their systems around February 18/19, so this may be a result of that.
I have notified Anthropic support and Apple Feedback Assistant.
Does anybody have a workaround until either Anthropic or Apple gets back to me?
Environment:
Xcode 26.3 RC
macOS 26.3 (25D125) (Apple Silicon / arm64)
Setup:
Xcode > Settings > Intelligence > Claude Agent: Signed In (account status shows "Signed In")
Model: Default
Steps to reproduce:
Open a new chat in Xcode's Coding Assistant
Select "Claude Agent" from the agent dropdown (instead of "Claude Sonnet 4.5")
Send any message (e.g. "HI")
Expected result:
Claude Agent responds normally.
Actual result:
The message is sent but immediately returns the error:
"Your request couldn't be completed."
🚩I have an active Claude Code subscription with remaining usage. Running claude in Terminal works perfectly, confirming the subscription and quota are valid.
I work on some proprietary codebases and can only use private AI services with them (currently MiniMax M2.1 and GLM 4.7). It all works great with both Claude Code and OpenCode agents, and I'd like to leverage the new agentic capabilities that are now in Xcode 26.3.
I'm not seeing any option to connect to OpenCode, and both the Anthropic and OpenAI providers require an enterprise account (which I don't have access to).
Are there any options that I'm missing here?
Topic:
Developer Tools & Services
SubTopic:
Xcode
Tags:
Developer Tools
Apple Intelligence
Coding intelligence
Hi,
I'm a novice on Xcode and I have just created an API Key in order to use Perplexity as a model Provider in Xcode 26.4 (beta).
The key is valid, and Perplexity support replied with this:
_The issue is that Xcode expects specific endpoint structures that don't fully match Perplexity's API. Xcode requires a /v1/models endpoint to list available models, which Perplexity doesn't currently provide in that format. Our API uses /chat/completions rather than /v1/chat/completions as well._
Is there a workaround to integrate it and get the benefits of new Intelligence features with Perplexity?
The error message is "Provider is not valid - Models could not be fetched with the provided account details".
Thanks for your help
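Until there's native support, one common workaround for this kind of endpoint mismatch is a small local proxy that rewrites the paths Xcode requests onto the paths the provider actually serves, then pointing Xcode's provider URL at the proxy. This is only a sketch under the assumptions from the support reply above (api.perplexity.ai as the upstream host is my assumption, and a /v1/models response would still need to be synthesized, which this does not show):

```typescript
import http from "node:http";
import https from "node:https";

// Map the paths Xcode requests onto the paths the provider serves.
// Assumption: stripping the "/v1" prefix is all that is needed.
function rewritePath(path: string): string {
  return path.startsWith("/v1/") ? path.slice(3) : path;
}

// Forward each local request to the upstream host with the rewritten path.
const proxy = http.createServer((req, res) => {
  const upstream = https.request(
    {
      host: "api.perplexity.ai", // assumed upstream host
      path: rewritePath(req.url ?? "/"),
      method: req.method,
      headers: { ...req.headers, host: "api.perplexity.ai" },
    },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(upstream);
});

// proxy.listen(8080); // then point the provider URL in Xcode at http://localhost:8080
```

Since Perplexity reportedly has no models-listing endpoint at all, a real proxy would also have to answer /v1/models itself with a hand-written model list for Xcode's provider validation to pass.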
In the Coding Intelligence feature introduced in Xcode 26, when I send a message using ChatGPT in Xcode, the message “Your request couldn't be completed. Networking error.” appears and I’m unable to use the feature.
I suspect the issue may be related to the VPN or network proxy connected to my Mac and am attempting to investigate.
However, Xcode does not display any specific error details, nor does it provide a way to view them, which makes a detailed investigation difficult.
Next to the error message, there is a feedback button rather than a stethoscope (🩺) button, and the feedback window does not provide access to the underlying error information.
Is there a way to view the detailed network error logs generated by ChatGPT in Xcode? (I am using Xcode 26.0.1.)
This was maddening: I had written a very long and detailed prompt, then went to select "Codex" from the drop-down menu at the top ("Start a new Conversation"), and it wiped out the entire prompt I was working on.
Well, of course it did: that drop-down is part of the "Start a new Conversation" icon button, but its position is too far away from the actual icon to infer that this is its purpose. It should be a popup that shows the current choice of model (GPT-5, GPT 4.1, Codex).
ANOTHER lost-prompt problem: if you have a prompt in progress (the AI is maybe building some code) and Xcode crashes, it might not save that prompt for when you re-launch.
And how might that happen? Well, there are still numerous ways using the coding assistant might crash. One I've noticed: if you switch out to another app, or if you invoke a screen capture, it might crash.
Been playing with Claude Code in Xcode 26.3 as a code assistant and discovered it cannot create files in nested folders, only in the repo root. Letting Claude troubleshoot itself, it appears that str_replace_based_edit_tool strips forward slashes from the path parameter. This is what Claude claims to send:
<invoke name="str_replace_based_edit_tool">
<parameter name="command">create</parameter>
<parameter name="path">/repo/Test App/EntityTestsAttempt3.swift</parameter>
<parameter name="file_text">
// ... file contents ...
</parameter>
</invoke>
And the file that gets created is:
Test AppEntityTestsAttempt3.swift
Anyone else run into this before or can reproduce it?
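For what it's worth, the observed filename is exactly what you'd get if the path were resolved relative to the repo root and the separators were then dropped. A small sketch of that hypothesis (pure illustration, not Xcode's actual code):

```typescript
// Hypothesis: the tool resolves the path relative to the repo root, then
// loses the "/" separators when writing the file.
function observedFilename(repoRoot: string, path: string): string {
  const relative = path.startsWith(repoRoot + "/")
    ? path.slice(repoRoot.length + 1)
    : path;
  return relative.split("/").join(""); // separators stripped
}

console.log(observedFilename("/repo", "/repo/Test App/EntityTestsAttempt3.swift"));
// "Test AppEntityTestsAttempt3.swift", matching the file that gets created
```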
We received many great questions from the community during Code-along: Experiment with coding intelligence in Xcode 26. Here are the highlights from the questions submitted by the audience during the event.
What models do the coding intelligence features support in Xcode?
Xcode integrates directly with ChatGPT and Claude user accounts. You can also configure Xcode to integrate any model provider that supports the Chat Completions API, such as models that you access with an API key. You can also download and run a local model on a Mac with Apple silicon. Setting up coding intelligence provides all of the information you need to get started with Xcode’s direct integration with ChatGPT and Claude, as well as how to set up Xcode to access other providers.
Does Coding Intelligence have access to Apple API and developer documentation? How does it stay up to date with the latest SwiftUI API?
Coding agents are great because they talk to a model, generate code, and fix errors, but they also have access to tools, which make a significant difference in their capabilities. Xcode provides tools for the agent to use, including the ability to search Apple's documentation and code snippets. As an example, you can ask for a new API released in iOS, and if the model doesn't have this knowledge, the agent will call the tool to search the documentation and bring that context into the conversation.
As an organization, we do not have permission to share our codebase with any AI model due to security reasons. If we enable coding intelligence and give it access to our codebase, will the code be shared with Apple? Will OpenAI or Anthropic have access to my entire project?
Privacy is fundamental to our design. When you connect an AI subscription account (like an account with OpenAI or Anthropic), the connection is only between you and that service. Apple does not act as an intermediary, and never sees the code sent to these services. Because this interaction happens directly between you and your provider, the security and privacy of your code are determined entirely by your existing agreement with that provider.
What exactly does the agent have access to? Only the files in the scope of the project?
By default, the agent works in the directory of your project and can access all files in that directory. This includes code files, assets, and your Xcode project configuration file.
Does coding intelligence remember previous conversations?
Each time you start a new conversation window, the context of the conversation resets, but you can always go back to a previous conversation and continue iterating on an earlier idea. If a result contains errors or did not go in a direction you are happy with, you can use the history to go back to any point in time. Writing code with intelligence in Xcode goes into detail on how to use these features.
For command-line tools (Claude Code, OpenAI Codex), I can create guidelines as Markdown with rules or project descriptions. Does Xcode Agentic Coding take these into account, or should I define them differently?
You can add skills that you've created, hints about Xcode and your project in configuration files, and other files supporting your use of coding intelligence (such as AGENTS.md or CLAUDE.md files) to the respective Codex and Claude Agent directories that Xcode uses exclusively:
~/Library/Developer/Xcode/CodingAssistant/codex
~/Library/Developer/Xcode/CodingAssistant/ClaudeAgentConfig
For more information on configuring agentic coding tools that run inside Xcode, see Customize the Codex and Claude Agent environments.
How do you add a Model Context Protocol server for the Xcode agent to be able to access?
You can add additional Model Context Protocol (MCP) servers in Xcode with product-specific configuration files so the agents can use those MCP servers from within Xcode. Locate the files needed to configure those additional MCP servers in the same directory where you set other customizations for Codex or Claude Agent, listed above.
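Assuming the Codex agent reads its standard config.toml format from that directory, an MCP server entry might look like the following (the server name and command are placeholders, not a real server):

```toml
# ~/Library/Developer/Xcode/CodingAssistant/codex/config.toml
[mcp_servers.my_server]            # placeholder server name
command = "/usr/local/bin/my-mcp"  # placeholder launch command
args = []
```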
Under Xcode -> Settings -> Intelligence -> Account it states "Not Signed In". An Anthropic API key or claude.ai account is N/A.
The documentation is pretty sparse on actual details.
https://developer.apple.com/documentation/xcode/setting-up-coding-intelligence
Related:
https://developer.apple.com/forums/thread/814595
I’m trying to use MCP servers with Xcode 26.3 Coding Intelligence (Codex agent). With a project-scoped config file at /.codex/config.toml, MCP servers are not reliably loaded.
/.codex/config.toml
Example:
[mcp_servers.Notion]
url = "https://mcp.notion.com/mcp"
enabled = true
[mcp_servers.XcodeBuildMCP]
command = "/bin/zsh"
args = ["-lc", "/opt/homebrew/bin/npx -y xcodebuildmcp@beta mcp"]
enabled = true
tool_timeout_sec = 10000
Expected:
Xcode consistently loads MCP servers defined in /.codex/config.toml across restarts.
Actual:
Xcode often only exposes xcode-tools. In some sessions MCP servers appear, but after restarting Xcode they may disappear. The global file at ~/Library/Developer/Xcode/CodingAssistant/codex/config.toml also seems managed/rewritten by Xcode and isn’t reliable for custom MCP servers.
Questions
Is /.codex/config.toml the official/supported way to configure MCP servers for Codex in Xcode right now?
Are there any requirements for Xcode to load it (e.g. workspace must be Trusted, open .xcworkspace vs .xcodeproj, full restart/force quit, etc.)?
Is there any logging/diagnostics to understand why the MCP server is not starting or not being picked up?
Let's say I use an SDK that has some custom components, e.g. Liquid Glass but with a neon background (just for fun). This SDK has comprehensive API docs. Currently, Xcode provides the DocumentationSearch MCP tool only for Apple APIs. Is there a way to enhance API search beyond Apple SDKs, for example to see the definitions in an xcframework?
If not, can Xcode connect to local/hosted MCP servers, or do these have to be configured in the Claude settings?